Two very dumb questions, I’m sure... [help]

Computers have signed and unsigned numbers. For signed numbers, virtually all modern machines use two's complement, in which the most significant bit tells you whether the number is negative.

As a contrived example, a 4-bit unsigned number (0000 to 1111) represents 0 to 15. A 4-bit two's complement number represents -8 to 7. 0000 to 0111 still represent 0 to 7, then 1111 down to 1000 represent -1 down to -8. Note that two's complement has no negative zero: 1000 in this example is -8, not -0. (The representation where 1000 would be -0 is sign-magnitude, which is rarely used for integers.)
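
A minimal C sketch of that mapping, sign-extending the 4-bit patterns by hand since standard C has no 4-bit integer type:

```c
#include <stdio.h>

int main(void) {
    /* Walk every 4-bit pattern and show its unsigned and
       two's complement interpretations. If bit 3 (the sign bit)
       is set, the two's complement value is the pattern minus 16. */
    for (int bits = 0; bits <= 15; bits++) {
        int as_signed = (bits & 0x8) ? bits - 16 : bits;
        printf("%d%d%d%d  unsigned=%2d  signed=%2d\n",
               bits >> 3 & 1, bits >> 2 & 1, bits >> 1 & 1, bits & 1,
               bits, as_signed);
    }
    return 0;
}
```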

Absolute value can be done with bit arithmetic, but in two's complement you can't just clear the most significant bit (that would turn 1111, which is -1, into 0111, which is 7). Instead, a common branch-free trick builds a mask y that is all ones when x is negative and all zeros when it isn't, then computes (x XOR y) - y.
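
Here's a sketch of that trick in C for 32-bit ints. It assumes the compiler does an arithmetic right shift on signed values (technically implementation-defined, but true of every mainstream compiler), and like the standard abs(), it overflows for INT32_MIN:

```c
#include <stdint.h>

/* Branch-free absolute value for a 32-bit two's complement int. */
int32_t abs32(int32_t x) {
    int32_t y = x >> 31;  /* arithmetic shift: y = -1 (all ones) if x < 0, else 0 */
    return (x ^ y) - y;   /* negative: flip all bits, add 1; non-negative: unchanged */
}
```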

Using a 4-bit signed int again, consider x = 0111 (7) and x = 1001 (-7). The mask is y = x >> 3 (an arithmetic shift that copies the sign bit into every position), and we compute (x XOR y) - y.

For x = 0111, y = 0000: (0111 XOR 0000) - 0000 = 0111 - 0000 = 0111

For x = 1001, y = 1111: (1001 XOR 1111) - 1111 = 0110 - 1111 = 0111

0110 - 1111 = 0111 may seem counterintuitive, but remember these are two's complement numbers, so 0110 - 1111 really means 6 - -1, or 6 + 1 = 7.
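
And a quick sanity check in C, looping over every 4-bit value whose absolute value also fits in 4 bits (the trick maps -8 back to -8, since +8 isn't representable, just like abs(INT_MIN) at full width):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Compare (x ^ y) - y against the library abs() for every 4-bit
       two's complement value except -8, whose magnitude (+8) doesn't
       fit in 4 bits. Assumes arithmetic right shift on signed ints. */
    for (int x = -7; x <= 7; x++) {
        int y = x >> (8 * sizeof(int) - 1);  /* sign mask: -1 or 0 */
        printf("x=%2d  y=%2d  (x^y)-y=%2d  abs=%2d\n",
               x, y, (x ^ y) - y, abs(x));
    }
    return 0;
}
```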
