Some values can't fit within the number of bits allotted for a number. For example, 16 can't be expressed in fewer than 5 bits. If one of the bits is assigned to be the sign, the range of numbers that can be encoded is restricted even further. Thus, a 4-bit sign-magnitude form, as shown in the second column above, can only accommodate 3 actual bits of binary number, since one of the 4 is the sign. 111₂ is 7, so 4-bit sign-magnitude form can only accommodate -7 to +7. Likewise, 8-bit sign-magnitude form can only accommodate -127 to +127, since 01111111 is 127. What is the range of numbers if we are using 12 bits?

One strange thing about sign-magnitude form is that there are two values for 0. Using 4 bits, 0000 and 1000 both exist, although the second is -0. In real life there is no -0, simply 0. Thus, one bit pattern is either meaningless or means the same as another bit pattern. If a computer ever produced 1000, or -0, as the result of a calculation such as subtraction, it should convert it to 0000, both for consistency and to ensure that it doesn't inadvertently conclude that 0 does not equal -0.
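The ideas above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the text: the function names (`sign_magnitude_range`, `encode_sign_magnitude`) are invented here. It computes the range for any bit width, encodes a value as a sign-magnitude bit string, and shows the two zeros.

```python
def sign_magnitude_range(bits):
    """Range of an n-bit sign-magnitude number: one sign bit,
    bits - 1 magnitude bits, so the largest magnitude is 2**(bits-1) - 1."""
    max_mag = 2 ** (bits - 1) - 1
    return -max_mag, max_mag

def encode_sign_magnitude(value, bits):
    """Encode an integer as a sign-magnitude bit string of the given width."""
    lo, hi = sign_magnitude_range(bits)
    if not lo <= value <= hi:
        raise OverflowError(f"{value} does not fit in {bits}-bit sign-magnitude")
    sign = '1' if value < 0 else '0'
    return sign + format(abs(value), f'0{bits - 1}b')

print(sign_magnitude_range(4))    # (-7, 7)
print(sign_magnitude_range(8))    # (-127, 127)
print(sign_magnitude_range(12))   # answers the 12-bit question: (-2047, 2047)

# The two zeros: 0000 and 1000 both decode to zero magnitude.
print(encode_sign_magnitude(0, 4))   # '0000' -- the "+0" pattern
# '1000' (the -0 pattern) is never produced by this encoder, matching the
# text's advice that results should be normalized to the 0000 form.
```

Note that the encoder always emits 0000 for zero; the 1000 pattern remains a valid but redundant encoding that decoding hardware would have to treat as equal to 0.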