There is nothing magical about how ASCII or EBCDIC were developed. Somebody just assigned small integers to the printing symbols, such as "A" or "%". A logical choice would have been to assign 1 to A, 2 to B, and so forth, then 27 onward to the characters that are not in the alphabet but are nonetheless important, such as ?, * and @. Before the 8-bit byte became standard, there was no consensus on how large codewords should be. Using only the capital letters of the English alphabet and the digits, we get 26+10=36 codewords. Since 36 is not a power of 2, and 5 bits give only 2^5 = 32 patterns, 6 bits would be needed anyway, so the remaining unused 6-bit patterns should be made useful. Various punctuation marks were assigned, such as ?, @, ", ', etc. 2^6 is 64, which means that there would be 28 additional symbols in the coded alphabet (64-36=28). This is still not enough to include all the punctuation symbols commonly found on modern keyboards, so some things were left out, with hilarious consequences. For example, IBM 026 keypunch machines did not have < or > signs, so FORTRAN, developed in 1957, used .LT. and .GT., and still does to this day.
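
The bit-counting arithmetic above can be checked in a few lines of Python; this is only a sketch, and the function name codeword_stats is made up for illustration:

    import math

    def codeword_stats(num_symbols):
        # Smallest codeword width, in bits, that can represent num_symbols
        # distinct symbols, plus the number of bit patterns left unused.
        bits = math.ceil(math.log2(num_symbols))
        spare = 2 ** bits - num_symbols
        return bits, spare

    # 26 capital letters + 10 digits = 36 symbols
    print(codeword_stats(26 + 10))   # (6, 28): 6-bit codewords, 64-36=28 spare

Those 28 spare patterns are exactly the slots the early designers filled with punctuation marks.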