Have you ever wondered why computers have to format disks or diskettes, and why it takes so long? The reason is that the disk drive must write the initial sectors and tracks on the platter(s) so that when it comes time to write real data, it will know where to put it. Older disks were hard-sectored, meaning there were index holes in the floppies or some kind of reflective mark on a hard disk so that the drive knew where each sector started. Nowadays, soft-sectoring is used instead: the drive doesn't rely on any physical indication of the start of a sector, but instead writes a special bit pattern on the surface to mark where each sector begins. Similarly, dead space is really just another special bit pattern.
The ability to soft-sector a disk and to use a special bit pattern to indicate "there's no data here" implies that disk drives do not write bare 1's and 0's on the platter. That is, if a disk drive were to write the ASCII character 'A', which is 65, or 01000001, it would not actually write 01000001 onto the surface. In fact, it wouldn't write "bits" at all, but rather align the magnetic fields of the iron-paint, so that it might look like NSNNNNNS, where N means "North" and S means "South."
In order to use non-data bit patterns, disk drives encode the data bits in other ways. These encodings seem to waste space, but they are preferable to hard sectoring or other methods. One such encoding might use 00001111 to signal dead space and 00010001 to signal that the next group of bits begins a sector. The actual data bits are then encoded using 10 for 0 and 01 for 1. Thus, 'A' would be encoded as 1001101010101001. This halves the raw bit capacity of the disk drive, but for many reasons the sacrifice is worthwhile.
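The encoding just described can be sketched in a few lines of Python. (The function name and the byte-to-string interface here are illustrative choices, not part of any real drive's firmware.)

```python
def manchester_encode(byte):
    """Encode one byte with the scheme above: 10 for a 0 bit, 01 for a 1 bit."""
    bits = format(byte, "08b")                # 'A' (65) -> '01000001'
    return "".join("01" if b == "1" else "10" for b in bits)

print(manchester_encode(ord("A")))            # prints 1001101010101001
```

Note that each data bit becomes two encoded symbols, which is exactly why the raw capacity is halved.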
One problem in electronics is drift, where the circuits lose track of the bit boundaries in a long stream of pure 1s or pure 0s. The disk is spinning quite rapidly, but minor variations in speed make it impossible for the electronics to stay perfectly in sync for more than about 8 bits. Thus, a string of ASCII NULLs (00000000) or all 1s would confuse the disk drive and its attached controller.
Fig. 17.3.1 illustrates how two computers communicating over a modem, or two devices communicating over a bus, can misinterpret the bit stream if their clocks are not perfectly synchronized. Since the receiver's idea of a bit slice is "wider" than the sender's, the receiver will sample parts of the incoming signal at later and later times, eventually skipping an entire bit.
Fig. 17.3.1: Clock drift causes the receiver to skip an entire bit
Bit encodings that force frequent transitions between 1 and 0 solve this problem: whenever the drive sees a transition from 0 to 1 or from 1 to 0, it knows that a bit boundary lies there. ASCII NULL would be encoded as 1010101010101010, which gives regular and frequent transitions. In fact, the bit encoding we have been using, which is called Manchester encoding, never puts more than two 1s or two 0s in a row.
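This no-long-runs property can be checked mechanically. The sketch below (reusing the same illustrative encoder as before) confirms that no encoded byte contains three identical symbols in a row; since each 10 or 01 pair contains both symbols, runs across pair and byte boundaries are also at most two symbols long.

```python
def manchester_encode(byte):
    """Illustrative encoder: 10 for a 0 bit, 01 for a 1 bit."""
    bits = format(byte, "08b")
    return "".join("01" if b == "1" else "10" for b in bits)

# ASCII NULL becomes a perfectly alternating pattern.
assert manchester_encode(0) == "1010101010101010"

# No encoded byte contains a run of three identical symbols; because every
# pair holds one 0 and one 1, runs spanning pair boundaries stop at two.
for b in range(256):
    enc = manchester_encode(b)
    assert "000" not in enc and "111" not in enc
print("no run of three identical symbols in any encoded byte")
```

This is also why the marker patterns mentioned earlier, such as 00001111, are safe to use: encoded data can never produce them.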
Fig. 17.3.2 illustrates a bit stream encoded using Manchester encoding. The receiver still has a clock, and it samples the signal level regularly. But Manchester encoding never allows three half-bit time intervals to have the same signal level, so if the receiver senses this condition, it knows its clock is drifting. It therefore takes corrective action and resets its timing interval.
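Decoding reverses the pairing, and a misaligned receiver shows up naturally: it sees an invalid pair (00 or 11 where a data pair should be), which is exactly the condition that triggers resynchronization. The sketch below is a simplification that merely reports drift rather than modeling a real controller's corrective action.

```python
def manchester_decode(encoded):
    """Decode a 10/01-encoded bit string back to data bits."""
    out = []
    for i in range(0, len(encoded), 2):
        pair = encoded[i:i + 2]
        if pair == "10":
            out.append("0")
        elif pair == "01":
            out.append("1")
        else:
            # '00' or '11': the sampling clock has drifted off the pair
            # boundary; a real controller would resynchronize here.
            raise ValueError("clock drift detected at symbol %d" % i)
    return "".join(out)

print(manchester_decode("1001101010101001"))   # prints 01000001, which is 'A'
```

Feeding the decoder a stream shifted by one half-bit interval raises the error immediately, since the misaligned pairs are no longer valid 10/01 symbols.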