Section 18.1: Early History of I/O (Frame 7)

Input and output in real systems are horrendously complicated. First of all, devices need to communicate with the CPU in both directions: control signals flowing from the CPU to the device are matched by status signals flowing from the device back to the CPU. Data flows in one direction or the other, and sometimes in both, as in the case of devices like disk drives.

And then there's timing. Real I/O devices take varying amounts of time to work, and the speed of the CPU must be matched to the speed of the devices, or one side will miss signals by latching the values on the wires into its registers at the wrong time. Early peripherals were built specifically to match the CPUs that would be using them, but modern peripherals must work with a variety of CPUs of diverse speeds. For example, one might buy a disk drive for use with a 486 SX computer. Later, when upgrading to a 100 MHz Pentium, that old disk drive might still be used. But simply plugging it into the motherboard of the new computer is disastrous if the speed of the controller cannot match the speed of the bus.

So peripheral manufacturers put lots of "smarts" into the circuitry so that the speed can be set either manually, by small DIP switches, or in software, usually through some sort of initialization file that writes speed values into on-board registers at boot time. The modern trend, called Plug and Play, is to have the peripheral, the CPU, and the bus do all of this automatically, by sending out messages announcing their presence and their bus address, along with their operating characteristics. This trend toward more general compatibility is called interoperability, since the devices can inter-operate among themselves.