Not all devices have the same speeds or characteristics. Some, like keyboards, are extremely slow compared to CPU speeds. A human typing at the rate of 100 words per minute presses keys at a rate of about 8 per second. Each keystroke generates a byte that is sent to the CPU via an interrupt. In the space of 1/8 of a second, or 0.125 seconds, a 1 MIPS processor can execute 125,000 instructions. At the other extreme, a new Seagate hard disk released in 1997 can transfer 10 megabytes per second, so it delivers bytes ten times faster than a 1 MIPS processor can execute instructions.
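To make the comparison concrete, the following small C program (a sketch, using only the rates quoted above) works out the same figures: how many instructions a 1 MIPS processor executes between keystrokes, and how many disk bytes arrive in the time it takes to execute one instruction.

    #include <stdio.h>

    /* Back-of-the-envelope comparison using the rates quoted in the text:
       8 keystrokes per second, a 1 MIPS processor, and a 10 MB/s disk. */
    int main(void)
    {
        double cpu_ips       = 1000000.0;   /* 1 MIPS: instructions per second */
        double keystrokes_ps = 8.0;         /* ~100 words per minute           */
        double disk_bytes_ps = 10000000.0;  /* 10 megabytes per second         */

        /* instructions the CPU can execute between two keystrokes */
        printf("Instructions per keystroke: %.0f\n", cpu_ips / keystrokes_ps);

        /* disk bytes arriving in the time of a single instruction */
        printf("Disk bytes per instruction: %.0f\n", disk_bytes_ps / cpu_ips);
        return 0;
    }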
High priority devices should get more and faster attention from the CPU. If a high priority and a low priority device drive the interrupt wire high at the same time, the CPU should handle the high priority device first, since it is likely to lose its data sooner than the slower device. Likewise, when handling a low priority device, such as the keyboard, the CPU should allow higher priority devices to interrupt it, but not lower ones. For example, suppose the CPU is in the middle of handling an interrupt from the keyboard. If an interrupt request from the hard disk comes in, the CPU should jump to the hard disk interrupt routine, take care of it, and return to the keyboard only when done with the hard disk. However, if the CPU is handling an interrupt from the hard disk, it should ignore all interrupts from lower priority devices, such as the keyboard and the printer.
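A minimal sketch of this nesting rule in C, assuming a single made-up priority mask rather than any particular CPU's interrupt hardware: a request is accepted only if its priority is strictly higher than that of the device currently being serviced.

    #include <stdio.h>

    /* Priorities for the devices mentioned in the text (values are made up). */
    #define PRIO_PRINTER   1
    #define PRIO_KEYBOARD  2
    #define PRIO_DISK      3

    static int current_mask = 0;   /* priority of device being serviced; 0 = none */

    /* Accept a request only if it outranks whatever is being serviced now. */
    static void request_interrupt(const char *name, int prio)
    {
        if (prio > current_mask)
            printf("%-8s accepted (priority %d > mask %d)\n", name, prio, current_mask);
        else
            printf("%-8s deferred (priority %d <= mask %d)\n", name, prio, current_mask);
    }

    int main(void)
    {
        current_mask = PRIO_KEYBOARD;                   /* servicing the keyboard */
        request_interrupt("printer",  PRIO_PRINTER);    /* deferred               */
        request_interrupt("disk",     PRIO_DISK);       /* accepted: nests on top */

        current_mask = PRIO_DISK;                       /* now servicing the disk */
        request_interrupt("keyboard", PRIO_KEYBOARD);   /* deferred               */
        return 0;
    }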
Fig. 19.9.1 shows daisy-chaining as an alternative to explicit polling. In this system, the CPU is connected to three peripherals in a daisy chain.
All peripherals are hooked to the address bus and to an interrupt request (INT REQ) wire. Whenever any of the peripherals wishes to interrupt the CPU, it drives that wire high (injects current into it to change its voltage level). Sometimes more than one will drive the wire high at the same time. The CPU then asserts INT ACK high, which goes into the first device, the one closest to the CPU. If this device is one that set INT REQ high, it absorbs the INT ACK signal and sends out 0 on the INT ACK wire to its downwind neighbor. If it did not request an interrupt, it passes the 1 along on the INT ACK wire to its downwind neighbor. No device is allowed to proceed with its interrupt unless its incoming INT ACK wire is 1. The first interrupt requester in the line, the one closest to the CPU, gets to go while the others have to wait. Priority is thus established by position on the INT ACK chain.
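The grant logic can be sketched in a few lines of C. The simulation below (a sketch, not hardware) walks the INT ACK level down a chain of three devices: a device that requested an interrupt and sees a 1 on its incoming INT ACK wins and passes 0 downwind, while every other device passes along whatever it received.

    #include <stdio.h>

    #define NUM_DEVICES 3

    int main(void)
    {
        /* 1 = device has driven INT REQ high; here devices 1 and 2 both request */
        int int_req[NUM_DEVICES] = { 0, 1, 1 };

        int ack_in = 1;   /* the CPU asserts INT ACK toward device 0 */
        for (int i = 0; i < NUM_DEVICES; i++) {
            int ack_out;
            if (ack_in && int_req[i]) {
                printf("device %d wins: absorbs INT ACK and proceeds\n", i);
                ack_out = 0;              /* everyone downwind must wait     */
            } else {
                if (int_req[i])
                    printf("device %d requested but must wait\n", i);
                ack_out = ack_in;         /* pass the incoming level along   */
            }
            ack_in = ack_out;             /* becomes the next device's input */
        }
        return 0;
    }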
When a peripheral gets permission to continue with the interrupt, it puts its id number on the address bus; the interrupt handling routine uses this number to identify which device requested the interrupt. From that point on, the CPU communicates with the peripheral through memory-mapped I/O.
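As a sketch of what the handler then does, the fragment below uses made-up addresses: INT_ID_LATCH stands for wherever the hardware latches the id placed on the address bus, and each device's data register is a fixed memory-mapped location read with an ordinary load through a volatile pointer.

    #include <stdint.h>

    /* All addresses and id values here are hypothetical. */
    #define INT_ID_LATCH   ((volatile uint8_t *)0xFF00)  /* latched requester id   */
    #define KEYBOARD_DATA  ((volatile uint8_t *)0xFF10)  /* keyboard data register */
    #define DISK_DATA      ((volatile uint8_t *)0xFF20)  /* disk data register     */

    #define KEYBOARD_ID  1
    #define DISK_ID      2

    void interrupt_handler(void)
    {
        uint8_t id = *INT_ID_LATCH;        /* which device requested the interrupt? */

        if (id == KEYBOARD_ID) {
            uint8_t key = *KEYBOARD_DATA;  /* memory-mapped I/O: an ordinary load   */
            (void)key;                     /* buffer the keystroke, etc.            */
        } else if (id == DISK_ID) {
            uint8_t status = *DISK_DATA;   /* read the disk controller's register   */
            (void)status;
        }
    }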
Another way of identifying which device requested the interrupt is to have the device put a memory address on the address bus; that address is loaded into the PC instead of always using the same fixed address. This requires that there be multiple interrupt handlers in memory, and the handler that executes implicitly identifies the device that requested the interrupt. However, it would be wasteful of memory to copy the interrupt handler for each instance of the same device, so there is still a need for an individual id number.
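A sketch of the idea in C, with made-up handlers: the device-supplied value acts as an index into a table of handler addresses, and jumping through the table is the software equivalent of loading the PC from the bus.

    #include <stdio.h>

    typedef void (*handler_t)(void);

    static void keyboard_handler(void) { puts("keyboard handler"); }
    static void disk_handler(void)     { puts("disk handler"); }
    static void printer_handler(void)  { puts("printer handler"); }

    /* The interrupt vector table: one handler address per device. */
    static handler_t vector_table[] = {
        keyboard_handler,   /* vector 0 */
        disk_handler,       /* vector 1 */
        printer_handler,    /* vector 2 */
    };

    /* What the hardware effectively does: load the PC from the chosen vector. */
    static void dispatch(int vector)
    {
        vector_table[vector]();
    }

    int main(void)
    {
        dispatch(1);   /* a disk interrupt jumps straight to disk_handler */
        return 0;
    }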
Daisy chaining makes it easy to assign priorities: just string the cables in priority order, with the most important device closest to the CPU. But it has the weakness that if one of the devices fails, all those downwind of it are cut off from the CPU. It is also difficult to change priorities, since that requires actual recabling instead of software changes. While it might seem obvious which devices should get highest priority, in reality it is not that easy to determine. Of course, this is true of any system with prioritized interrupts.
An even more serious weakness of daisy chaining is that it is difficult to mask out certain devices and ignore their interrupt requests. The next method solves these problems at the expense of more complex circuitry.