Another problem that arises with peripherals is how the operating system talks to them. Although there are standard protocols, each peripheral accepts many commands that the CPU can issue to make it work in the desired way. Special subroutines, called device drivers, are incorporated into the operating system code to issue these commands to peripherals.
For example, in UNIX the system call read() is used to get input from virtually any type of peripheral: disk drive, keyboard, tape, network controller card, etc. When UNIX executes the read() system call, it inspects the fd (file descriptor) number given as the first parameter and determines, by looking into system tables, what type of device the descriptor refers to. This tells it which device driver to use in issuing commands. The device driver code then executes, further processes the parameters, and issues low-level commands to the peripheral to accomplish the data transfer.
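The dispatch described above can be sketched in C. Everything here is a simplified illustration, not real kernel code: the names (struct device_driver, fd_table, sys_read) and the two toy drivers are invented, though real UNIX kernels use analogous tables of per-device function pointers.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical device-driver interface: each driver supplies its
 * own read routine.  The kernel never needs to know what kind of
 * device it is talking to -- it just calls through the pointer. */
struct device_driver {
    const char *name;
    size_t (*read)(char *buf, size_t n);  /* fill buf, return count */
};

/* Two toy drivers standing in for a keyboard and a disk. */
static size_t kbd_read(char *buf, size_t n) {
    const char *s = "key";                /* pretend keystrokes */
    size_t len = strlen(s) < n ? strlen(s) : n;
    memcpy(buf, s, len);
    return len;
}

static size_t disk_read(char *buf, size_t n) {
    const char *s = "sector-data";        /* pretend sector contents */
    size_t len = strlen(s) < n ? strlen(s) : n;
    memcpy(buf, s, len);
    return len;
}

static struct device_driver drivers[] = {
    { "keyboard", kbd_read },
    { "disk",     disk_read },
};

/* System table mapping a file descriptor to the driver servicing it. */
static struct device_driver *fd_table[8] = {
    &drivers[0],   /* fd 0: keyboard */
    &drivers[1],   /* fd 1: disk     */
};

/* The read() system call: look up the fd in the table, then hand
 * the work to whichever device driver the table names. */
size_t sys_read(int fd, char *buf, size_t n) {
    if (fd < 0 || fd >= 8 || fd_table[fd] == NULL)
        return 0;                         /* bad descriptor */
    return fd_table[fd]->read(buf, n);
}
```

The point of the table is that sys_read() itself contains no device-specific logic; adding a new kind of peripheral means writing a new driver and installing its pointer, not changing the system call.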
Peripheral devices are usually composed of two pieces: the actual device which does the input or output, and accompanying control circuitry called the controller. Controllers can be quite extensive and are oftentimes computers in their own right. As an example, the old Commodore 64 computer had a 6510 chip, a close variant of the 6502, as its main processor. When a floppy disk drive was added later, the disk drive was encased in its own box and connected to the C-64 via a serial bus wire. But the controller inside the disk drive was another 6502 computer!
A controller makes it much easier to write device drivers because it allows the commands coming from the OS to be somewhat high level. For each such command, the controller issues sequences of very minute and specific instructions to the device. For example, a controller for a floppy disk drive must turn the main motor on when a read or write is requested. It must also ensure that the motor is not left running too long, because floppy disks are in constant contact with the read/write head, so they will wear out quickly if they spin all the time. (Hard disks are usually spinning all the time, so this is not an issue.) Then the stepper motor must be turned on and off in order to advance the read/write head to the correct track. Finally, the data must be gated to the bus at the right time and status information delivered back to the OS.
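The steps above can be modeled in a few lines of C. This is a toy model of a floppy controller, not real firmware: the structure, field names, and the timeout threshold are all invented for illustration. One high-level "read sector" command from the driver expands into the motor, stepper, and data-gating steps, and a separate tick routine enforces the motor timeout.

```c
#include <stdio.h>
#include <string.h>

enum { MOTOR_OFF, MOTOR_ON };

/* Hypothetical floppy controller state. */
struct floppy_ctrl {
    int motor;        /* spindle motor state                     */
    int head_track;   /* track the head currently sits over      */
    int idle_ticks;   /* how long the motor has run unrequested  */
};

/* One high-level command from the device driver:
 * "read this sector of this track". */
void ctrl_read_sector(struct floppy_ctrl *c, int track, int sector,
                      char *buf, size_t n) {
    c->motor = MOTOR_ON;                  /* 1. spin up the disk       */
    c->idle_ticks = 0;                    /*    and reset the timeout  */
    while (c->head_track < track)         /* 2. pulse the stepper      */
        c->head_track++;                  /*    motor one track a step */
    while (c->head_track > track)
        c->head_track--;
    snprintf(buf, n, "t%d/s%d", track, sector);  /* 3. gate the data  */
}

/* Periodic housekeeping: stop the motor if it has idled too long,
 * so the spinning disk does not wear against the read/write head. */
void ctrl_tick(struct floppy_ctrl *c) {
    if (c->motor == MOTOR_ON && ++c->idle_ticks > 3)
        c->motor = MOTOR_OFF;
}
```

Notice that the device driver only ever calls ctrl_read_sector(); the motor and stepper management is entirely the controller's business, which is exactly the division of labor the text describes.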
Another reason why controllers are used to "micro-manage" the actual peripheral devices is that the devices sometimes run fast enough to need lots of attention in a timely fashion. With the CPU often doing other tasks and monitoring other devices, some control signals might be missed, with disastrous consequences. Therefore, it makes sense to relegate fine control to a special processor chip and allow the OS to issue broad, abstract commands like "read sector 57 of track 91." Even that is a fairly detailed command from the user's point of view. The operating system is the final shield hiding such messy details from the user.
Thus there is a hierarchy of control mechanisms and a ladder of abstraction, going from the most abstract command in the user program to a much more specific one in the operating system to a command in the controller, which then issues the lowest level of commands by setting wires to 1 and 0 at the appropriate times.
Device driver programming is a specialty niche that does not appeal to all programmers, since it involves many tedious and minute problems. A high degree of reliability as well as efficiency and economy are required of device driver programs, so sloppy or careless programmers need not apply for those jobs. However, it could be extremely rewarding to know that millions of people will depend upon a little snippet of code which is executed billions of times each day.