Relocation mechanisms such as base/offset addressing slow down the computer. Since every address generated by a user instruction must be added to the contents of the base address register, the delay of the adder makes every memory access take longer. There are ways to speed up adders by adding extra Boolean logic, called a carry lookahead, or CLA, but there is always some drawback to these new features. One of the common themes in this course is that flexibility costs.
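To make the mechanism concrete, here is a minimal sketch of run-time relocation with a base/limit register pair. All of the names and numbers are illustrative, not taken from any real machine, and real hardware performs this translation in the memory path of every instruction rather than in software.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical base/limit registers for one loaded program. */
static unsigned base  = 0x4000;   /* where the program was loaded  */
static unsigned limit = 0x1000;   /* size of the program's region  */

/* Translate a program-generated (logical) address to a real one.
 * The addition here is the adder delay discussed above: it sits on
 * the path of every memory access. */
unsigned translate(unsigned logical)
{
    if (logical >= limit) {              /* protection check */
        fprintf(stderr, "addressing error: %#x\n", logical);
        exit(1);
    }
    return base + logical;               /* the relocation add */
}

int main(void)
{
    printf("logical 0x0123 -> real %#x\n", translate(0x0123));
    return 0;
}
```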
Relocation introduces overhead into the computer. Overhead can be defined as the increment in resources (time, in this case) that a more complex system requires to do essentially the same task as a simpler system. For example, in the CSC-1, all addresses that the program generates are already real -- nothing has to be added to them. Of course, the CSC-1 cannot run more than one program at a time. Running the same user program on the CSC-1 and on a machine with run-time relocation shows the CSC-1 to be faster. Suppose that the program completes in 8 seconds on the CSC-1, and the same program completes in 10 seconds on the other machine. Then we would say that there is a 25% slowdown of the program on the new machine, calculated by taking (10-8)/8 = 2/8 = .25 = 25%.
We could also say that there is a 25% overhead on the new machine due to the memory relocation mechanism. To calculate overhead, run the same program on both machines and find the difference in running times. This difference, which is the extra time required by the new hardware, is the overhead, or time "wasted" by the new mechanism. In order to make comparisons, we standardize these differences by dividing them by the baseline (faster) running time to come up with a percentage.
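The calculation is simple enough to state as a few lines of code. The sketch below is only illustrative; the function name is made up, and the timings are the 8- and 10-second figures from the example above.

```c
#include <stdio.h>

/* Overhead as a percentage: the extra time on the more complex
 * system, normalized by the simpler (baseline) system's time. */
double overhead_percent(double baseline_secs, double complex_secs)
{
    return (complex_secs - baseline_secs) / baseline_secs * 100.0;
}

int main(void)
{
    /* 8 s on the CSC-1, 10 s on the machine with run-time relocation */
    printf("overhead = %.0f%%\n", overhead_percent(8.0, 10.0));
    return 0;
}
```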
The term waste is misleading as used above. The comparison merely shows that running the same program on two machines gives different timings, and, all other things being equal, we would prefer the faster machine. Of course, things really aren't equal, because multiprogramming can't be done on the older, faster machine unless all the addresses are rewritten, which introduces delays and problems of its own. So we use the term overhead instead of waste, and mean by it the cost in performance that we have to pay in order to get the new capabilities.
The term overhead is also applied to operating systems in general. Every microsecond spent executing an instruction in the OS code is one that could have been spent executing user code, or so one might think. However, it is impossible to imagine living without the services of the operating system anymore, so we have to be willing to let it run some of the time. Moreover, the OS lets several programs use the overall system, including the CPU, main memory, I/O controllers, and I/O devices, much more efficiently than just one program could. Again, we are willing to live with a certain overhead because the alternative is to go back to the dark ages of computing and batch processing.
The overhead of modern operating systems is shocking, often as high as 50% to 75%. That is, at the high end, three out of every four machine instructions are spent running OS code instead of user code. A goal of every software house and computer vendor is to give the user "the most bang for the buck", and reducing overhead is an obvious way to do this. But sometimes the kinds of services we demand of the OS require very complicated algorithms that inevitably introduce overhead.
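One rough way to see this split on a real system is to compare the CPU time a process spends in its own code with the CPU time the kernel spends on its behalf. The sketch below uses the POSIX getrusage() call; note that it only counts kernel work charged to this one process, so it understates the total OS overhead of the whole machine.

```c
#include <stdio.h>
#include <sys/resource.h>

static double tv_secs(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    /* Mix user-mode computation with system-call traffic. */
    FILE *f = fopen("/dev/null", "w");
    if (f == NULL)
        return 1;
    double sum = 0.0;
    for (int i = 1; i <= 2000000; i++) {
        sum += 1.0 / i;                 /* user-mode arithmetic */
        if (i % 10 == 0)
            fprintf(f, "%f\n", sum);    /* buffered writes -> syscalls */
    }
    fclose(f);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    double user = tv_secs(ru.ru_utime); /* CPU time in user code   */
    double sys  = tv_secs(ru.ru_stime); /* CPU time in kernel code */
    printf("user %.3f s, system %.3f s, OS share %.0f%%\n",
           user, sys, 100.0 * sys / (user + sys));
    return 0;
}
```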