In a surprising twist, multiplication is much easier than addition when it comes to floating point numbers, whereas the opposite is true for integer multiplication and addition. To multiply two floating point numbers, the hardware simply adds the exponents and multiplies the mantissas; to add them, it must first shift one mantissa to align the exponents, and then possibly normalize the result afterward. Of course, if you look carefully at Figures 24.8.1 and 24.8.2, you will see adders and multipliers in these circuits. These are integer adders and integer multipliers, since all arithmetic on the exponents and on the mantissas is integer based. Another feature of the circuits in Figures 24.8.1 and 24.8.2 is that intermediate results of the computations are stored in registers, permitting pipelining. Many supercomputers need to do a billion or more floating point operations per second in order to accomplish their scientific simulations, and pipelining is the only way that this can be done. The term FLOPS is used to measure the speed of these monster computers, such as the Cray 3. FLOPS stands for floating point operations per second, so one gigaflops would be one billion floating point operations per second, not a bad rate! These crude measures usually make no distinction between floating point addition and multiplication.
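The "multiply the mantissas, add the exponents" recipe can be sketched in a few lines of Python. This is only an illustration of the idea, not the hardware datapath: `math.frexp` and `math.ldexp` are used to split a float into its mantissa and exponent and to reassemble it, standing in for the integer circuitry shown in the figures, and the mantissa product here is computed with an ordinary float multiply rather than a fixed-point integer multiplier.

```python
import math

def float_multiply(a: float, b: float) -> float:
    """Multiply two floats by working on their mantissa/exponent parts.

    An illustrative sketch only: frexp/ldexp mimic the unpacking and
    repacking that a hardware floating point multiplier performs.
    """
    # Split each operand so that a = ma * 2**ea with 0.5 <= |ma| < 1.
    ma, ea = math.frexp(a)
    mb, eb = math.frexp(b)
    # The core of the multiplier: multiply mantissas, add exponents.
    m = ma * mb
    e = ea + eb
    # Reassemble the result; ldexp handles normalization for us.
    return math.ldexp(m, e)

print(float_multiply(3.0, 4.0))   # 12.0
```

Notice that no exponent comparison or mantissa shifting is needed, which is exactly why the multiplier is the simpler circuit; a floating point adder would have to align the two mantissas before its integer adder could do any useful work.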