Section 21.8
Hardware to compute floating point arithmetic

Fig. 21.8.1 shows circuitry to perform floating point addition. The two operands are held in registers at the left end of the picture and data flows towards the right. First, the exponents are compared using a comparator circuit (which is just a subtractor). If the exponents differ, the operand with the smaller exponent must be adjusted by shifting its mantissa to the right and adding the shift count to its exponent. We don't know in advance which operand this will be, so there are two shifters and two adders. Another possibility would have been to swap the two operands so that the smaller was always in, say, the top register, and then only that one would need to be adjusted.
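To make the data flow concrete, here is a minimal software sketch of the comparison-and-alignment step, written in C. The Fp structure, the 24-bit fractional mantissa (representing a value between 1/2 and 1), and the function name align are assumptions made for the example; they are not part of the figure.

#include <stdint.h>

typedef struct {
    int      sign;   /* 0 = positive, 1 = negative                    */
    int      exp;    /* unbiased exponent                             */
    uint32_t mant;   /* 24-bit fractional mantissa, value in [1/2, 1) */
} Fp;

/* Align the operand with the smaller exponent: shift its mantissa to the
   right and add the shift count to its exponent until the exponents match,
   mirroring the comparator + shifter + adder path of Fig. 21.8.1.
   (Very large exponent differences, where the small operand vanishes
   entirely, are not handled in this sketch.) */
static void align(Fp *a, Fp *b)
{
    int diff = a->exp - b->exp;     /* the comparator is just a subtractor   */
    if (diff > 0) {                 /* b has the smaller exponent: adjust b  */
        b->mant >>= diff;
        b->exp   += diff;
    } else {                        /* a has the smaller (or equal) exponent */
        a->mant >>= -diff;
        a->exp   -= diff;
    }
}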


Fig. 21.8.1: Hardware to do floating point addition

Next the mantissas are added and the new exponent is merely copied. The sum of the mantissas might reach 1 or more, in which case it must be normalized by shifting the mantissa to the right and adding 1 to the exponent. This could lead to exponent overflow, which we do not show.
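Continuing the sketch in C, the add-and-normalize stages might look like this. Again the Fp fields, the fraction-style mantissa, and the name add_aligned are assumptions of the example; the operands are taken to be already aligned and of the same sign (signs are dealt with next).

#include <stdint.h>

typedef struct { int sign; int exp; uint32_t mant; } Fp;  /* 24-bit mantissa, value in [1/2, 1) */

/* Add two already-aligned, same-sign operands: the mantissas go through an
   integer adder, the common exponent is simply copied, and the result is
   renormalized if the sum reached 1 or more. */
static Fp add_aligned(Fp a, Fp b)
{
    Fp r;
    r.sign = a.sign;               /* signs assumed equal here             */
    r.exp  = a.exp;                /* exponent is merely copied            */
    r.mant = a.mant + b.mant;      /* integer add of the mantissas         */

    /* Each mantissa is below 1, so the sum is below 2; if it is 1 or more
       (bit 24 set), shift right one place and add 1 to the exponent.
       Exponent overflow is ignored, as in the figure. */
    if (r.mant >= (1u << 24)) {
        r.mant >>= 1;
        r.exp  += 1;
    }
    return r;
}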

Also, the sign circuitry is not shown. It is non-trivial: if the operand signs differ, a subtraction must be performed instead of an addition, and the sign of the result then depends on which operand had the larger magnitude. If the operand signs are the same, the result simply keeps that sign. Implementing subtraction itself, on the other hand, is trivial, since the addition hardware already does the work: to compute A - B, just change the sign of B and perform an addition.
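A sketch of that sign logic, under the same assumed Fp format (the function name sum_sign is invented for the example):

#include <stdint.h>

typedef struct { int sign; int exp; uint32_t mant; } Fp;

/* Sign of A + B: if the operand signs match, it passes through unchanged;
   if they differ, the adder really performs a subtraction and the operand
   with the larger magnitude supplies the sign (compare exponents first,
   then mantissas). */
static int sum_sign(Fp a, Fp b)
{
    if (a.sign == b.sign)
        return a.sign;
    if (a.exp != b.exp)
        return (a.exp > b.exp) ? a.sign : b.sign;
    return (a.mant >= b.mant) ? a.sign : b.sign;
}

/* Subtraction needs no new hardware at all: to compute A - B, flip B's
   sign bit (b.sign ^= 1) and send both operands through the adder. */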

Multiplication is done by the hardware in Fig. 21.8.2. The sign computation is shown explicitly since it is so simple: exclusive or produces a 1 exactly when its inputs differ, which nicely matches the sign rule of algebra (the product is negative only when the operand signs differ).
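A corresponding C sketch of the multiplier data path, under the same assumed 24-bit fraction format (fp_mul is an invented name; the final normalization step is not drawn in the figure):

#include <stdint.h>

typedef struct { int sign; int exp; uint32_t mant; } Fp;  /* 24-bit mantissa, value in [1/2, 1) */

/* One pass through the multiplier of Fig. 21.8.2: XOR the signs, add the
   exponents with an integer adder, and multiply the mantissas with an
   integer multiplier.  The 24 x 24 bit product needs 48 bits, so it is
   held in a 64-bit temporary and shifted back down to 24 bits. */
static Fp fp_mul(Fp a, Fp b)
{
    Fp r;
    r.sign = a.sign ^ b.sign;                   /* the sign rule of algebra  */
    r.exp  = a.exp + b.exp;                     /* integer exponent add      */

    uint64_t prod = (uint64_t)a.mant * b.mant;  /* integer mantissa multiply */

    /* The product of two mantissas in [1/2, 1) lies in [1/4, 1); if it fell
       below 1/2, double the mantissa (one less right shift of the wide
       product) and subtract 1 from the exponent. */
    if (prod >= ((uint64_t)1 << 47)) {
        r.mant = (uint32_t)(prod >> 24);        /* already in [1/2, 1)       */
    } else {
        r.mant = (uint32_t)(prod >> 23);        /* renormalize               */
        r.exp -= 1;
    }
    return r;
}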


Fig. 21.8.2: Hardware to do floating point multiplication

In a surprising twist, multiplication is much easier than addition when it comes to floating point numbers, whereas the opposite is true for integer multiplication and addition. Of course, if you look carefully at Figures 21.8.1 and 21.8.2, you will see adders and multipliers inside these circuits. These are integer adders and integer multipliers, since all of the arithmetic on the exponents and on the mantissas is integer arithmetic.

Another feature of the circuits in Figures 21.8.1 and 21.8.2 is that intermediate results of the computations are stored in registers, permitting pipelining. Many supercomputers need to perform a billion or more floating point operations per second to carry out their scientific simulations, and pipelining is the only way this can be done. The term FLOPS is used to measure the speed of these monster computers, such as the Cray 3. FLOPS stands for floating point operations per second, so one gigaflops is 1 billion floating point operations per second, not a bad rate! Usually no distinction is made in these crude measures between floating point addition and multiplication.
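As a rough illustration of why pipelining matters (the numbers here are invented for the example, not taken from any real machine): if a floating point addition takes 5 clock cycles from start to finish on a 100 MHz machine, an unpipelined adder completes only 100/5 = 20 million additions per second; with registers between the stages, a new addition can be started every cycle, and the same adder sustains the full 100 megaflops.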