There are two limitations on floating point numbers represented in this or any similar way, both due to the finite space within a computer's memory and circuits: a limit on magnitude and a limit on precision.
Looking at the number line again, magnitude limitations place a boundary on how far to the left or right numbers can go, while the precision determines how far apart the representable numbers are from each other. Fig. 21.3.1 shows the regions of the real number line that can be represented based on the exponent.
Fig. 21.3.2 shows that the number of bits in the mantissa determines how close two floating point numbers can be to one another. Although there are infinitely many real numbers between any two representable values, not all of them can be represented: doing so would require an unlimited number of places after the decimal point, which is fine in theory, but in any physical computer system the number of places must be limited by finite space.
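A minimal Java sketch can make this gap visible (the class name is purely illustrative). Math.ulp reports the distance from a value to the next representable double above it, and the familiar decimal 0.1 turns out to have no exact binary representation:

```java
public class PrecisionGap {
    public static void main(String[] args) {
        // Math.ulp(x) gives the distance from x to the next
        // representable double above it -- the gap pictured in Fig. 21.3.2.
        System.out.println("gap near 1.0:        " + Math.ulp(1.0));
        System.out.println("gap near 1000000.0:  " + Math.ulp(1000000.0));

        // 0.1 has no exact binary representation, so the stored value
        // is only the nearest representable neighbour of 0.1.
        System.out.println("0.1 + 0.2 == 0.3?    " + (0.1 + 0.2 == 0.3));
        System.out.println("0.1 + 0.2 actually = " + (0.1 + 0.2));
    }
}
```

Note that the gap grows with the magnitude of the number: representable doubles are packed more densely near zero than they are far from it.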
Representation of real numbers as floating point values is a matter of trade-offs and economics. When we need extremely precise numbers, we buy a different computer or write slower software. Most computers offer at least two forms of floating point numbers: a short one (single precision) and a long one (double precision). Some computers even let you choose how large an exponent to store. Many real-world and engineering applications do not require enormously precise values.
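To make the short/long trade-off concrete, the following sketch (again with an illustrative class name) stores the same constant in Java's single-precision float and double-precision double types. The single-precision copy keeps only about 6-7 significant decimal digits:

```java
public class PrecisionCompare {
    public static void main(String[] args) {
        // The same constant stored in single and double precision.
        float  singlePi = 3.14159265358979323846f;
        double doublePi = 3.14159265358979323846;

        // float keeps roughly 6-7 significant decimal digits,
        // double roughly 15-16; the remaining digits are rounded away.
        System.out.println("float : " + singlePi);
        System.out.println("double: " + doublePi);
    }
}
```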
The IEEE 754 standard, followed by many hardware vendors and programming languages such as Java, specifies how floating point numbers are represented in a computer. Here are its limits, given in decimal:
type               total bits   decimal digits   exponent range
-----------------------------------------------------------------
single precision       32             6          10^-38  to 10^+38
double precision       64            15          10^-308 to 10^+308
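These limits can be observed directly. The sketch below (class name is illustrative) prints the largest finite values Java provides for each type and shows that exceeding the exponent range overflows to infinity rather than raising an error:

```java
public class MagnitudeLimits {
    public static void main(String[] args) {
        // Largest finite magnitudes for each precision.
        System.out.println("largest float : " + Float.MAX_VALUE);   // about 3.4e38
        System.out.println("largest double: " + Double.MAX_VALUE);  // about 1.8e308

        // Going past the exponent range overflows to infinity.
        System.out.println(Float.MAX_VALUE * 2.0f);   // Infinity
        System.out.println(Double.MAX_VALUE * 2.0);   // Infinity
    }
}
```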