Again, the principle of locality saves the day. Since most memory references fall within a small number of pages, say 10, the machine can maintain a tiny cache of memory that is almost as fast as the general-purpose registers, and store in it the part of the page table that is being used repeatedly. If 90% of all address-translation requests fall within the pages whose translations are pre-computed and stored in this cache, the machine will be only slightly slower than a computer without virtual memory. This tiny cache is called a TLB, or Translation Lookaside Buffer. A TLB works by keeping copies of the page-table entries that were most recently referenced. Whenever the MMU makes an address translation, it first looks in the TLB. If the page number is there, meaning that the page has been translated recently, the MMU quickly pulls the frame number out of the TLB and inserts it into the upper part of the MAR. If the page number is not there, however, the MMU must go out to the page table kept in main memory and find the entry. The MMU then copies that entry into the TLB, hoping that addresses from that same page will need to be translated again soon.
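The lookup sequence described above can be sketched in software. This is only a minimal illustration, not how real hardware is built: the page size, TLB capacity, page-table contents, and the use of least-recently-used replacement are all assumptions chosen to keep the example small.

```python
from collections import OrderedDict

PAGE_BITS = 12          # assumed page size of 4 KiB
TLB_CAPACITY = 8        # assumed tiny TLB with 8 entries

# Hypothetical page table kept in "main memory": page number -> frame number
page_table = {0: 5, 1: 9, 2: 3, 3: 7, 4: 1}

# The TLB itself; most recently used entry is kept at the end
tlb = OrderedDict()

def translate(virtual_addr):
    """Translate a virtual address to a physical one, checking the TLB first."""
    page = virtual_addr >> PAGE_BITS
    offset = virtual_addr & ((1 << PAGE_BITS) - 1)
    if page in tlb:
        # TLB hit: this page was translated recently, so the frame
        # number comes out of the fast cache
        tlb.move_to_end(page)
        frame = tlb[page]
    else:
        # TLB miss: walk the page table in main memory (slow path)
        frame = page_table[page]
        if len(tlb) >= TLB_CAPACITY:
            tlb.popitem(last=False)   # evict the least recently used entry
        tlb[page] = frame             # cache it, hoping the page is reused soon
    # Frame number forms the upper part of the physical address (the MAR)
    return (frame << PAGE_BITS) | offset
```

Calling `translate(0x1ABC)` extracts page 1 and offset 0xABC, misses the TLB, fetches frame 9 from the page table, and yields the physical address 0x9ABC; a second reference to the same page then hits in the TLB and skips the page-table walk.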