Practice Exercise 12 Answers
-
A computer has 24-bit virtual addresses.
a.) What is the size of its virtual address space?
The size of the virtual address space is 2 raised to the number of bits
in the virtual address. So 2^24 = 16,777,216 bytes = 16 megabytes.
b.) How many 2K pages would there be in such an address space?
Breaking 16 megabytes up into 2K chunks gives us 8,192 pages:
    2K = 2048 = 2^11
    2^24 / 2^11 = 2^(24-11) = 2^13 = 8192
c.) How many page table entries would there need to be?
There must be one page table entry for each page, so there are 8192
entries in the page table.
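These numbers are easy to check mechanically. A minimal Python sketch
(the constant names are mine, not from the exercise):

    VA_BITS   = 24
    PAGE_SIZE = 2 * 1024            # 2K = 2048 bytes

    space = 2 ** VA_BITS            # size of the virtual address space
    pages = space // PAGE_SIZE      # one page table entry per page
    print(space, pages)             # 16777216 8192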
-
Suppose each entry in a page table takes 45 bytes. The virtual address
space is 1024 pages.
a.) How many bytes would there be in the virtual address space?
We don't know how many bytes there are in the virtual address space
unless we know how big each page is! Part c supposes they are 4K (4096)
long, so that would give us 1024 x 4096 = 4,194,304 bytes. This is 4
megabytes.
b.) How many bytes would a page table occupy?
The page table size (in bytes) is the number of entries times the
number of bytes in each entry. We know there are 1024 entries because there
are 1024 pages, so 1024 x 45 = 46,080 bytes.
c.) Suppose each page is 4K long. How many pages long would the page table
be in memory?
46,080 bytes needs 12 pages because 46,080 / 4096 = 11.25. Since we
cannot get a fractional page, we round it up to 12.
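The same arithmetic, including the round-up in part c, sketched in Python
(the names are illustrative):

    import math

    ENTRIES    = 1024       # one entry per page
    ENTRY_SIZE = 45         # bytes per page table entry
    PAGE_SIZE  = 4096       # 4K pages, from part c

    table_bytes = ENTRIES * ENTRY_SIZE                 # 46080
    table_pages = math.ceil(table_bytes / PAGE_SIZE)   # 11.25 rounds up to 12
    print(table_bytes, table_pages)                    # 46080 12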
-
Suppose there are 31 bits in a virtual address. How big would the pages
have to be to keep a process's page table under 1 million entries long?
The number of pages in a virtual memory system is always a power of 2.
Since 2^20 = 1,048,576 and 2^19 = 524,288, we must settle for 524,288
entries because the problem required we keep it under a million. 31 bits
in a virtual address means there are 2^31 bytes in the virtual memory,
and if there are 2^19 pages, then each page would be 2^31 / 2^19 =
2^(31-19) = 2^12 = 4096 bytes. Thus each page must be 4096 bytes long,
or 4K.
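A brute-force check of that choice, as a small Python sketch (my own
naming throughout):

    VA_BITS = 31
    LIMIT   = 1_000_000     # page table must stay under a million entries

    page_bits = 0
    while 2 ** (VA_BITS - page_bits) >= LIMIT:
        page_bits += 1      # grow the page until the table is small enough
    print(2 ** page_bits, 2 ** (VA_BITS - page_bits))   # 4096 524288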
-
Why is LRU a reasonable assumption for victim selection when an empty
frame is needed? When might it not be a good assumption?
LRU assumes that the page that was referenced farthest in the past is
unlikely to be needed again in the future, so it is a safe one to throw out
in order to make room for a new page. This is reasonable because it probably is
a page that contains code executed much earlier in the program, perhaps an
initialization phase, or it contains data that was needed early on but not
now. In other words, the program has moved to another section of code or
another section of data memory and is referencing pages belonging to those
regions. Thus the older stuff is not needed and can be safely thrown out.
But this may break down if the program jumps around too much, as
spaghetti code or a very poorly written program may do. Also, if the
program is randomly referencing elements of a huge array, then LRU may not be
a good assumption, and in fact there may be no good assumption to help us
select the best victim. Randomness defeats the system and may seriously
erode the performance of a virtual memory system.
-
Suppose you are working with a small computer that has 8 pages of 1K
each in its virtual address space, and only 4 frames (also of 1K each) in
its physical memory, as shown in Fig. 12.2.1. Below is a
history of memory accesses, where R=read and W=write and the address is
shown. Write down at each point in time which page contains the memory
address that is used. Also write into the frame columns which page is
currently found in that frame. If the frame has been modified, put an x
next to the page number. Use LRU for victim selection: the victim is the
page whose most recent reference is farthest in the past.
                                        Frame #
Time | Instruction |  Page  |    0    |    1     |    2     |    3    |
----+------------+--------+---------+----------+----------+---------+
1 | R 1000 | 0 | 0 | | | |
2 | R 5092 | 4 | 0 | 4 | | |
3 | W 513 | 0 | 0x | 4 | | |
4 | R 6144 | 6 | 0x | 4 | 6 | |
5 | R 8000 | 7 | 0x | 4 | 6 | 7 |
6 | W 2099 | 2 | 0x | 2x | 6 | 7 |
7 | R 57 | 0 | 0x | 2x | 6 | 7 |
8 | R 3199 | 3 | 0x | 2x | 3 | 7 |
9 | R 6100 | 5 | 0x | 2x | 3 | 5 |
10 | W 3099 | 3 | 0x | 2x | 3x | 5 |
11 | R 1000 | 0 | 0x | 2x | 3x | 5 |
12 | R 5000 | 4 | 0x | 4 | 3x | 5 |
13 | R 3000 | 2 | 0x | 4 | 3x | 2 |
Count the number of times a page had to be read from disk as it was being
put into a frame. This is a miss count because the page was
not found in memory. Subtract this from the total number of memory
accesses (which is 13 in this example) and get the hit count.
Divide the hit count by the total number of memory accesses to get the hit
ratio. (Your system will be much more efficient and faster running if the
hit ratio is close to 100%.)
There were 13 total memory accesses. 9 times we had to read a page
from disk, so 9 is the miss count. 13-9=4 is the hit count.
4/13 = .31, so the hit ratio is 31%, pretty dismal!
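The bookkeeping above is mechanical enough to automate. Here is a minimal
LRU simulator in Python that reproduces the table and the hit ratio; the
access trace and page size come from the exercise, while the structure and
names are my own sketch:

    PAGE_SIZE = 1024
    trace = [('R', 1000), ('R', 5092), ('W', 513),  ('R', 6144),
             ('R', 8000), ('W', 2099), ('R', 57),   ('R', 3199),
             ('R', 6100), ('W', 3099), ('R', 1000), ('R', 5000),
             ('R', 3000)]

    frames    = [None] * 4     # page held by each of the 4 frames
    dirty     = [False] * 4    # the 'x' marks in the table above
    last_used = [0] * 4        # time of each frame's most recent use
    misses = 0

    for t, (op, addr) in enumerate(trace, start=1):
        page = addr // PAGE_SIZE
        if page in frames:                       # hit: page already mapped
            f = frames.index(page)
        else:                                    # miss: read page from disk
            misses += 1
            if None in frames:                   # fill a free frame first
                f = frames.index(None)
            else:
                # evict the LRU page; a dirty victim would first be
                # written back to disk
                f = last_used.index(min(last_used))
            frames[f] = page
            dirty[f] = False
        last_used[f] = t
        if op == 'W':
            dirty[f] = True
        print(t, op, addr, 'page', page,
              ['' if p is None else '%d%s' % (p, 'x' if d else '')
               for p, d in zip(frames, dirty)])

    hits = len(trace) - misses                   # hits: 4, misses: 9
    print('hits:', hits, 'misses:', misses,
          'hit ratio: %.0f%%' % (100.0 * hits / len(trace)))   # 31%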
-
A computer has 32-bit virtual addresses. Pages are 8K long. How many
bits are devoted to the page offset and how many bits are devoted to the
page number?
Page length is 8K = 8192, and log2(8192) = 13, so 13 bits are needed
for the offset within a page. This leaves 32 - 13 = 19 bits for the
actual page #. Thus there are a maximum of 2^19 (524,288) pages.
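A quick check of the bit split, as a Python sketch (names are mine):

    VA_BITS   = 32
    PAGE_SIZE = 8 * 1024

    offset_bits = PAGE_SIZE.bit_length() - 1        # log2(8192) = 13
    page_bits   = VA_BITS - offset_bits             # 19
    print(offset_bits, page_bits, 2 ** page_bits)   # 13 19 524288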
-
A computer has 2Kbyte pages and 4,194,304 bytes (4 megabytes) of real
memory. Virtual addresses are 28 bits long. If the page table does not
include disk addresses in its in-memory copy, how many bytes would the page
table take? Each page table entry must start on a byte -- it cannot start
in the middle of a byte. Don't forget any bits in the page table.
There are 2^17 (131,072) pages because 28 - 11 = 17, so there are
131,072 entries in the page table. Since real physical addresses can
only be 22 bits long (because there are 4 Mbytes = 2^22 bytes of real
memory), the frame number can only be 22 - 11 = 11 bits long.
So there are 13 bits in each page table entry: 11 for the frame #,
1 for the dirty bit, and 1 for the present/absent bit. Since each
entry of the page table must start on a byte boundary, we'll need
2 bytes (16 bits) per entry.
131072 x 2 = 262144 bytes
262144 / 1024 = 256 Kbytes
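The same computation as a Python sketch (constant names are mine):

    VA_BITS   = 28
    PAGE_SIZE = 2 * 1024               # 2K pages
    REAL_MEM  = 4 * 1024 * 1024        # 4 MB of physical memory

    offset_bits = PAGE_SIZE.bit_length() - 1     # 11
    phys_bits   = REAL_MEM.bit_length() - 1      # 22
    entries     = 2 ** (VA_BITS - offset_bits)   # 2^17 = 131072
    entry_bits  = (phys_bits - offset_bits) + 2  # frame # + dirty + present
    entry_bytes = (entry_bits + 7) // 8          # 13 bits round up to 2 bytes
    print(entries * entry_bytes)                 # 262144 bytes = 256K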
-
On average, each job leaves half of its last page unused, and there are
a maximum of 20 jobs in memory at any given time. Jobs have 40 pages in
their working set. Pages are 2Kbytes long and virtual addresses are 24 bits
long. What percentage of the memory is wasted due to internal
fragmentation?
20 jobs x 40 pages per job = 800 pages, and 800 pages x 2048 bytes =
1,638,400 bytes of memory in use.
Half of the last page of each job is wasted (unused), or 1024 bytes per
job. This gives 1024 bytes x 20 jobs = 20,480 bytes wasted in total.
20480 / 1638400 = 0.0125, so 1.25% of memory is wasted.
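The fragmentation arithmetic as a Python sketch (my own names):

    JOBS      = 20
    PAGES     = 40          # working-set pages per job
    PAGE_SIZE = 2048

    in_use = JOBS * PAGES * PAGE_SIZE      # 1638400 bytes
    wasted = JOBS * (PAGE_SIZE // 2)       # half a last page per job
    print('%.2f%% wasted' % (100.0 * wasted / in_use))   # 1.25% wasted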
-
Nowadays a memory read on a personal computer may take 60 nsec.
Suppose it takes 10 nsec to examine the TLB and extract the frame number if
the desired page number is there. How long would a virtual memory system
take to read a word from memory assuming that every page number were in the
TLB?
60 nsec to read from memory, 10 nsec to examine TLB
10 + 60 = 70 nsec to read from memory if every page # is already in
TLB.
-
Continuing question 9 above, how long would a virtual memory system
take to read a word on average if 90% of the time, the desired page number
was already in the TLB? Take into account that if the desired page number
is missed from the cache, another memory access must be done (at 60 nsec per
access) to get the frame number out of the page table. Assume that the TLB
is always searched first, not in parallel with the page table in memory.
.90 x (10 + 60) page # already in TLB
+ .10 x (10 + 60 + 60) page # not in TLB, 2 mem reads
-----------------------
76 nsec
-
Redo question 10 above, using 95% instead of 90%. How much faster is
the average access time?
.95 x (10 + 60) page # already in TLB
+ .05 x (10 + 60 + 60) page # not in TLB, 2 mem reads
-----------------------
73 nsec
76 - 73 = 3 nsec saved, and 3/76 = .039, or about 4%.
Under the 95% hit ratio assumption, memory reads are on
average 4% faster.
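Questions 9 through 11 are all instances of the same effective access
time formula. A small Python sketch (the function name is mine) that
reproduces the 70, 76, and 73 nsec answers:

    MEM = 60   # nsec per memory read
    TLB = 10   # nsec per TLB lookup

    def avg_access(hit_ratio):
        hit  = TLB + MEM          # frame # found in the TLB
        miss = TLB + MEM + MEM    # extra read to fetch the page table entry
        return hit_ratio * hit + (1 - hit_ratio) * miss

    for r in (1.00, 0.90, 0.95):
        print('%.0f%% hits: %g nsec' % (100 * r, avg_access(r)))
    # prints: 100% hits: 70 nsec, 90% hits: 76 nsec, 95% hits: 73 nsec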