Problem 1


From Tanenbaum, Chapter 3: if a process has a probability p of being idle while waiting for I/O, then N processes have a probability of p^N of all being idle at the same time. Here we have 4 processes (N = 4), and each is idle an estimated 1/2 of the time (p = 0.5). Thus, combined, 0.5^4 = 6.25% of the CPU time is wasted with all processes waiting for I/O.
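
As a quick check, here is a minimal sketch of the calculation in Python (the function name is illustrative, not from the text):

    def wasted_cpu_fraction(p, n):
        # Probability that all n processes are idle at once, each idle with probability p
        return p ** n

    print(wasted_cpu_fraction(0.5, 4))  # 0.0625, i.e. 6.25% of the CPU time wasted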

Problem 2


Each line below shows a request and the hole in which it is placed (request -> hole); a short simulation sketch follows the lists.

First Fit
12K -> 20K
10K -> 10K
9K -> 18K

Best Fit
12K -> 12K
10K -> 10K
9K -> 9K

Worst Fit
12K -> 20K
10K -> 18K
9K -> 15K

Next Fit
12K -> 20K
10K -> 18K
9K -> 9K
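
The choices above can be reproduced with a short simulation. The sketch below assumes the hole list from the textbook problem (10K, 4K, 20K, 18K, 7K, 9K, 12K, and 15K, in memory order); the function names are my own.

    # A minimal sketch of the four placement policies. The hole sizes below are an
    # assumption taken from the textbook problem (KB, in memory order); each chosen
    # hole shrinks by the request size so later requests see the remainder.
    HOLES = [10, 4, 20, 18, 7, 9, 12, 15]

    def first_fit(holes, req):
        # take the first hole big enough
        for i, size in enumerate(holes):
            if size >= req:
                return i

    def best_fit(holes, req):
        # take the smallest hole big enough
        fits = [i for i, size in enumerate(holes) if size >= req]
        return min(fits, key=lambda i: holes[i])

    def worst_fit(holes, req):
        # take the largest hole
        fits = [i for i, size in enumerate(holes) if size >= req]
        return max(fits, key=lambda i: holes[i])

    def make_next_fit():
        last = [0]  # where the previous search stopped
        def next_fit(holes, req):
            for step in range(len(holes)):
                i = (last[0] + step) % len(holes)
                if holes[i] >= req:
                    last[0] = i
                    return i
        return next_fit

    def run(name, choose):
        holes = HOLES[:]  # fresh copy of the hole list for each policy
        print(name)
        for req in (12, 10, 9):
            i = choose(holes, req)
            print(f"  {req}K -> {HOLES[i]}K")  # report the hole by its original size
            holes[i] -= req                    # the hole shrinks to its remainder

    run("First Fit", first_fit)
    run("Best Fit", best_fit)
    run("Worst Fit", worst_fit)
    run("Next Fit", make_next_fit())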

Problem 6


If a page is unmapped when a process references it, the CPU traps to the OS. The OS then swaps a little-used page out of memory and brings in the requested page. With a page fault every k instructions, each run of k instructions executes successfully and takes k microsec (the kth instruction is the one that completes once the fault is serviced), while servicing the fault wastes an additional n microsec. Thus k out of every (k + n) microsec are spent on useful instructions; equivalently, the effective instruction time is (k + n)/k = 1 + n/k microsec.
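
A minimal sketch of the arithmetic, assuming each instruction takes 1 microsec as above (the values of k and n in the example are arbitrary):

    def useful_fraction(k, n):
        # Fraction of time spent executing instructions when a page fault
        # costing n microsec occurs every k instructions (1 microsec each)
        return k / (k + n)

    def effective_instruction_time(k, n):
        # Average microsec per instruction, including page fault overhead
        return 1 + n / k

    print(useful_fraction(1000, 500))             # 0.666..., example values
    print(effective_instruction_time(1000, 500))  # 1.5 microsec per instruction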

Problem 7


From page 323, we see that a 32-bit address space with 4K pages needs ~1 million page table entries. With 8K pages each entry maps twice as much memory, so only ~500,000 entries (2^32 / 2^13 = 524,288) are needed. Since each entry takes 100 nanoseconds (100/1,000,000,000 seconds) to copy, we multiply these together to get the total page table copy time:
500,000 * 100 / 1,000,000,000 = 0.05 seconds
The book's notation is worth spelling out here: microseconds (1/1,000,000 second) are written microsec, while milliseconds (1/1,000 second) are written msec.
Thus, when it says that each process runs for 100 msec, it means each process runs for 0.1 second. Loading the page table takes 0.05 seconds, so it consumes 0.05 / 0.1 = 50% of each process's time.
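
A quick check of the numbers (a sketch using the approximate entry count from above):

    ENTRIES = 500_000       # ~2**32 / 2**13 entries (exactly 524,288) with 8K pages
    COPY_NS_PER_ENTRY = 100 # 100 nanoseconds to copy one entry
    PROCESS_RUN_S = 0.100   # each process runs for 100 msec = 0.1 second

    load_time_s = ENTRIES * COPY_NS_PER_ENTRY / 1e9
    print(load_time_s)                  # 0.05 seconds to load the page table
    print(load_time_s / PROCESS_RUN_S)  # 0.5, i.e. 50% of each process's time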

Problem 11


What we are looking for here is an average, so set up the problem with x hits to the page table (TLB misses) and y hits to the TLB. The problem then looks like this:
(500x + 100y) / (x + y) = 200
500x + 100y = 200x + 200y
300x = 100y
3x = y

From this result, we see that there must be 3 times as many hits to the TLB as to the page table, i.e. a TLB hit rate of 75%, in order to reduce the average overhead to 200 nsec.
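
A minimal check of the result (names are illustrative):

    def mean_overhead(tlb_hit_rate, tlb_ns=100, page_table_ns=500):
        # Average per-reference overhead for a given TLB hit rate
        return tlb_hit_rate * tlb_ns + (1 - tlb_hit_rate) * page_table_ns

    print(mean_overhead(0.75))  # 200.0 nsec, i.e. 3 TLB hits per page table lookup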

Problem 14


Page   Loaded   Last ref.   R   M
 0      126       279       0   0
 1      230       260       1   0
 2      120       272       1   1
 3      160       280       1   1
A) NRU will remove page 0, as it is the only page in class 0 (R = 0, M = 0). If there were more than one page in class 0, NRU would remove one of them at random.
B) FIFO will remove page 2, as it was loaded (at time 120) before any of the others.
C) LRU will remove page 1, as it has gone the longest without being referenced (last reference at 260).
D) Second chance will remove page 0, as it is the oldest page whose R bit is 0 (page 2, though older, has R = 1 and so gets a second chance).
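
A small sketch that applies each policy to the table above. This is a simplified selection over a static snapshot, not a full paging simulator (for example, second chance does not bother clearing R bits, since only the first victim is needed):

    # page: (loaded, last_ref, R, M)
    pages = {
        0: (126, 279, 0, 0),
        1: (230, 260, 1, 0),
        2: (120, 272, 1, 1),
        3: (160, 280, 1, 1),
    }

    def nru(pages):
        # Lowest class wins; class = 2*R + M. Ties would be broken at random.
        return min(pages, key=lambda p: 2 * pages[p][2] + pages[p][3])

    def fifo(pages):
        # Oldest load time is evicted first
        return min(pages, key=lambda p: pages[p][0])

    def lru(pages):
        # Oldest last-reference time is evicted first
        return min(pages, key=lambda p: pages[p][1])

    def second_chance(pages):
        # Scan in FIFO order; the first page with R == 0 is evicted
        for p in sorted(pages, key=lambda p: pages[p][0]):
            if pages[p][2] == 0:
                return p
        return min(pages, key=lambda p: pages[p][0])  # all R bits set: fall back to FIFO

    print(nru(pages), fifo(pages), lru(pages), second_chance(pages))  # 0 2 1 0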

Problem 21


What the guru is suggesting amounts to allowing (or expecting) a large number of page faults. When the operating system needs to create text for output, it references the expected page, which trips a page fault and forces the OS to read the page from disk. It is easy to see that this wastes a great number of clock cycles, which we can estimate with a couple of assumptions. Take the average seek time of a disk drive to be 10 milliseconds and the access time of memory to be 60 nanoseconds. There are 1,000,000 nanoseconds in 1 millisecond, so the 10 millisecond seek time of a hard drive corresponds to 10,000,000 nanoseconds, compared to 60 nanoseconds for memory. The disk is therefore roughly 166,000 times slower than memory. Another way of saying it is that memory would use only about 0.0006 percent of the time used by the disk.
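
A one-line check of the ratio, using the assumed 10 ms seek time and 60 ns memory access:

    DISK_SEEK_NS = 10 * 1_000_000  # 10 milliseconds expressed in nanoseconds
    MEMORY_ACCESS_NS = 60          # assumed memory access time

    print(DISK_SEEK_NS / MEMORY_ACCESS_NS)        # ~166,667: disk is ~166,000x slower
    print(100 * MEMORY_ACCESS_NS / DISK_SEEK_NS)  # ~0.0006 percent of the disk time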


If you have any questions on these solutions, send me an e-mail at: s-hillbrand@uchicago.edu