Operating Systems: Three Easy Pieces, Part 1 (Virtualization)


Homework code: http://pages.cs.wisc.edu/~remzi/OSTEP/

http://pages.cs.wisc.edu/~remzi/OSTEP/Homework/homework.html

Projects: https://github.com/remzi-arpacidusseau/ostep-projects

reading the famous papers of our field is certainly one of the best ways to learn.

 

the real point of education is to get you interested in something, to learn something more about the subject matter on your own and not just what you have to digest to get a good grade in some class.

 

We created these notes to spark your interest in operating systems, to read more about the topic on your own, to talk to your professor about all the exciting research that is going on in the field, and even to get involved with that research. It is a great field(!), full of exciting and wonderful ideas that have shaped computing history in profound and important ways. And while we understand this fire won’t light for all of you, we hope it does for many, or even a few. Because once that fire is lit, well, that is when you truly become capable of doing something great. And thus the real point of the educational process: to go forth, to study many new and fascinating topics, to learn, to mature, and most importantly, to find something that lights a fire for you.

File systems:

The file system is the part of the OS in charge of managing persistent data

For example, when writing a C program, you might first use an editor (e.g., Emacs) to create and edit the C file (emacs -nw main.c). Once done, you might use the compiler to turn the source code into an executable (e.g., gcc -o main main.c). When you’re finished, you might run the new executable (e.g., ./main). Thus, you can see how files are shared across different processes. First, Emacs creates a file that serves as input to the compiler; the compiler uses that input file to create a new executable file (in many steps – take a compiler course for details); finally, the new executable is then run.

 

 You might be wondering what the OS does in order to actually write to disk.

The OS abstracts away the device drivers.

The file system has to do a fair bit of work: first figuring out where on disk this new data will reside, and then keeping track of it in various structures the file system maintains. Doing so requires issuing I/O requests to the underlying storage device, to either read existing structures or update (write) them. As anyone who has written a device driver knows, getting a device to do something on your behalf is an intricate and detailed process. It requires a deep knowledge of the low-level device interface and its exact semantics. Fortunately, the OS provides a standard and simple way to access devices through its system calls. Thus, the OS is sometimes seen as a standard library.
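To make the "standard library" point concrete, here is a minimal sketch (my own example, assuming a POSIX system; the file name is made up) of a program that persists data purely through system calls, without knowing anything about the underlying device:

// Minimal sketch (POSIX assumed): the program never touches the disk driver;
// it only issues open()/write()/close() system calls and lets the OS and the
// file system decide where the bytes land on the device.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello, persistent world\n";
    // O_CREAT: create the file if it doesn't exist; 0644: rw-r--r--
    int fd = open("/tmp/ostep-demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }
    // fsync() asks the OS to actually push the data down to the storage device.
    if (fsync(fd) < 0) perror("fsync");
    close(fd);
    return 0;
}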

File systems employ many different data structures and access methods, from simple lists to complex B-trees.

Our goals:

One of the most basic goals is to build up some abstractions in order to make the system convenient and easy to use. (Abstractions are fundamental to everything we do in computer science.)

Another way to say this is that our goal is to minimize the overheads of the OS.

These overheads arise in a number of forms: extra time (more instructions) and extra space (in memory or on disk).

 

Now I'm starting to see why processes are so important.

From program to process:

To truly understand how lazy loading of pieces of code and data works, you’ll have to understand more about the machinery of paging and swapping, topics we’ll cover in the future when we discuss the virtualization of memory. For now, just remember that before running anything, the OS clearly must do some work to get the important program bits from disk into memory

Before a program becomes a process, the OS allocates memory for its run-time stack. The OS may also initialize the stack with arguments; specifically, it fills in the parameters to the main() function, i.e., argc and the argv array.

Once the code and static data are loaded into memory, there are a few other things the OS needs to do before running the process. Some memory must be allocated for the program’s run-time stack (or just stack). As you should likely already know, C programs use the stack for local variables, function parameters, and return addresses; the OS allocates this memory and gives it to the process. The OS will also likely initialize the stack with arguments; specifically, it will fill in the parameters to the main() function, i.e., argc and the argv array.

The OS may also allocate memory for the heap:

The OS may also create some initial memory for the program’s heap. In C programs, the heap is used for explicitly requested dynamically-allocated data;

In practice: programs request such space by calling malloc() and free it explicitly by calling free(). The heap is needed for data structures such as linked lists, hash tables, trees, and other interesting data structures. The heap will be small at first; as the program runs, and requests more memory via the malloc() library API, the OS may get involved and allocate more memory to the process to help satisfy such calls.
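As a small illustration of the API described above (my own sketch, not code from the book): the local variable n lives on the stack, while the array obtained from malloc() lives on the heap and must be released with free().

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 10;                       // local variable: lives on the stack
    int *a = malloc(n * sizeof(int)); // n ints: explicitly requested heap memory
    if (a == NULL) { perror("malloc"); return 1; }
    for (int i = 0; i < n; i++)
        a[i] = i * i;
    printf("a[9] = %d (heap address %p)\n", a[9], (void *)a);
    free(a);                          // return the memory to the allocator
    return 0;
}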

In UNIX systems, each process by default has three open file descriptors, for standard input, output, and error.

Contents of the PCB (process control block):

 

 

 

(I didn't fully understand this explanation at first.) Now the interesting part begins. The process calls the fork() system call, which the OS provides as a way to create a new process. The odd part: the process that is created is an (almost) exact copy of the calling process. That means that to the OS, it now looks like there are two copies of the program p1 running, and both are about to return from the fork() system call. The newly-created process (called the child, in contrast to the creating parent) doesn’t start running at main(), like you might expect (note, the “hello, world” message only got printed out once); rather, it just comes into life as if it had called fork() itself. You might have noticed: the child isn’t an exact copy. Specifically, although it now has its own copy of the address space (i.e., its own private memory), its own registers, its own PC, and so forth, the value it returns to the caller of fork() is different. Specifically, while the parent receives the PID of the newly-created child, the child is simply returned a 0. This differentiation is useful, because it is simple then to write the code that handles the two different cases (as above).
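A minimal example in the spirit of the chapter's p1.c (my own sketch) makes the two return values visible: the parent receives the child's PID, the child receives 0, and "hello, world" is printed only once.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    printf("hello, world (pid:%d)\n", (int)getpid()); // printed exactly once
    int rc = fork();                 // create an (almost) exact copy of this process
    if (rc < 0) {                    // fork failed
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {            // child: fork() returned 0
        printf("child (pid:%d)\n", (int)getpid());
    } else {                         // parent: fork() returned the child's PID
        printf("parent of %d (pid:%d)\n", rc, (int)getpid());
    }
    return 0;
}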

 

(I still don't quite understand what the exec() system call does. In short: exec() loads a new program into the current process, overwriting its code and static data; on success it never returns, and the process simply continues as the new program.)

 

How the shell works (the key is the fork()/exec() combination):

The shell is just a user program. It shows you a prompt and then waits for you to type something into it. You then type a command (i.e., the name of an executable program, plus any arguments) into it; in most cases, the shell then figures out where in the file system the executable resides, calls fork() to create a new child process to run the command, calls some variant of exec() to run the command, and then waits for the command to complete by calling wait(). When the child completes, the shell returns from wait() and prints out a prompt again, ready for your next command.

For now, suffice it to say that the fork()/exec() combination is a powerful way to create and manipulate processes.
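A toy sketch of that loop (my own code, assuming POSIX; a real shell also parses arguments and handles built-ins, pipes, and redirection):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    for (;;) {
        printf("tinysh> ");
        fflush(stdout);                               // make sure the prompt shows up
        if (fgets(line, sizeof(line), stdin) == NULL) // EOF (e.g., Ctrl-D): exit
            break;
        line[strcspn(line, "\n")] = '\0';             // strip the trailing newline
        if (line[0] == '\0')                          // empty command: prompt again
            continue;
        pid_t pid = fork();                           // child will run the command
        if (pid < 0) { perror("fork"); continue; }
        if (pid == 0) {                               // child: become the command
            char *argv[] = { line, NULL };            // no argument parsing in this toy
            execvp(argv[0], argv);                    // returns only on failure
            perror("execvp");
            exit(1);
        }
        waitpid(pid, NULL, 0);                        // parent: wait for completion
    }
    return 0;
}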

Spending some time reading man pages is a key step in the growth of a systems programmer.

 

Uniprocessor: the CPU runs only one program at a time.

 

How does the OS implement time sharing?

we’ll also see where the “limited” part of the name arises from; without limits on running programs, the OS wouldn’t be in control of anything and thus would be “just a library” – a very sad state of affairs for an aspiring operating system!

What the OS does at boot time:

The kernel does so by setting up a trap table at boot time. When the machine boots up, it does so in privileged (kernel) mode, and thus is free to configure machine hardware as need be. One of the first things the OS thus does is to tell the hardware what code to run when certain exceptional events occur. For example, what code should run when a hard-disk interrupt takes place, when a keyboard interrupt occurs, or when a program makes a system call? The OS informs the hardware of the locations of these trap handlers, usually with some kind of special instruction. Once the hardware is informed, it remembers the location of these handlers until the machine is next rebooted, and thus the hardware knows what to do (i.e., what code to jump to) when system calls and other exceptional events take place.

 

How the OS switches processes:

The crux: while a process is running on the CPU, the OS is not running (only one program can be running on the CPU at any moment). So how does the OS switch between processes?

 

One approach that some systems have taken in the past is known as the cooperative approach. In this style, the OS trusts the processes of the system to behave reasonably. Processes that run for too long are assumed to periodically give up the CPU so that the OS can decide to run some other task.

 

Most processes, as it turns out, transfer control of the CPU to the OS quite frequently by making system calls

Thus, in a cooperative scheduling system, the OS regains control of the CPU by waiting for a system call or an illegal operation of some kind to take place

You might wonder: what if a program gets stuck in an infinite loop?

You might also be thinking: isn’t this passive approach less than ideal? What happens, for example, if a process (whether malicious, or just full of bugs) ends up in an infinite loop, and never makes a system call? What can the OS do then?

 


 

2019-10-26

00:09:20

Chapter 4 homework:

 

 

Problem 1:

 

 

 

 

 

CPU utilization is 100%.


Problem 2:

 

 

 

CPU utilization is 56%. Completing both processes takes 10 time units.

Problem 3:

 

 

 

Observation: CPU utilization reached 100%. Swapping the order really matters: it raised CPU utilization from 56% to 100%.

Problem 4:

 

If SWITCH_ON_END is set, the system does not switch to another process while one is doing I/O, so the CPU sits idle and resources are wasted.


 

Problem 5:

 

If SWITCH_ON_IO is set, the system switches to another process when one starts I/O, so the CPU is put to good use.


 

 

 

 

The CPU fetches and executes; the first instruction fetched issues an I/O, so the process switches to I/O and the CPU sits idle waiting for the I/O to finish; then it fetches and executes again, and this repeats twice more.

So when a single process does I/O, the CPU is badly wasted, which is why we introduce multiple processes.

Problem 7:

Same result as Problem 6.

Problem 8:

 

 

 

 

 

 


Chapter 7, Process Scheduling: homework

 

 

 

 

 

 

Problem 1:

FIFO

 

 

 

 SJF

 

 

 

Problem 2:

FIFO:

 

SJF:

 

 

Problem 3:

 

 

 

 

 

Observation: RR's average turnaround time and average waiting time are longer than SJF's and FIFO's, but its average response time is far better than either.

Problem 4:

From Problems 1 and 2 we can see cases where SJF and FIFO have the same turnaround time.

Now consider the following example:

 

 

 

 

We can see that here SJF beats FIFO: its turnaround time is shorter.

Conclusion: SJF and FIFO have the same turnaround time only when the jobs arrive in order A, B, C, ... with lengths A <= B <= C <= ..., i.e., in non-decreasing order of length.
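A quick arithmetic check of that conclusion (my own worked example, not from the book): three jobs of lengths 100, 200, and 300 all arrive at time 0. If they arrive longest-first, FIFO is clearly worse than SJF; only when they arrive shortest-first do the two match.

#include <stdio.h>

// Average turnaround time when jobs run back to back in the given order;
// all jobs are assumed to arrive at time 0.
static double avg_turnaround(const int len[], int n) {
    double finish = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        finish += len[i];   // job i finishes once all earlier jobs and itself are done
        total += finish;    // turnaround = finish time - arrival time (0)
    }
    return total / n;
}

int main(void) {
    int arrival_order[] = { 300, 200, 100 };  // FIFO runs them in arrival order
    int sjf_order[]     = { 100, 200, 300 };  // SJF always runs the shortest first
    printf("FIFO average turnaround: %.2f\n", avg_turnaround(arrival_order, 3)); // 466.67
    printf("SJF  average turnaround: %.2f\n", avg_turnaround(sjf_order, 3));     // 333.33
    // If the jobs had already arrived shortest-first, the two orders (and the two
    // averages) would be identical, which is exactly the conclusion above.
    return 0;
}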

Problem 5:

Noted in the book.

Problem 6:

The run times are 500, 400, and 300.

 

We can see that the response time increases.

Problem 7:

Increase the time-slice length to 10 (from 1 to 10, in contrast with Problem 3).

 

 

 

We can see that the average response time went from 1 to 10, the average turnaround time did not change, and the average waiting time dropped from 265.67 to 256.67.


Problem 7 (continued):

Test: when the time-slice length is greater than every job's length, RR degenerates into FIFO and its response-time behavior gets much worse.

RR:

 

 FIFO:

 

So the equation is: with N jobs, the worst-case response time occurs when the RR time slice is >= MAX(R1, R2, ..., RN), i.e., at least as long as the longest job.

 


Chapter 8

Problem 1

 

 

 

Setting the time slice to 20, we can see:

 

 

 


Problem 3:

 

 

 

To make the MLFQ scheduler behave like round robin, just keep all the jobs in a single priority queue.

 

Problem 4: (couldn't solve it)

--jlist 0,180,0 : 50,150,50, -q 10 -S -i 1 -I (couldn't solve it)

Problem 5:


 

Lottery scheduling:

 

It is also applied in distributed settings.
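A minimal sketch of the core lottery decision described in the chapter (my own code; the job names and ticket counts are made up): count the total tickets, draw a random winning ticket, and walk the job list until the running counter passes the winner.

#include <stdio.h>
#include <stdlib.h>

struct job { const char *name; int tickets; };

// Pick the next job to run: draw a random winning ticket and walk the list
// until the cumulative ticket count passes it.
static int pick_winner(const struct job jobs[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) total += jobs[i].tickets;
    int winner = rand() % total;          // winning ticket in [0, total)
    int counter = 0;
    for (int i = 0; i < n; i++) {
        counter += jobs[i].tickets;
        if (counter > winner) return i;   // this job holds the winning ticket
    }
    return n - 1;                         // unreachable if all ticket counts are positive
}

int main(void) {
    struct job jobs[] = { {"A", 100}, {"B", 50}, {"C", 250} };
    int runs[3] = {0, 0, 0};
    srand(0);
    for (int t = 0; t < 10000; t++)       // simulate 10000 scheduling decisions
        runs[pick_winner(jobs, 3)]++;
    // Expect roughly 25% / 12.5% / 62.5%, proportional to the ticket shares.
    for (int i = 0; i < 3; i++)
        printf("%s ran %d times\n", jobs[i].name, runs[i]);
    return 0;
}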

Homework:

Problem 1:

 

 

 

 

 


Problem 2:

 

 

 

 

 

When the ticket counts are this imbalanced, starvation occurs: before job 1 finishes, job 0 has almost no chance (roughly 1%) of completing.


 

Problem 3:

Run 1: job 0 finishes at 192, job 1 at 200.
Run 2: job 0 finishes at 190, job 1 at 200.
Run 3: job 0 finishes at 200, job 1 at 196.
Run 4: job 0 finishes at 197, job 1 at 200.

So the scheduler is reasonably fair: the finish times differ by less than 10 in every run.


Problem 4:

With -q 2 (quantum of 2), the unfairness shrinks.

With -q 3, the unfairness grows.

 

With -q 4, the unfairness grows.

 

 

 

With -q 5, the unfairness grows.

 

 


Chapter 13:

 

 

 

Every address the program sees is a virtual address.
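This is easy to verify with a tiny program along the lines of the chapter's va.c homework (a sketch here): it prints the locations of code, heap, and stack, and all of them are virtual addresses that the OS and hardware later translate.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int x = 3;                                  // lives on the stack
    int *heap = malloc(sizeof(int));            // lives on the heap
    printf("location of code : %p\n", (void *)main);
    printf("location of heap : %p\n", (void *)heap);
    printf("location of stack: %p\n", (void *)&x);
    free(heap);
    return 0;
}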


Chapter 14:

Forgetting to allocate memory:
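The classic bug from this chapter is using a pointer as a destination without allocating memory behind it. A small sketch of the broken and fixed versions (the strings are just illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *src = "hello";

    // Broken: dst points nowhere in particular, so strcpy writes through an
    // uninitialized pointer (often a segfault, always undefined behavior).
    // char *dst;
    // strcpy(dst, src);

    // Fixed: allocate room for the string plus the terminating '\0' first.
    char *dst = malloc(strlen(src) + 1);
    if (dst == NULL) { perror("malloc"); return 1; }
    strcpy(dst, src);
    printf("%s\n", dst);
    free(dst);
    return 0;
}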

 

 

 

 


 

Chapter 15 homework:

Problem 1:

 

Problem 2:

 

We can see that, to ensure all generated virtual addresses lie within bounds, it suffices to set the bounds register to 1000.
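The simulator is just applying the base-and-bounds rule from the chapter. A small sketch of the translation logic (the base value here is made up for illustration; the bounds of 1000 matches the answer above):

#include <stdbool.h>
#include <stdio.h>

// Base-and-bounds translation: a virtual address is valid only if it is less
// than the bounds register; if so, the physical address is base + virtual.
static const unsigned int BASE   = 16384;   // example base, made up
static const unsigned int BOUNDS = 1000;    // matches the answer above

static bool translate(unsigned int vaddr, unsigned int *paddr) {
    if (vaddr >= BOUNDS)
        return false;              // out of bounds: the hardware raises a fault
    *paddr = BASE + vaddr;
    return true;
}

int main(void) {
    unsigned int tests[] = { 0, 999, 1000 };
    for (int i = 0; i < 3; i++) {
        unsigned int p;
        if (translate(tests[i], &p))
            printf("VA %4u -> PA %u\n", tests[i], p);
        else
            printf("VA %4u -> SEGMENTATION VIOLATION\n", tests[i]);
    }
    return 0;
}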

 

 

 


Chapter 16:

 

 

 

Problem 1:

 

 

 

 

 


Chapter 17:

Reference solutions: https://github.com/ahmedbilal/OSTEP-Solution/tree/master/Chapter%2017

 

Problem 1:

 

 

First run with the flags -n 10 -H 0 -p BEST -s 0 to generate a few random allocations and frees. Can you predict what alloc()/free() will return? Can you guess the state of the free list after each request? What do you notice about the free list over time?

Yes, it is predictable if we are told the following things in advance:

  1. Base Address = 1000
  2. Heap Size = 100

The free list becomes congested over time, filling with smaller and smaller free blocks.

Output

ptr[0] = Alloc(3) returned 1000 (searched 1 elements) Free List [ Size 1 ]: [ addr:1003 sz:97 ]

Free(ptr[0]) returned 0 Free List [ Size 2 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:97 ]

ptr[1] = Alloc(5) returned 1003 (searched 2 elements) Free List [ Size 2 ]: [ addr:1000 sz:3 ] [ addr:1008 sz:92 ]

Free(ptr[1]) returned 0 Free List [ Size 3 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:5 ] [ addr:1008 sz:92 ]

ptr[2] = Alloc(8) returned 1008 (searched 3 elements) Free List [ Size 3 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:5 ] [ addr:1016 sz:84 ]

Free(ptr[2]) returned 0 Free List [ Size 4 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:5 ] [ addr:1008 sz:8 ] [ addr:1016 sz:84 ]

ptr[3] = Alloc(8) returned 1008 (searched 4 elements) Free List [ Size 3 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:5 ] [ addr:1016 sz:84 ]

Free(ptr[3]) returned 0 Free List [ Size 4 ]: [ addr:1000 sz:3 ] [ addr:1003 sz:5 ] [ addr:1008 sz:8 ] [ addr:1016 sz:84 ]

ptr[4] = Alloc(2) returned 1000 (searched 4 elements) Free List [ Size 4 ]: [ addr:1002 sz:1 ] [ addr:1003 sz:5 ] [ addr:1008 sz:8 ] [ addr:1016 sz:84 ]

ptr[5] = Alloc(7) returned 1008 (searched 4 elements) Free List [ Size 4 ]: [ addr:1002 sz:1 ] [ addr:1003 sz:5 ] [ addr:1015 sz:1 ] [ addr:1016 sz:84 ]

 

The free list ends up split into many small segments.

Problem 2:

 

 

We can see that worst fit produces even more external fragmentation than best fit.

The book says most studies show that it performs quite badly, leading to excess fragmentation.

How are the results different when using a WORST fit policy to search the free list (-p WORST)? What changes?

The worst-fit policy splits and uses the largest free block, so the small leftover blocks go unused until the larger free blocks have been split up and consumed.

Output

ptr[0] = Alloc(3) returned 1000 List? [addr:1003, size:97]

Free(ptr[0]) returned 0 List? [(addr:1000, size:3), (addr:1003, size:97)]

ptr[1] = Alloc(5) returned 1003 List? [(addr:1000, size:3), (addr:1008, size:92)]

Free(ptr[1]) returned 0 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:92)]

ptr[2] = Alloc(8) returned 1008 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1016, size:84)]

Free(ptr[2]) returned 0 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:8), (addr:1016, size:84)]

ptr[3] = Alloc(8) returned 1016 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:8), (addr:1024, size:76)]

Free(ptr[3]) returned 0 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:8), (addr:1016, size:8)(addr:1024, size:76)]

ptr[4] = Alloc(2) returned 1024 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:8), (addr:1016, size:8)(addr:1026, size:74)]

ptr[5] = Alloc(7) returned 1026 List? [(addr:1000, size:3), (addr:1003, size:5), (addr:1008, size:8), (addr:1016, size:8)(addr:1033, size:67)]

Problem 3:

 

 

What about when using FIRST fit (-p FIRST)? What speeds up when you use first fit?

The search for a big-enough free slot speeds up with first fit, because we don't have to examine all available slots; we simply allocate from the first block on the list that satisfies the request.
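A sketch of why this is faster (my own illustration, not the simulator's code): first fit stops at the first block that is large enough, whereas best and worst fit must walk the whole list. The free-list contents below are taken from the simulator output shown earlier.

#include <stdio.h>

struct free_block { int addr; int size; };

// First fit: return the index of the first block with size >= request,
// counting how many list elements were examined along the way.
static int first_fit(const struct free_block list[], int n, int request, int *examined) {
    for (int i = 0; i < n; i++) {
        (*examined)++;
        if (list[i].size >= request)
            return i;
    }
    return -1;                       // no block is big enough
}

int main(void) {
    // A fragmented free list like the one the simulator printed above.
    struct free_block list[] = { {1000, 3}, {1003, 5}, {1008, 8}, {1016, 84} };
    int examined = 0;
    int idx = first_fit(list, 4, 7, &examined);
    if (idx >= 0)
        printf("allocate 7 bytes at addr %d after examining %d elements\n",
               list[idx].addr, examined);
    return 0;
}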

Problem 4:

For the above questions, how the list is kept ordered can affect the time it takes to find a free location for some of the policies. Use the different free list orderings (-l ADDRSORT, -l SIZESORT+, -l SIZESORT-) to see how the policies and the list orderings interact.

Time is unaffected for both best fit and worst fit, because they must scan the whole list regardless of ordering. With first fit, however, only SIZESORT- (descending order) reduces the time to find a free location, because first fit picks the first block that is large enough; if the largest free block sits at the head of the list, every allocation can be satisfied from that block.

Descending order (SIZESORT-):                                    Ascending order (SIZESORT+):

 

 

 

Question 5

Coalescing of a free list can be quite important. Increase the number of random allocations (say to -n 1000). What happens to larger allocation requests over time? Run with and without coalescing (i.e., without and with the -C flag). What differences in outcome do you see? How big is the free list over time in each case? Does the ordering of the list matter in this case?

Without coalescing, larger allocation requests eventually fail, because the free list fills up with many small free blocks; this congestion also increases lookup time for nearly all of the search policies. The ordering of the list matters only for the first-fit policy.

 

Question 6

What happens when you change the percent allocated fraction -P to higher than 50? What happens to allocations as it nears 100? What about as it nears 0?

Then most heap operations would be allocations. As the allocated fraction approaches 100, nearly all operations become allocations. As it approaches 0, allocations and frees each settle at about 50% of the operations, because we cannot free memory that has not been allocated; to free memory, we must allocate some first.

Question 7

What kind of specific requests can you make to generate a highly fragmented free space? Use the -A flag to create fragmented free lists, and see how different policies and options change the organization of the free list.

./malloc.py -n 6 -A +1,-0,+2,-1,+3,-2 -c
 
         
第18章:

Question 1

Before doing any translations, let’s use the simulator to study how linear page tables change size given different parameters. Compute the size of linear page tables as different parameters change. Some suggested inputs are below; by using the -v flag, you can see how many page-table entries are filled. First, to understand how linear page table size changes as the address space grows:

paging-linear-translate.py -P 1k -a 1m -p 512m -v -n 0
paging-linear-translate.py -P 1k -a 2m -p 512m -v -n 0

 

paging-linear-translate.py -P 1k -a 4m -p 512m -v -n 0

 

 

Then, to understand how linear page table size changes as page size grows:

paging-linear-translate.py -P 1k -a 1m -p 512m -v -n 0
paging-linear-translate.py -P 2k -a 1m -p 512m -v -n 0

 

paging-linear-translate.py -P 4k -a 1m -p 512m -v -n 0

 

 

Pages should not be too large: most processes don't use much memory, and handing each process very large pages would waste it.

Before running any of these, try to think about the expected trends. How should page-table size change as the address space grows? As the page size grows? Why shouldn’t we just use really big pages in general?

Page-table size increases as the address space grows, because more pages are needed to cover the whole address space. When the page size increases, the page-table size decreases, because fewer (larger) pages are needed to cover the same address space.

We should not use really big pages in general because that would waste a lot of memory; most processes use very little of it.
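The underlying arithmetic is simple: a linear page table needs one entry per virtual page, so its size is (address-space size / page size) * PTE size. A small sketch of that calculation (the 4-byte PTE size is my assumption for illustration):

#include <stdio.h>

// Linear page table size = (address space / page size) * bytes per PTE.
static long page_table_bytes(long addr_space, long page_size, long pte_bytes) {
    long num_pages = addr_space / page_size;
    return num_pages * pte_bytes;
}

int main(void) {
    long KB = 1024, MB = 1024 * 1024;
    // Growing the address space (1k pages): the table doubles each time.
    printf("1m AS, 1k pages: %ld KB\n", page_table_bytes(1 * MB, 1 * KB, 4) / KB); // 4 KB
    printf("2m AS, 1k pages: %ld KB\n", page_table_bytes(2 * MB, 1 * KB, 4) / KB); // 8 KB
    printf("4m AS, 1k pages: %ld KB\n", page_table_bytes(4 * MB, 1 * KB, 4) / KB); // 16 KB
    // Growing the page size (1m address space): the table shrinks.
    printf("1m AS, 2k pages: %ld KB\n", page_table_bytes(1 * MB, 2 * KB, 4) / KB); // 2 KB
    printf("1m AS, 4k pages: %ld KB\n", page_table_bytes(1 * MB, 4 * KB, 4) / KB); // 1 KB
    return 0;
}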


 

Question 2

Now let’s do some translations. Start with some small examples, and change the number of pages that are allocated to the address space with the -u flag . For example:

paging-linear-translate.py -P 1k -a 16k -p 32k -v -u 0

paging-linear-translate.py -P 1k -a 16k -p 32k -v -u 25

 

 

paging-linear-translate.py -P 1k -a 16k -p 32k -v -u 50
paging-linear-translate.py -P 1k -a 16k -p 32k -v -u 75

 

paging-linear-translate.py -P 1k -a 16k -p 32k -v -u 100

What happens as you increase the percentage of pages that are allocated in each address space?

As the percentage of allocated pages in the address space increases, more and more memory accesses become valid and less free space remains.


Question 3

Now let’s try some different random seeds, and some different (and sometimes quite crazy) address-space parameters, for variety:

paging-linear-translate.py -P 8 -a 32 -p 1024 -v -s 1
paging-linear-translate.py -P 8k -a 32k -p 1m -v -s 2
paging-linear-translate.py -P 1m -a 256m -p 512m -v -s 3

Which of these parameter combinations are unrealistic? Why?

  1. The sizes are too small.

 

Question 4

Use the program to try out some other problems. Can you find the limits of where the program doesn’t work anymore? For example, what happens if the address-space size is bigger than physical memory?

It won't work when

  1. page size is greater than address-space.
  2. address space size is greater than the physical memory.
  3. physical memory size is not multiple of page size.
  4. address space is not multiple of page size.
  5. page size is negative.
  6. physical memory is negative.
  7. address space is negative.

 


2019-11-02

17:00:02

Chapter 19:

https://github.com/xxyzz/ostep-hw/tree/master/19

Chapter 21:

 

Chapter 22:

 

 

 

 

 

 

 

 

                                                

 

 

 

 

 

 

./paging-policy.py --addresses=0,1,2,3,4,5,0,1,2,3,4,5 --policy=FIFO --cachesize=5 -c

 

./paging-policy.py --addresses=0,1,2,3,4,5,0,1,2,3,4,5 --policy=LRU --cachesize=5 -c

 

 

We can see that with a looping workload, LRU hits its worst case.
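To see why the looping workload is LRU's worst case, here is a small LRU simulation of that exact address stream (my own code, not paging-policy.py): with 6 distinct pages and room for only 5, every eviction removes exactly the page that will be referenced next, so after the cold misses the hit rate stays at 0%.

#include <stdio.h>
#include <string.h>

#define CACHE_SIZE 5

int main(void) {
    int trace[] = {0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5};  // looping workload
    int n = (int)(sizeof(trace) / sizeof(trace[0]));
    int cache[CACHE_SIZE];  // cache[0] is least recently used, cache[len-1] most recent
    int len = 0, hits = 0, misses = 0;

    for (int i = 0; i < n; i++) {
        int page = trace[i], found = -1;
        for (int j = 0; j < len; j++)
            if (cache[j] == page) { found = j; break; }
        if (found >= 0) {
            hits++;
            // Move the page to the most-recently-used end.
            memmove(&cache[found], &cache[found + 1], (len - found - 1) * sizeof(int));
            cache[len - 1] = page;
        } else {
            misses++;
            if (len == CACHE_SIZE) {           // evict the least recently used page
                memmove(&cache[0], &cache[1], (len - 1) * sizeof(int));
                len--;
            }
            cache[len++] = page;
        }
    }
    printf("LRU, cache size %d: hits %d, misses %d\n", CACHE_SIZE, hits, misses);  // 0 hits
    return 0;
}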

./paging-policy.py --addresses=0,1,2,3,4,5,4,5,4,5,4,5 --policy=MRU --cachesize=5 -c

 

 

The cache needs to grow by just 1 before performance improves dramatically and approaches OPT.


 

 

 

 

 

 

$ ./generate-trace.py
[3, 0, 6, 6, 6, 6, 7, 0, 6, 6]

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=LRU -c
FINALSTATS hits 6   misses 4   hitrate 60.00

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=RAND -c
FINALSTATS hits 5   misses 5   hitrate 50.00

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=CLOCK -c -b 2
Access: 3  MISS Left  ->          [3] <- Right Replaced:- [Hits:0 Misses:1]
Access: 0  MISS Left  ->       [3, 0] <- Right Replaced:- [Hits:0 Misses:2]
Access: 6  MISS Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:0 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:1 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:2 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:3 Misses:3]
Access: 7  MISS Left  ->    [3, 6, 7] <- Right Replaced:0 [Hits:3 Misses:4]
Access: 0  MISS Left  ->    [3, 7, 0] <- Right Replaced:6 [Hits:3 Misses:5]
Access: 6  MISS Left  ->    [7, 0, 6] <- Right Replaced:3 [Hits:3 Misses:6]
Access: 6  HIT  Left  ->    [7, 0, 6] <- Right Replaced:- [Hits:4 Misses:6]
FINALSTATS hits 4   misses 6   hitrate 40.00

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=CLOCK -c -b 0
Access: 3  MISS Left  ->          [3] <- Right Replaced:- [Hits:0 Misses:1]
Access: 0  MISS Left  ->       [3, 0] <- Right Replaced:- [Hits:0 Misses:2]
Access: 6  MISS Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:0 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:1 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:2 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:3 Misses:3]
Access: 7  MISS Left  ->    [3, 0, 7] <- Right Replaced:6 [Hits:3 Misses:4]
Access: 0  HIT  Left  ->    [3, 0, 7] <- Right Replaced:- [Hits:4 Misses:4]
Access: 6  MISS Left  ->    [3, 7, 6] <- Right Replaced:0 [Hits:4 Misses:5]
Access: 6  HIT  Left  ->    [3, 7, 6] <- Right Replaced:- [Hits:5 Misses:5]
FINALSTATS hits 5   misses 5   hitrate 50.00

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=CLOCK -c -b 1
Access: 3  MISS Left  ->          [3] <- Right Replaced:- [Hits:0 Misses:1]
Access: 0  MISS Left  ->       [3, 0] <- Right Replaced:- [Hits:0 Misses:2]
Access: 6  MISS Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:0 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:1 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:2 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:3 Misses:3]
Access: 7  MISS Left  ->    [3, 0, 7] <- Right Replaced:6 [Hits:3 Misses:4]
Access: 0  HIT  Left  ->    [3, 0, 7] <- Right Replaced:- [Hits:4 Misses:4]
Access: 6  MISS Left  ->    [3, 7, 6] <- Right Replaced:0 [Hits:4 Misses:5]
Access: 6  HIT  Left  ->    [3, 7, 6] <- Right Replaced:- [Hits:5 Misses:5]
FINALSTATS hits 5   misses 5   hitrate 50.00

$ ./paging-policy.py --addresses=3,0,6,6,6,6,7,0,6,6 --policy=CLOCK -c -b 3
Access: 0  MISS Left  ->       [3, 0] <- Right Replaced:- [Hits:0 Misses:2]
Access: 6  MISS Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:0 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:1 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:2 Misses:3]
Access: 6  HIT  Left  ->    [3, 0, 6] <- Right Replaced:- [Hits:3 Misses:3]
Access: 7  MISS Left  ->    [3, 6, 7] <- Right Replaced:0 [Hits:3 Misses:4]
Access: 0  MISS Left  ->    [6, 7, 0] <- Right Replaced:3 [Hits:3 Misses:5]
Access: 6  HIT  Left  ->    [6, 7, 0] <- Right Replaced:- [Hits:4 Misses:5]
Access: 6  HIT  Left  ->    [6, 7, 0] <- Right Replaced:- [Hits:5 Misses:5]
FINALSTATS hits 5   misses 5   hitrate 50.00

 

