說明:我之前在網上看到這篇文章覺得非常好,於是把它翻譯了下來。當然很多地方翻譯的很渣,見笑了。溫馨提示,文章有點長。
原文鏈接:
程序員需要知道的十個操作系統的概念
Do you speak binary? Can you comprehend machine code? If I gave you a sheet full of 1s and 0s could you tell me what it means/does? If you were to go to a country you’ve never been to that speaks a language you’ve never heard, or maybe you’ve heard of it but don’t actually speak it, what would you need while there to help you communicate with the locals?
你知道二進制嗎?你能理解機器語言嗎?假如給你一張寫滿0和1的紙,你能告訴我它是什么意思、能做什么嗎?假如你要去一個從沒去過的國家,那里說着一種你從未聽過的語言,或者你聽說過卻完全不會說,那么你在那里需要什么才能和當地人交流呢?
You would need a translator. Your operating system functions as that translator in your PC. It converts those 1s and 0s, yes/no, on/off values into a readable language that you will understand. It does all of this in a streamlined graphical user interface, or GUI, that you can move around with a mouse: click things, move them, see them happening before your eyes.
你需要一個翻譯,而操作系統就是你電腦里的那個翻譯。它把那些0和1、是和否、開和關的值轉換成你能讀懂的語言。它在一個簡潔的圖形用戶界面(GUI)中完成這一切,你可以用鼠標點擊東西、移動它們,親眼看着這一切發生。
Knowing how operating systems work is fundamental and critical to anyone who is a serious software developer. There should be no attempt to get around it and anyone telling you it’s not necessary should be ignored. While the extent and depth of knowledge can be questioned, knowing more than the fundamentals can be critical to how well your program runs and even its structure and flow.
一個嚴肅的軟件開發者必須了解操作系統的工作方式,不應該試圖繞開它;如果有人告訴你這不重要,請忽略他。雖然需要掌握到什么深度和廣度可以商榷,但多了解一些基礎之外的知識,對程序跑得好不好、甚至對程序的結構和流程都可能至關重要。
Why? When you write a program and it runs too slow, but you see nothing wrong with your code, where else will you look for a solution? How will you be able to debug the problem if you don’t know how the operating system works? Are you accessing too many files? Running out of memory while swap is in high usage? But you don’t even know what swap is! Or is I/O blocking?
這是為什么呢?當你寫的程序運行得太慢,但你在代碼里看不出任何問題,這時你還能去哪里找原因?假如你不知道操作系統是如何工作的,又要怎么去調試這個問題呢?是訪問了太多文件?還是內存耗盡、交換區使用過高?可你甚至不知道什么是交換區(swap)!又或者是I/O阻塞了?
And you want to communicate with another machine. How do you do that locally or over the internet? And what’s the difference? Why do some programmers prefer one OS over another?
你想和另一台機器通信,要怎樣在本地或通過互聯網做到?它們有什么區別?為什么有些程序員偏愛某個操作系統而不是另一個?
In an attempt to be a serious developer, I recently took Georgia Tech’s course “Introduction to Operating Systems.” It teaches the basic OS abstractions, mechanisms, and their implementations. The core of the course contains concurrent programming (threads and synchronization), inter-process communication, and an introduction to distributed OSs. I want to use this post to share my takeaways from the course, that is the 10 critical operating system concepts that you need to learn if you want to get good at developing software.
上述鏈接地址:https://cn.udacity.com/course/introduction-to-operating-systems--ud923
為了成為一個嚴肅的開發者,我最近選修了佐治亞理工學院的《操作系統導論》課程(就是上面那個鏈接)。它講授操作系統的基本抽象、機制以及它們的實現。課程核心內容包括並發編程(線程與同步)、進程間通信,以及分布式操作系統的入門介紹。我想用這篇文章分享我在這門課中學到的東西,也就是想寫好軟件就需要學習的十個操作系統關鍵概念。
But first, let’s define what an operating system is. An Operating System (OS) is a collection of software that manages computer hardware and provides services for programs. Specifically, it hides hardware complexity, manages computational resources, and provides isolation and protection. Most importantly, it has privileged access directly to the underlying hardware. Major components of an OS are the file system, scheduler, and device drivers. You have probably used both Desktop (Windows, Mac, Linux) and Embedded (Android, iOS) operating systems before.
首先,我們先定義什么是操作系統:操作系統是一組管理計算機硬件並為程序提供服務的軟件。具體來說,它隱藏了硬件的復雜性,管理計算資源,並提供隔離與保護。最重要的是,它擁有直接訪問底層硬件的特權。操作系統的主要組成部分是文件系統、調度器和設備驅動。你很可能已經用過桌面操作系統(Windows、Mac、Linux)和嵌入式操作系統(Android、iOS)。
There are 3 key elements of an operating system, which are: (1) Abstractions(process, thread, file, socket, memory), (2) Mechanisms (create, schedule, open, write, allocate), and (3) Policies (LRU, EDF)
操作系統有三個重要元素:
1)抽象(進程、線程、文件、套接字、內存)
2)機制(創建、調度、打開、寫入、分配)
3)策略(LRU 最近最少使用、EDF 最早截止期優先)
There are 2 operating system design principles, which are: (1) Separation of mechanism and policy by implementing flexible mechanisms to support policies, and (2) Optimize for common case: Where will the OS be used? What will the user want to execute on that machine? What are the workload requirements?
還有兩個操作系統的設計原則:
1)機制與策略分離:通過實現靈活的機制來支持各種策略。
2)針對常見情況優化:操作系統將在哪里使用?用戶會想在那台機器上運行什么?負載的需求是什么?
The 3 types of Operating Systems commonly used nowadays are: (1) Monolithic OS, where the entire OS is working in kernel space and is alone in supervisor mode; (2) Modular OS, in which some part of the system core will be located in independent files called modules that can be added to the system at run time; and (3) Micro OS, where the kernel is broken down into separate processes, known as servers. Some of the servers run in kernel space and some run in user-space.
常用的操作系統有下面三種:
1)宏內核(單內核)系統:整個操作系統都運行在內核空間中,獨自以管理員(特權)模式運行。
2)模塊化系統:系統核心的一部分放在被稱為模塊的獨立文件中,這些模塊可以在運行時加載進系統。
3)微內核系統:內核被拆分成多個獨立的進程,稱為服務器(server)。其中一些服務器運行在內核空間,另一些運行在用戶空間。
1 — Processes and Process Management
A process is basically a program in execution. The execution of a process must progress in a sequential fashion. To put it in simple terms, we write our computer programs in a text file and when we execute this program, it becomes a process which performs all the tasks mentioned in the program.
一、進程與進程管理
進程基本上就是一個正在執行的程序。進程的執行必須以順序的方式推進。簡單地說,我們把程序寫在一個文本文件中,當我們執行這個程序時,它就成為一個進程,去完成程序中描述的所有任務。
When a program is loaded into the memory and it becomes a process, it can be divided into four sections ─ stack, heap, text and data. The following image shows a simplified layout of a process inside main memory
當一個程序加載到內存中成為一個進程時,它可以分為四個部分--棧、堆、代碼段和數據段。下面這幅圖顯示了內存結構的簡單布局:
Stack: The process Stack contains the temporary data such as method/function parameters, return address and local variables.
Heap: This is dynamically allocated memory to a process during its run time.
Text: This includes the current activity represented by the value of Program Counter and the contents of the processor’s registers.
Data: This section contains the global and static variables.
棧:進程棧包含臨時數據,比如方法/函數的參數、返回地址和局部變量。
堆:進程運行期間動態分配的內存。
代碼段:包含當前的活動,由程序計數器的值和處理器寄存器的內容表示。
數據段:這個段包含全局變量和靜態變量
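As a rough illustrative sketch (assuming a typical Linux/POSIX toolchain; the variable and function names are invented for the example), the four sections can be seen by printing a few addresses from a small C program:

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;            /* data section: global/static variables */

void where_am_i(void) { }          /* text section: the program's code      */

int main(void) {
    int local = 42;                          /* stack: locals, call frames  */
    int *dynamic = malloc(sizeof(int));      /* heap: run-time allocations  */

    printf("text  (code)   : %p\n", (void *)where_am_i);
    printf("data  (globals): %p\n", (void *)&global_counter);
    printf("heap  (malloc) : %p\n", (void *)dynamic);
    printf("stack (locals) : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}
```

On a typical layout the text and data addresses are low, the heap sits above them, and the stack is near the top of the address space; the exact values vary from run to run because of address-space randomization.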
When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized. In general, a process can have one of the following five states at a time:
當一個進程執行時,它會經歷不同的狀態。這些階段在不同的操作系統中可能有所不同,這些狀態的名字也沒有統一的標准。一般來說,一個進程在任一時刻處於以下五種狀態之一:
- Start: This is the initial state when a process is first started/created.
- Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running but interrupted by the scheduler to assign the CPU to some other process.
- Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
- Waiting: Process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
- Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state where it waits to be removed from main memory.
開始:一個進程第一次運行/創建時初始化的狀態
就緒:進程正在等待被分配給處理器。就緒進程在等待操作系統給它分配處理器,之后才能運行。進程有可能由開始狀態就進入這個狀態,或者在運行期間被打斷,系統將CPU分配給了其他進程。
運行:一旦進程被操作系統調度程序分配到了處理器,進程就會變成運行狀態,處理器會執行它的指令。
等待:假如進程需要等待資源就會進入這個狀態。比如等待輸入、或等待文件可用。
結束/退出:一旦進程執行完畢,或者被操作系統終止,就會進入終止狀態,在這里等待被從主內存中移除。
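A minimal POSIX sketch of a process moving through these states: the parent creates a child with `fork()` (start), the child is scheduled and runs, and the parent blocks in `waitpid()` until the child terminates. This assumes a Unix-like system and keeps error handling minimal.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* start: a new process is created */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                     /* child: ready -> running          */
        printf("child %d running\n", (int)getpid());
        execlp("ls", "ls", "-l", (char *)NULL);  /* run another program     */
        perror("execlp");               /* reached only if exec fails       */
        _exit(EXIT_FAILURE);
    }
    /* parent: blocks (waiting) until the child reaches the terminated state */
    int status = 0;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```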
A Process Control Block is a data structure maintained by the Operating System for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process as listed below:
操作系統為每個進程維護一個稱為進程控制塊(PCB)的數據結構。PCB由一個整數進程號(PID)標識,它保存了跟蹤一個進程所需的全部信息,如下所列:
Process State: The current state of the process i.e., whether it is ready, running, waiting, or whatever.
Process Privileges: This is required to allow/disallow access to system resources.
Process ID: Unique identification for each of the process in the operating system.
Pointer: A pointer to parent process.
Program Counter: Program Counter is a pointer to the address of the next instruction to be executed for this process.
CPU Registers: Various CPU registers in which the process state must be saved so that it can execute in the running state.
CPU Scheduling Information: Process priority and other scheduling information which is required to schedule the process.
Memory Management Information: This includes page table information, memory limits, and segment table information, depending on the memory system used by the operating system.
Accounting Information: This includes the amount of CPU used for process execution, time limits, execution ID etc.
IO Status Information: This includes a list of I/O devices allocated to the process.
進程狀態:當前的進程狀態,有可能是就緒、運行、等待或者其他
進程權限:允許/禁止訪問系統資源所需的權限信息
進程號:在操作系統中是唯一的
指針:指向父進程的指針
程序計數器:指向下一次將要執行指令的地址
寄存器:各種CPU寄存器,它們要存儲進程運行的狀態信息
調度信息:進程優先級和其他進程調度信息
內存管理信息:包括頁表、內存限制、段表等信息,具體取決於操作系統使用的內存管理方式
統計信息:包括進程執行所用的CPU時間、時間限制、執行ID等。
IO狀態信息:分配給進程的IO設備的列表
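Real kernels keep far more state than this (Linux's `task_struct` runs to hundreds of fields), but as a simplified sketch a PCB might be modeled like the following C structure; every field name here is illustrative rather than taken from any real kernel:

```c
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Simplified process control block: one per process, owned by the kernel. */
typedef struct pcb {
    int           pid;             /* unique process ID                     */
    proc_state_t  state;           /* ready, running, waiting, ...          */
    struct pcb   *parent;          /* pointer to the parent process         */
    uint64_t      program_counter; /* next instruction to execute           */
    uint64_t      registers[32];   /* saved CPU registers                   */
    int           priority;        /* CPU scheduling information            */
    void         *page_table;      /* memory-management information         */
    uint64_t      cpu_time_used;   /* accounting information                */
    int           open_fds[64];    /* I/O status: devices/files in use      */
    struct pcb   *next;            /* link used by the scheduler's queues   */
} pcb_t;
```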
2 — Threads and Concurrency
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.
二、線程與並發
線程是穿過進程代碼的一條執行流。它有自己的程序計數器來記錄下一條要執行的指令,有保存當前工作變量的系統寄存器,還有一個保存執行歷史的棧。
A thread shares with its peer threads information such as the code segment, data segment, and open files. When one thread alters a code segment memory item, all other threads see that.
線程和其他同級線程共享一些信息,比如代碼段、數據段和打開的文件。當一個線程修改了代碼段的內存的某一項,其他線程都可以看見。
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving the performance of an operating system by reducing the overhead; a thread is equivalent to a classical process.
線程也被稱為輕量級進程。線程提供了一種通過並行來提高應用程序性能的方法。線程是一種通過降低開銷來提升操作系統性能的軟件手段,一個線程就相當於一個經典意義上的進程。
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web server. They also provide a suitable foundation for parallel execution of applications on shared memory multiprocessors.
每個線程都恰好隸屬於一個進程,沒有線程可以存在於進程之外。每個線程代表一條獨立的控制流。線程已被成功用於實現網絡服務器和Web服務器,它們也為在共享內存多處理器上並行執行應用程序提供了合適的基礎。
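A small pthreads sketch (assuming a POSIX system, compiled with `-pthread`) showing two threads of the same process sharing the data segment, with a mutex so the shared counter stays consistent:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared: lives in the data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* synchronize access to shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* both threads share code, data,    */
    pthread_create(&t2, NULL, worker, NULL);  /* and open files of this process    */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 200000 with the mutex in place    */
    return 0;
}
```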
Advantages of Thread:
- Threads minimize the context switching time.
- Use of threads provides concurrency within a process.
- Efficient communication.
- It is more economical to create and context switch threads.
- Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
線程的優勢:
線程上下文切換時間很短
使用線程可以實現進程內的並發
線程間通信簡單
創建線程,上下文切換資源消耗低
線程能讓多處理器體系結構得到更大規模、更高效的利用
Threads are implemented in the following 2 ways:
- User Level Threads: User managed threads.
- Kernel Level Threads: Operating System managed threads acting on kernel, an operating system core.
線程有以下兩種實現方式:
用戶級線程:由用戶管理的線程
內核級線程:由操作系統管理的線程,運行於內核(操作系統的核心)之上
User Level Threads
In this case, the thread management kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing message and data between threads, for scheduling thread execution and for saving and restoring thread contexts. The application starts with a single thread.
用戶線程:
在這種情況下,內核並不知道線程的存在,線程管理由線程庫完成。線程庫包含創建和銷毀線程、在線程間傳遞消息和數據、調度線程執行,以及保存和恢復線程上下文的代碼。應用程序啟動時只有一個線程。
(補充:用戶線程指不需要內核支持而在用戶程序中實現的線程,其不依賴於操作系統核心,應用進程利用線程庫管理的線程)
Advantages:
- Thread switching does not require Kernel mode privileges.
- User level thread can run on any operating system.
- Scheduling can be application specific in the user level thread.
- User level threads are fast to create and manage.
優勢:
線程切換不需要內核態特權
用戶級線程可以在任何操作系統中運行
在用戶級線程中,調度策略可以由具體應用程序自行決定
用戶級線程可以快速創建和管理
Disadvantages:
- In a typical operating system, most system calls are blocking.
- Multithreaded application cannot take advantage of multiprocessing.
劣勢:
在典型的操作系統中,很多系統調用都是阻塞的
多線程應用程序無法利用多處理器(多處理)的優勢
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.
內核級線程:
在這種情況下,線程管理由內核完成,應用程序里沒有線程管理的代碼。內核線程由操作系統直接支持,任何應用程序都可以被設計成多線程的,應用程序中的所有線程都承載於同一個進程之內。
The Kernel maintains context information for the process as a whole and for individuals threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than the user threads.
內核為每個進程維護上下文信息,同時也為每個線程維護上下文信息。內核的調度是在線程的基礎上完成的。內核在內核空間中創建線程、調度和管理。內核線程的創建和管理一般比用戶級線程更慢。
Advantages
- Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
- If one thread in a process is blocked, the Kernel can schedule another thread of the same process.
- Kernel routines themselves can be multithreaded.
優勢:
內核可以把同一進程中的多個線程同時調度到多個處理器上。
假如有進程中有一個線程被阻塞了,內核可以調度這個進程中的其他線程。
內核例程本身可以是多線程的
Disadvantages
- Kernel threads are generally slower to create and manage than the user threads.
- Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
劣勢:
內核線程的創建和管理一般比用戶級線程更慢。
線程之間的切換需要切換到內核態。
3 — Scheduling
The process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
進程調度是進程管理器的一項活動:它負責把正在運行的進程從CPU上移下來,並按照特定的策略選出另一個進程來運行。
Process scheduling is an essential part of a Multiprogramming operating systems. Such operating systems allow more than one process to be loaded into the executable memory at a time and the loaded process shares the CPU using time multiplexing.
進程調度是多道程序操作系統中必不可少的組成部分。這類操作系統允許同時把多個進程加載到可執行內存中,被加載的進程通過時分復用來共享CPU。
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues. The OS maintains a separate queue for each of the process states and PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.
操作系統在進程調度隊列中維護了所有的進程控制塊。操作系統為每個進程狀態維護一個單獨的隊列,將處於相同執行狀態的所有的進程控制塊放在同一個隊列中。當有的進程狀態發生變化,就在原有隊列中將它移除並加入到新的狀態隊列。
The Operating System maintains the following important process scheduling queues:
- Job queue − This queue keeps all the processes in the system.
- Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
- Device queues − The processes which are blocked due to unavailability of an I/O device constitute this queue.
操作系統維護着下面幾個重要的進程調度隊列:
工作隊列:這個隊列包含系統所有的進程。
就緒隊列:這個隊列保存所有駐留在主內存中、已就緒並等待執行的進程。新進程總是先被放進這個隊列。
設備隊列:由於I/O設備的不可用而被阻塞的進程組成了這個隊列。
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues which can only have one entry per processor core on the system; in the above diagram, it has been merged with the CPU.
操作系統可以用不同的策略來管理每個隊列(先進先出、輪轉、優先級等)。操作系統調度器決定進程如何在就緒隊列和運行隊列之間移動;運行隊列在系統中每個處理器核心只能有一個表項,在上面的圖中它已經與CPU合並表示了。
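As a toy illustration of one such policy, here is a Round Robin simulation over a tiny ready queue: each process runs for a fixed quantum and is then rotated to the back of the queue until it finishes. This is only a user-space sketch of the idea, not how a kernel scheduler is actually implemented.

```c
#include <stdio.h>

#define QUANTUM 3   /* time slice given to each process per turn */

typedef struct { int pid; int remaining; } task_t;

int main(void) {
    /* a tiny ready queue, stored in a circular array */
    task_t queue[] = { {1, 7}, {2, 4}, {3, 9} };
    int n = 3, head = 0, left = n, clock = 0;

    while (left > 0) {
        task_t *t = &queue[head];
        if (t->remaining > 0) {
            int run = t->remaining < QUANTUM ? t->remaining : QUANTUM;
            clock += run;                        /* "dispatch" the process           */
            t->remaining -= run;
            printf("t=%2d  ran pid %d for %d, %d left\n",
                   clock, t->pid, run, t->remaining);
            if (t->remaining == 0)
                left--;                          /* process terminates               */
        }
        head = (head + 1) % n;                   /* context switch to the next entry */
    }
    return 0;
}
```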
Two-state process model refers to running and non-running states:
- Running: When a new process is created, it enters into the system as in the running state.
- Not Running: Processes that are not running are kept in queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. Queue is implemented by using linked list. Use of dispatcher is as follows. When a process is interrupted, that process is transferred in the waiting queue. If the process has completed or aborted, the process is discarded. In either case, the dispatcher then selects a process from the queue to execute.
進程的兩種狀態,運行和非運行:
運行:當一個新的進程被創建時,它就以運行狀態進入系統。
非運行:不運行的進程也被放在一個隊列中,等待着被執行。隊列中的每一項都指向一個特定的進程。隊列通過鏈表實現的。調度的流程大概是這樣:當進程被打斷了,它就加入到等待隊列中;如果進程結束或被終止了,就被調度器丟棄了。不管怎樣,調度器都從隊列中選擇進程來執行。
A context switch is the mechanism to store and restore the state or context of a CPU in Process Control block so that a process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential part of a multitasking operating system features.
上下文切換是在CPU中利用進程控制塊保護現場和恢復現場的一種機制,以便在某些時候可以恢復現場讓進程在同樣的位置執行。使用這個技術,上下文切換允許多個進程共享一個處理器。上下文切換是多任務操作系統中非常重要的組成部分。
When the scheduler switches the CPU from executing one process to execute another, the state from the current running process is stored into the process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
當調度器把CPU從執行一個進程切換到執行另一個進程時,當前運行進程的狀態會被保存到它的進程控制塊中;然后,從下一個要運行的進程的PCB中加載它的狀態,用來設置程序計數器、寄存器等,這時第二個進程就可以開始執行了。
Context switches are computationally intensive since register and memory state must be saved and restored. To avoid the amount of context switching time, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use: Program Counter, Scheduling Information, Base and Limit Register Value, Currently Used Register, Changed State, I/O State Information, and Accounting Information.
因為必須保存和恢復寄存器與內存狀態,所以上下文切換屬於計算密集型操作。為了減少上下文切換的時間,一些硬件系統采用了兩套或更多的處理器寄存器。當進程切換時,要為以后保存下面這些信息:程序計數器、調度信息、基址和界限寄存器的值、當前使用的寄存器、改變的狀態、I/O狀態信息和統計信息。
4 — Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of either it is allocated to some process or it is free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly it updates the status.
四、內存管理
內存管理是操作系統的一項功能,它管理主內存,並在程序執行期間讓進程在主內存和磁盤之間來回移動。內存管理跟蹤每一個內存位置,無論它已被分配給某個進程還是處於空閒狀態;它檢查要給進程分配多少內存,決定哪個進程在什么時候獲得內存,並在內存被釋放或回收時跟蹤並相應地更新狀態。
The process address space is the set of logical addresses that a process references in its code. For example, when 32-bit addressing is in use, addresses can range from 0 to 0x7fffffff; that is, 2³¹ possible numbers, for a total theoretical size of 2 gigabytes.
進程地址空間是進程在其代碼中引用的一組邏輯地址(不是真正的物理地址)。例如,在使用32位尋址時,地址范圍是0到0x7fffffff,也就是2³¹個可能的地址,理論大小總共是2GB。(這里應該指的是用戶空間)
The operating system takes care of mapping the logical addresses to physical addresses at the time of memory allocation to the program. There are three types of addresses used in a program before and after memory is allocated:
- Symbolic addresses: The addresses used in a source code. The variable names, constants, and instruction labels are the basic elements of the symbolic address space.
- Relative addresses: At the time of compilation, a compiler converts symbolic addresses into relative addresses.
- Physical addresses: The loader generates these addresses at the time when a program is loaded into main memory.
操作系統負責在為程序分配內存時把邏輯地址映射到物理地址。在內存分配前后,程序中會用到三種類型的地址:
符號地址:源代碼中使用的地址。變量名、常量和指令標號是符號地址空間的基本元素。
相對地址:編譯時,編譯器把符號地址轉換成相對地址。
物理地址:當程序被加載進主內存時,由加載器生成這類地址。
Virtual and physical addresses are the same in compile-time and load-time address-binding schemes. Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.
在編譯時和加載時的地址綁定方案中,虛擬地址和物理地址是相同的;而在執行時的地址綁定方案中,虛擬地址和物理地址是不同的。
由程序生成的所有邏輯地址的集合稱為邏輯地址空間。 與這些邏輯地址相對應的所有物理地址的集合被稱為物理地址空間。
5 — Inter-Process Communication
A process can be of 2 types: Independent process and Co-operating process. An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Though one might think that processes running independently will execute very efficiently, in practice there are many situations when the co-operative nature can be utilized for increasing computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways: Shared Memory and Message Passing.
五——進程間通信
一個進程有兩種類型:獨立的進程和協作的進程。一個獨立的進程執行過程不受其他進程的影響,但是一個協作的進程執行過程會受其他進程的影響。雖然人們會認為那些獨立的進程會高效的執行,實際上有很多情況可以通過相互協作來達到增加運算速度,方便快捷,和模塊化的功效。
進程間通信(IPC)是一種允許進程相互通信並同步各自動作的機制,可以把它看作進程之間的一種協作方式。進程間可以用下面兩種方式通信:共享內存和消息傳遞。(我覺得這里其實把消息隊列、socket等都划為消息傳遞這一類了)
Shared Memory Method
There are two processes: Producer and Consumer. The Producer produces some item and the Consumer consumes that item. The two processes share a common space or memory location known as a buffer, where the item produced by the Producer is stored and from where the Consumer consumes the item if needed. There are two versions of this problem: the first one is known as the unbounded buffer problem, in which the Producer can keep on producing items and there is no limit on the size of the buffer; the second one is known as the bounded buffer problem, in which the Producer can produce up to a certain number of items and after that it starts waiting for the Consumer to consume them.
共享內存:
假設有兩個進程:生產者和消費者。生產者生產某種產品,消費者消費這種產品。這兩個進程共享一塊稱為緩沖區的公共空間或內存位置,生產者把生產出來的產品放進去,消費者在需要時從里面取出消費。這個問題有兩個版本:1)無界緩沖區問題:生產者可以不停地生產,緩沖區大小沒有限制;2)有界緩沖區問題:生產者最多只能生產一定數量的產品,之后就要等待消費者消費掉才能繼續生產。
In the bounded buffer problem: First, the Producer and the Consumer will share some common memory, then the producer will start producing items. If the total of produced items is equal to the size of the buffer, the producer will wait for them to be consumed by the Consumer. Similarly, the consumer first checks for the availability of an item, and if no item is available, the Consumer will wait for the Producer to produce it. If there are items available, the consumer will consume them.
在有邊界的問題中:生產者和消費者共享一部分內存。生產者開始生產產品,假如產品總數等於緩沖區容量,生產者就等待消費者從倉庫里面拿走產品。類似的,消費者首先要檢查倉庫里面有沒有產品,假如沒有產品,消費者就要等待生產者生產出來。假如有產品,就能立即消費它。
(我覺得這里介紹的是生產者消費者模型,跟我們平常理解的共享內存不太一樣,指Linux下的shmget、windows下的CreateFileMapping)
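A compact sketch of the bounded-buffer case, assuming Linux with an anonymous shared `mmap` region between parent and child and process-shared POSIX semaphores (compile with `-pthread`); a real system might use `shmget` or `shm_open` instead:

```c
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SLOTS 4

typedef struct {
    int   buf[SLOTS];           /* the bounded buffer itself     */
    int   in, out;
    sem_t empty, full, mutex;   /* process-shared semaphores     */
} shared_t;

int main(void) {
    shared_t *sh = mmap(NULL, sizeof(*sh), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    sem_init(&sh->empty, 1, SLOTS);   /* free slots               */
    sem_init(&sh->full,  1, 0);       /* filled slots             */
    sem_init(&sh->mutex, 1, 1);       /* protects in/out indexes  */
    sh->in = sh->out = 0;

    if (fork() == 0) {                           /* consumer process */
        for (int i = 0; i < 10; i++) {
            sem_wait(&sh->full);
            sem_wait(&sh->mutex);
            int item = sh->buf[sh->out];
            sh->out = (sh->out + 1) % SLOTS;
            sem_post(&sh->mutex);
            sem_post(&sh->empty);
            printf("consumed %d\n", item);
        }
        _exit(0);
    }
    for (int i = 0; i < 10; i++) {               /* producer (parent) */
        sem_wait(&sh->empty);
        sem_wait(&sh->mutex);
        sh->buf[sh->in] = i;
        sh->in = (sh->in + 1) % SLOTS;
        sem_post(&sh->mutex);
        sem_post(&sh->full);
    }
    wait(NULL);
    return 0;
}
```

The `empty` semaphore makes the producer block once the buffer is full, and `full` makes the consumer block when it is empty, which is exactly the bounded-buffer behaviour described above.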
Message Passing Method
In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:
- Establish a communication link (if a link already exists, no need to establish it again.)
- Start exchanging messages using basic primitives. We need at least two primitives: send(message, destination) or send(message) and receive(message, host) or receive(message)
消息傳遞:
在這種方法中進程間通信不使用任何共享內存,假如進程p1和p2想要通信,它們將按照下面流程進行:
1)建立通信連接(已經連接上了就不再需要重復連接)
2)開始用基本原語交換數據,這里至少需要兩種原語:發送(消息,目標地址)或發送(消息)、接收(消息,來源)或接收(消息)
(補充:原語,是執行過程中不可被打斷的基本操作,你可以理解為一段代碼,這段代碼在執行過程中不能被打斷)
The message size can be of fixed size or of variable size. if it is of fixed size, it is easy for OS designer but complicated for programmer and if it is of variable size then it is easy for programmer but complicated for the OS designer. A standard message can have two parts: header and body.
消息大小是可變的也可以是固定的。如果它是固定大小的,操作系統設計者很容易,但是對於程序員來說很復雜,如果它是可變大小的,那么對於程序員來說很容易,但是操作系統的設計者卻很復雜。一個標准的消息要包含兩個部分:消息頭和消息體。
The header part is used for storing Message type, destination id, source id, message length and control information. The control information contains information like what to do if runs out of buffer space, sequence number, priority. Generally, message is sent using FIFO style.
消息頭用來存放消息類型、目標id、源id、消息長度和控制信息。控制信息包含諸如緩沖區空間用完了該怎么辦、序號、優先級等內容。通常,消息按先進先出(FIFO)的方式發送。
(我感覺這個其實是指socket通信方式)
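A minimal sketch of these primitives using a pipe between a parent and a child process (assuming POSIX; the fixed-size header with `type` and `length` fields is just an illustrative format, not a standard one):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative fixed-size header followed by a variable-size body. */
typedef struct { int type; int length; } msg_header_t;

static void send_msg(int fd, int type, const char *body) {
    msg_header_t h = { type, (int)strlen(body) };
    write(fd, &h, sizeof(h));           /* "send(message, destination)" */
    write(fd, body, h.length);
}

static void recv_msg(int fd) {
    msg_header_t h;
    char body[128];
    read(fd, &h, sizeof(h));            /* "receive(message, host)"     */
    read(fd, body, h.length);
    body[h.length] = '\0';
    printf("got type %d: %s\n", h.type, body);
}

int main(void) {
    int link[2];
    pipe(link);                         /* establish a communication link */
    if (fork() == 0) {
        close(link[1]);
        recv_msg(link[0]);
        _exit(0);
    }
    close(link[0]);
    send_msg(link[1], 1, "hello from the producer");
    close(link[1]);
    wait(NULL);
    return 0;
}
```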
6 — I/O Management
One of the important jobs of an Operating System is to manage various I/O devices including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped screen, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O, printers etc.
IO管理
操作系統的重要工作之一是管理各種I/O設備,包括鼠標、鍵盤、觸摸板、磁盤驅動器、顯示適配器、USB設備、位圖屏幕、LED、模數轉換器、開關、網絡連接、音頻I/O、打印機等。
An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories:
- Block devices — A block device is one with which the driver communicates by sending entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key etc.
- Character Devices — A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards etc.
I/O系統需要接收應用程序I/O請求並將其發送到物理設備,然后將設備返回的響應發送到應用程序。I/O設備可以分為兩類:
塊設備:塊設備是驅動程序以整塊數據為單位與之通信的設備,例如硬盤、USB攝像頭、USB閃存盤(Disk-On-Key)等。
字符設備:字符設備是驅動程序以單個字符(字節)為單位發送和接收數據的設備,例如串口、並行端口、聲卡等。
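A small Linux-specific sketch of the difference: `/dev/urandom` is a character device that can be read one byte at a time, while a block device such as `/dev/sda` is read in whole blocks (opening it usually requires root, so that read is optional here). Both device paths are examples and may differ on other systems.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Character device: data is read as a stream of bytes. */
    unsigned char byte;
    int chr = open("/dev/urandom", O_RDONLY);
    if (chr >= 0) {
        read(chr, &byte, 1);                    /* one byte at a time is fine */
        printf("random byte: %u\n", byte);
        close(chr);
    }

    /* Block device: the driver transfers whole blocks (often needs root). */
    char block[512];
    int blk = open("/dev/sda", O_RDONLY);
    if (blk >= 0) {
        ssize_t n = read(blk, block, sizeof(block));  /* a full 512-byte block */
        printf("read %zd bytes from the block device\n", n);
        close(blk);
    }
    return 0;
}
```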
The CPU must have a way to pass information to and from an I/O device. There are three approaches available to communicate with the CPU and Device.
CPU必須有辦法把信息傳給I/O設備,也要能從I/O設備取回信息。CPU與設備之間有三種通信方式。
1> Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically allow data to be sent to an I/O device or read from an I/O device.
特殊指令IO:
它使用專門用於控制I / O設備的CPU指令。 這些指令通常允許將數據發送到I / O設備或從I / O設備讀取。
2> Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that I/O device can transfer block of data to/from memory without going through CPU.
內存映射IO:
使用內存映射I / O時,內存和I / O設備共享相同的地址空間。 該設備直接連接到某些主存儲器位置,因此I / O設備可以在不通過CPU的情況下將數據塊傳輸到存儲器或從存儲器傳輸數據塊。
While using memory mapped IO, OS allocates buffer in memory and informs I/O device to use that buffer to send data to the CPU. I/O device operates asynchronously with CPU, interrupts CPU when finished.
在使用內存映射IO時,OS會在內存中分配緩沖區,並通知I / O設備使用該緩沖區將數據發送到CPU。 I / O設備與CPU異步操作,完成后中斷CPU。
The advantage to this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory mapped IO is used for most high-speed I/O devices like disks, communication interfaces.
這種方法的優點是每個可以訪問存儲器的指令都可以用來操作I / O設備。 內存映射IO用於大多數高速I / O設備,如磁盤,通信接口。
3> Direct memory access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
直接內存訪問:
一些設備如鍵盤等慢速設備在傳輸一個字節后會對主CPU產生中斷。如果諸如磁盤的快速設備為每個字節生成中斷,則操作系統將花費大部分時間來處理這些中斷。 因此,典型的計算機使用直接內存訪問(DMA)硬件來減少這種開銷。
Direct Memory Access (DMA) means CPU grants I/O module authority to read from or write to memory without involvement. DMA module itself controls exchange of data between main memory and the I/O device. CPU is only involved at the beginning and end of the transfer and interrupted only after entire block has been transferred.
直接內存訪問(DMA)意味着CPU允許I / O模塊直接讀寫內存。DMA模塊本身控制主存儲器和I / O設備之間的數據交換。CPU僅在傳輸的開始和結束時參與,並且僅在傳輸完整個塊后才中斷。
Direct Memory Access needs a special hardware called DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus. The controllers are programmed with source and destination pointers (where to read/write the data), counters to track the number of transferred bytes, and settings, which includes I/O and memory types, interrupts and states for the CPU cycles.
直接內存訪問需要一個稱為DMA控制器(DMAC)的特殊硬件,它管理數據傳輸並仲裁對系統總線的訪問。對控制器進行編程時要給出源指針和目標指針(從哪里讀、寫到哪里)、用於跟蹤已傳輸字節數的計數器,以及包括I/O和內存類型、中斷和CPU周期狀態在內的各種設置。
// +++++++++++++++++++++++++++
(1) 字符設備:提供連續的數據流,應用程序可以順序讀取,通常不支持隨機存取。相反,此類設備支持按字節/字符來讀寫數據。舉例來說,調制解調器是典型的字符設備。
(2) 塊設備:應用程序可以隨機訪問設備數據,程序可自行確定讀取數據的位置。硬盤是典型的塊設備,應用程序可以尋址磁盤上的任何位置,並由此讀取數據。此外,數據的讀寫只能以塊(通常是512B)的倍數進行。與字符設備不同,塊設備並不支持基於字符的尋址。
區別:
1.字符設備只能以字節為最小單位訪問,而塊設備以塊為單位訪問,例如512字節,1024字節等
2.塊設備可以隨機訪問,但是字符設備不可以
3.字符和塊沒有訪問量大小的限制,塊也可以以字節為單位來訪問
// -----------------------------------------
7 — Virtualization
Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware and distribute them appropriately.
虛擬化是一種技術,它允許你在單一的物理硬件系統上創建多個模擬環境或專用資源。一種稱為hypervisor(虛擬機監控器)的軟件直接連接到硬件上,讓你把一個系統划分成多個彼此獨立、互不相同且安全的環境,也就是虛擬機(VM)。這些虛擬機依賴hypervisor把機器資源與硬件分離並進行適當分配的能力。
The original, physical machine equipped with the hypervisor is called the host, while the many VMs that use its resources are called guests. These guests treat computing resources — like CPU, memory, and storage — as a hangar of resources that can easily be relocated. Operators can control virtual instances of CPU, memory, storage, and other resources, so guests receive the resources they need when they need them.
裝有hypervisor的原始物理機器稱為宿主機(host),使用其資源的眾多虛擬機稱為客戶機(guest)。這些客戶機把計算資源(如CPU、內存和存儲)當作一個可以輕松調配的資源池。操作者可以控制CPU、內存、存儲等資源的虛擬實例,讓客戶機在需要時得到所需的資源。
Ideally, all related VMs are managed through a single web-based virtualization management console, which speeds things up. Virtualization lets you dictate how much processing power, storage, and memory to give VMs, and environments are better protected since VMs are separated from their supporting hardware and each other. Simply put, virtualization creates the environments and resources you need from underused hardware.
理想情況下,所有相關的虛擬機都通過一個基於Web的虛擬化管理控制台來統一管理,這會讓事情更高效。虛擬化允許你規定分給虛擬機多少處理能力、存儲和內存;由於虛擬機與支撐它們的硬件以及彼此之間是相互隔離的,環境也得到了更好的保護。簡單地說,虛擬化用利用率不足的硬件創建出你需要的環境和資源。
Types of Virtualization:
- Data Virtualization: Data that’s spread all over can be consolidated into a single source. Data virtualization allows companies to treat data as a dynamic supply — providing processing capabilities that can bring together data from multiple sources, easily accommodate new data sources, and transform data according to user needs. Data virtualization tools sits in front of multiple data sources and allows them to be treated as single source, delivering the needed data — in the required form — at the right time to any application or user.
數據虛擬化:分散在各處的數據可以被整合成單一的數據源。數據虛擬化讓企業能把數據當作一種動態供給:提供能匯聚多個來源數據的處理能力,輕松接納新的數據源,並根據用戶需求轉換數據。數據虛擬化工具位於多個數據源之前,把它們當作單一來源對待,在合適的時間以所需的形式把數據交付給任何應用或用戶。
- Desktop Virtualization: Easily confused with operating system virtualization — which allows you to deploy multiple operating systems on a single machine — desktop virtualization allows a central administrator (or automated administration tool) to deploy simulated desktop environments to hundreds of physical machines at once. Unlike traditional desktop environments that are physically installed, configured, and updated on each machine, desktop virtualization allows admins to perform mass configurations, updates, and security checks on all virtual desktops.
桌面虛擬化:操作系統虛擬化允許你在一台機器上部署多個操作系統,而桌面虛擬化允許中央管理員(或自動管理工具)同時將模擬桌面環境部署到數百台物理機器上,這兩者很容易混淆。與物理安裝、配置和在每台機器上更新的傳統桌面環境不同,桌面虛擬化允許管理員在所有虛擬桌面上執行大量配置、更新和安全檢查。
- Server Virtualization: Servers are computers designed to process a high volume of specific tasks really well so other computers — like laptops and desktops — can do a variety of other tasks. Virtualizing a server lets it do more of those specific functions and involves partitioning it so that the components can be used to serve multiple functions.
服務器虛擬化:服務器是設計用來很好地處理大量特定任務的計算機,所以像筆記本電腦和台式機這樣的其他計算機可以完成各種其他任務。虛擬化服務器使它可以執行更多這些特定的功能,並涉及對其進行分區,以便可以使用組件來服務多個功能。
- Operating System Virtualization: Operating system virtualization happens at the kernel — the central task managers of operating systems. It’s a useful way to run Linux and Windows environments side-by-side. Enterprises can also push virtual operating systems to computers, which: (1) Reduces bulk hardware costs, since the computers don’t require such high out-of-the-box capabilities, (2) Increases security, since all virtual instances can be monitored and isolated, and (3) Limits time spent on IT services like software updates.
操作系統虛擬化:操作系統虛擬化發生在內核層(操作系統的中央任務管理者),它是讓Linux和Windows環境並行運行的一種有效方式。企業也可以把虛擬操作系統推送到各台計算機上,這樣做:1)降低了大量硬件成本,因為計算機不再需要那么高的出廠配置;2)提高了安全性,因為所有虛擬實例都可以被監控和隔離;3)減少了花在軟件更新等IT服務上的時間。
- Network Functions Virtualization: Network functions virtualization (NFV) separates a network’s key functions (like directory services, file sharing, and IP configuration) so they can be distributed among environments. Once software functions are independent of the physical machines they once lived on, specific functions can be packaged together into a new network and assigned to an environment. Virtualizing networks reduces the number of physical components — like switches, routers, servers, cables, and hubs — that are needed to create multiple, independent networks, and it’s particularly popular in the telecommunications industry.
網絡功能虛擬化:網絡功能虛擬化(NFV)把網絡的關鍵功能(比如目錄服務、文件共享和IP配置)分離出來,以便把它們分發到不同的環境中。一旦軟件功能獨立於它們原先所在的物理機器,就可以把特定功能打包進一個新的網絡並分配給某個環境。網絡虛擬化減少了構建多個獨立網絡所需的物理組件,比如交換機、路由器、服務器、線纜和集線器,它在電信行業尤其流行。
8 — Distributed File Systems
A distributed file system is a client/server-based application that allows clients to access and process data stored on the server as if it were on their own computer. When a user accesses a file on the server, the server sends the user a copy of the file, which is cached on the user’s computer while the data is being processed and is then returned to the server.
分布式文件系統是一個基於客戶機/服務器的應用程序,它允許客戶端訪問和處理存儲在服務器上的數據,就像在自己的計算機上一樣。當用戶訪問服務器上的文件時,服務器向用戶發送文件副本,該文件在處理數據時緩存在用戶的計算機上,然后返回給服務器。
Ideally, a distributed file system organizes file and directory services of individual servers into a global directory in such a way that remote data access is not location-specific but is identical from any client. All files are accessible to all users of the global file system and organization is hierarchical and directory-based.
理想情況下,分布式文件系統把各個服務器上的文件和目錄服務組織到一個全局目錄中,使得遠程數據訪問不依賴具體位置,從任何客戶端看都是一樣的。全局文件系統中的所有文件對所有用戶都可訪問,其組織方式是分層的、基於目錄的。
Since more than one client may access the same data simultaneously, the server must have a mechanism in place (such as maintaining information about the times of access) to organize updates so that the client always receives the most current version of data and that data conflicts do not arise. Distributed file systems typically use file or database replication (distributing copies of data on multiple servers) to protect against data access failures.
Sun Microsystems’ Network File System (NFS), Novell NetWare, Microsoft’s Distributed File System, and IBM’s DFS are some examples of distributed file systems.
由於可能有多個客戶端同時訪問同一份數據,服務器必須有一種機制(比如維護訪問時間的信息)來組織更新,讓客戶端總是收到最新版本的數據並且不會產生數據沖突。分布式文件系統通常使用文件或數據庫復制(把數據副本分發到多台服務器上)來防止數據訪問失敗。
下面是一些分布式文件系統的例子:Sun Microsystems的網絡文件系統(NFS)、Novell NetWare、微軟的分布式文件系統,以及IBM的DFS。
9 — Distributed Shared Memory
Distributed Shared Memory (DSM) is a resource management component of a distributed operating system that implements the shared memory model in distributed systems, which have no physically shared memory. The shared memory provides a virtual address space that is shared among all computers in a distributed system.
分布式共享內存(DSM):
DSM是分布式操作系統中的一個資源管理組件,它在沒有物理共享內存的分布式系統中實現共享內存模型。這種共享內存為分布式系統中的所有計算機提供一個共享的虛擬地址空間。
In DSM, data is accessed from a shared space similar to the way that virtual memory is accessed. Data moves between secondary and main memory, as well as, between the distributed main memories of different nodes. Ownership of pages in memory starts out in some pre-defined state but changes during the course of normal operation. Ownership changes take place when data moves from one node to another due to an access by a particular process.
在DSM中,數據像訪問虛擬內存一樣從一個共享空間中訪問。數據既會在輔存和主存之間移動,也會在不同節點的分布式主存之間移動。內存頁面的所有權最初處於某種預定義的狀態,但會在正常運行過程中發生變化:當某個進程的訪問導致數據從一個節點移動到另一個節點時,所有權就會隨之改變。
Advantages of Distributed Shared Memory:
- Hide data movement and provide a simpler abstraction for sharing data. Programmers don’t need to worry about memory transfers between machines like when using the message passing model.
- Allows the passing of complex structures by reference, simplifying algorithm development for distributed applications.
- Takes advantage of “locality of reference” by moving the entire page containing the data referenced rather than just the piece of data.
- Cheaper to build than multiprocessor systems. Ideas can be implemented using normal hardware and do not require anything complex to connect the shared memory to the processors.
- Larger memory sizes are available to programs, by combining all physical memory of all nodes. This large memory will not incur disk latency due to swapping like in traditional distributed systems.
- Unlimited number of nodes can be used. Unlike multiprocessor systems where main memory is accessed via a common bus, thus limiting the size of the multiprocessor system.
- Programs written for shared memory multiprocessors can be run on DSM systems.
分布式共享內存的優點:
隱藏數據移動並為共享數據提供更簡單的抽象。程序員不需要像使用消息傳遞模型那樣擔心機器之間的內存傳輸。
允許通過引用傳遞復雜結構,簡化了分布式應用程序的算法開發。
利用“引用的局部性”,移動包含引用數據的整個頁面,而不僅僅是數據片段。
構建起來比多處理器系統成本低。用普通的硬件就能實現,不需要任何復雜的東西將共享內存連接到處理器。
通過將所有節點的物理內存相結合,可以為程序提供更大的內存大小。這種大內存不會像傳統的分布式系統那樣由於交換而導致磁盤延遲。
可以使用的節點數量沒有限制。這不同於通過公共總線訪問主存的多處理器系統,那種總線結構限制了多處理器系統的規模。
為共享內存多處理器編寫的程序可以在DSM系統上運行。
There are two different ways that nodes can be informed of who owns what page: invalidation and broadcast. Invalidation is a method that invalidates a page when some process asks for write access to that page and becomes its new owner. This way, the next time some other process tries to read or write to a copy of the page it thought it had, the page will not be available and the process will have to re-request access to that page. Broadcasting will automatically update all copies of a memory page when a process writes to it. This is also called write-update. This method is a lot less efficient and more difficult to implement because a new value has to be sent instead of an invalidation message.
節點有兩種方式可以知道頁面的所有權:失效和廣播。
失效:失效是當某個進程請求對該頁進行寫訪問並成為該頁的新所有者時,使該頁失效的方法。這樣,下次當其他進程試圖讀或寫頁面的副本時,頁面將不可用,進程將不得不重新請求對該頁面的訪問。
廣播:當進程寫入內存頁時,廣播將自動更新內存頁的所有副本。這也稱為寫更新。這個方法的效率要低得多,實現起來也更困難,因為必須發送一個新的值而不是一個無效消息。
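As a toy sketch of the invalidation idea (greatly simplified compared with a real DSM system; the directory layout and names are invented for illustration), a per-page directory records the owner and which nodes hold copies, and a write invalidates every other copy:

```c
#include <stdbool.h>
#include <stdio.h>

#define NODES 4

typedef struct {
    int  owner;             /* node currently allowed to write the page */
    bool copy[NODES];       /* which nodes hold a (read) copy           */
} page_dir_t;

/* Write-invalidate: the writer becomes the owner, all other copies die. */
static void write_page(page_dir_t *p, int writer) {
    for (int n = 0; n < NODES; n++) {
        if (n != writer && p->copy[n]) {
            p->copy[n] = false;            /* send invalidation to node n */
            printf("invalidate copy on node %d\n", n);
        }
    }
    p->owner = writer;
    p->copy[writer] = true;
}

/* A read on a node without a copy faults one in from the current owner. */
static void read_page(page_dir_t *p, int reader) {
    if (!p->copy[reader]) {
        printf("node %d fetches page from owner %d\n", reader, p->owner);
        p->copy[reader] = true;
    }
}

int main(void) {
    page_dir_t page = { .owner = 0, .copy = { true, false, false, false } };
    read_page(&page, 1);        /* node 1 gets a copy                     */
    read_page(&page, 2);        /* node 2 gets a copy                     */
    write_page(&page, 2);       /* node 2 writes: copies on 0 and 1 die   */
    read_page(&page, 0);        /* node 0 must re-fetch from new owner 2  */
    return 0;
}
```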
10 — Cloud Computing
More and more, we are seeing technology moving to the cloud. It’s not just a fad — the shift from traditional software models to the Internet has steadily gained momentum over the last 10 years. Looking ahead, the next decade of cloud computing promises new ways to collaborate everywhere, through mobile devices.
雲計算
我們看到越來越多的技術正在遷移到雲上,這不只是一時的風潮:從傳統軟件模式向互聯網的轉變在過去十年中穩步增強。展望未來,下一個十年的雲計算有望通過移動設備帶來隨時隨地協作的新方式。
So what is cloud computing? Essentially, cloud computing is a kind of outsourcing of computer programs. Using cloud computing, users are able to access software and applications from wherever they need, while it is being hosted by an outside party — in “the cloud.” This means that they do not have to worry about things such as storage and power, they can simply enjoy the end result.
那么,什么是雲計算呢?本質上,雲計算是一種對計算機程序的外包。借助雲計算,用戶可以在任何需要的地方訪問軟件和應用,而它們由外部一方托管在雲端。這意味着用戶不必操心存儲和電力之類的問題,只需享受最終的結果。
Traditional business applications have always been very complicated and expensive. The amount and variety of hardware and software required to run them are daunting. You need a whole team of experts to install, configure, test, run, secure, and update them. When you multiply this effort across dozens or hundreds of apps, it’s easy to see why even the biggest companies with the best IT departments aren’t getting the apps they need. Small and mid-sized businesses don’t stand a chance.
傳統的商業應用一直都非常復雜和昂貴,運行它們所需的硬件和軟件的數量和種類都令人望而生畏。你需要一整個專家團隊來安裝、配置、測試、運行、保護和更新它們。當你把這些工作量乘以幾十甚至上百個應用時,就不難理解為什么連擁有最好IT部門的大公司也得不到他們需要的應用程序,中小企業就更沒有機會了。
With cloud computing, you eliminate those headaches that come with storing your own data, because you’re not managing hardware and software — that becomes the responsibility of an experienced vendor like Salesforce and AWS. The shared infrastructure means it works like a utility: you only pay for what you need, upgrades are automatic, and scaling up or down is easy.
使用雲計算,你可以擺脫自己保存數據帶來的那些麻煩,因為你不用再管理硬件和軟件,這些成了Salesforce、AWS這類有經驗的供應商的責任。共享的基礎設施意味着它像公用事業一樣運作:你只為自己需要的部分付費,升級是自動的,擴容或縮容也很容易。
Cloud-based apps can be up and running in days or weeks, and they cost less. With a cloud app, you just open a browser, log in, customize the app, and start using it. Businesses are running all kinds of apps in the cloud, like customer relationship management (CRM), HR, accounting, and much more.
基於雲的應用幾天或幾周內就能上線運行,而且成本更低。使用雲應用時,你只需打開瀏覽器、登錄、做些自定義設置,然后就可以開始使用了。很多企業都在雲中運行各種各樣的應用,比如客戶關系管理(CRM)、人力資源、財務等等。
As cloud computing grows in popularity, thousands of companies are simply rebranding their non-cloud products and services as “cloud computing.” Always dig deeper when evaluating cloud offerings and keep in mind that if you have to buy and manage hardware and software, what you’re looking at isn’t really cloud computing but a false cloud.
隨着雲計算越來越受歡迎,成千上萬的公司只是將其非雲產品和服務重新命名為“雲計算”。在評估雲產品時要牢記如果你必須購買和管理硬件和軟件,那么你所看到的並不是真正的雲計算,而是虛假的雲。
Last Takeaway
As a software engineer, you will be part of a larger body of computer science, which encompasses hardware, operating systems, networking, data management and mining, and many other disciplines. The more engineers in each of these disciplines understand about the other disciplines, the better they will be able to interact with those other disciplines efficiently.
As the operating system is the “brain” that manages input, processing, and output, all other disciplines interact with the operating system. An understanding of how the operating system works will provide valuable insight into how the other disciplines work, as your interaction with those disciplines is managed by the operating system.
寫在最后:
作為一名軟件工程師,你將成為計算機科學這個更大整體中的一部分,它涵蓋硬件、操作系統、網絡、數據管理與挖掘以及許多其他學科。各個學科的工程師對其他學科了解得越多,就越能高效地與那些學科打交道。
操作系統是管理輸入、處理和輸出的大腦,所有其他學科都要和操作系統交互。由於你與這些學科的交互都由操作系統管理,了解操作系統如何工作,能為理解其他學科如何工作提供寶貴的洞察。
---------------分割線-----------------
翻譯小結:
之前在網上看見這文章,覺得很不錯。讓我對一些操作系統的概念有了更加深刻的理解。作為軟件工程師,我覺得操作系統知識就像是一棟樓的地基一樣。不管是什么語言的工程師都必須要了解地基是怎樣工作的,才能寫出穩定可靠的摩天大樓。你如果不懂這些基礎知識,估計就只能建造一些簡單的平房,或者茅草屋了。
前面看了一點,受限於英文水平,邊查邊看。看得很辛苦,並且很多詞匯翻譯完過一會就忘了,於是萌生了將整篇文章翻譯並記錄下來的想法。一方面,強迫自己去理解里面的一些概念,另一方面也提高一下自己的英文水平。
前后花了跨度有兩個星期吧,工作之余有時間就翻譯一會。前面還好,后面越來越煩躁,有一些段落都是直接用谷歌翻譯的,再想辦法理通順。甚至一度不想翻譯了,但想想既然做了,就做完這件事吧。勉勉強強的完成一件事情也比中途放棄一件事要好。
我知道很多地方翻譯的很渣,所以強烈建議讀者閱讀英文理解它,我在很多我自己不確定的地方后面都有括號說明。如果有更好的翻譯歡迎交流,我可以在上面修改,以便呈現出更好的文章給大家。