Time waste of execv() and fork()

I am currently learning about fork() and execv() and I had a question regarding the efficiency of the combination.


I was shown the following standard code:


pid = fork();
if (pid < 0) {
    // handle fork error
} else if (pid == 0) {
    execv("son_prog", argv_son);
    _exit(1);  // reached only if execv fails
}
// father (parent) code continues here

I know that fork() clones the entire process (copying the entire heap, etc.) and that execv() replaces the current address space with that of the new program. With this in mind, doesn't that make this combination very inefficient? We copy the entire address space of a process and then immediately overwrite it.


So my question:
What is the advantage of using this combination (instead of some other solution) that makes people still use it, even though it seems wasteful?


5 Answers



What is the advantage of using this combination (instead of some other solution) that makes people still use it, even though it seems wasteful?


You have to create a new process somehow. There are very few ways for a userspace program to accomplish that. POSIX used to have vfork() alongside fork(), and some systems may have their own mechanisms, such as the Linux-specific clone(), but since 2008, POSIX specifies only fork() and the posix_spawn() family. The fork + exec route is more traditional, is well understood, and has few drawbacks (see below). The posix_spawn family is designed as a special-purpose substitute for use in contexts that present difficulties for fork(); you can find details in the "Rationale" section of its specification.


This excerpt from the Linux man page for vfork() may be illuminating:


Under Linux, fork(2) is implemented using copy-on-write pages, so the only penalty incurred by fork(2) is the time and memory required to duplicate the parent’s page tables, and to create a unique task structure for the child. However, in the bad old days a fork(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an exec(3) is done. Thus, for greater efficiency, BSD introduced the vfork() system call, which did not fully copy the address space of the parent process, but borrowed the parent’s memory and thread of control until a call to execve(2) or an exit occurred. The parent process was suspended while the child was using its resources. The use of vfork() was tricky: for example, not modifying data in the parent process depended on knowing which variables are held in a register.


(Emphasis added)


Thus, your concern about waste is not well-founded for modern systems (not limited to Linux), but it was indeed an issue historically, and there were indeed mechanisms designed to avoid it. These days, most of those mechanisms are obsolete.




Another answer states:


However, in the bad old days a fork(2) would require making a complete copy of the caller’s data space, often needlessly, since usually immediately afterwards an exec(3) is done.

Obviously, one person's bad old days are a lot younger than others remember.


The original UNIX systems did not have the memory to run multiple processes, and they did not have an MMU for keeping several processes in physical memory, ready to run at the same logical address space: the system swapped processes that weren't currently running out to disk.


The fork system call was almost entirely the same as swapping the current process out to disk, except for the return value and for not replacing the remaining in-memory copy by swapping in another process. Since you had to swap out the parent process anyway in order to run the child, fork+exec incurred no extra overhead.


It's true that there was a period when fork+exec was awkward: when MMUs provided a mapping between logical and physical address spaces, but page faults did not preserve enough information for copy-on-write and a number of other virtual-memory/demand-paging schemes to be feasible.


This situation was painful enough, and not just for UNIX, that hardware page-fault handling was adapted to become "replayable" pretty fast.




Not any longer. With COW (copy-on-write), the data stays shared after fork(), and a page is copied only when one of the two processes (parent or child) tries to write to it.


In the past:
The fork() system call copied the address space of the calling process (the parent) to create a new process (the child). The copying of the parent's address space into the child was the most expensive part of the fork() operation.


A call to fork() is frequently followed almost immediately by a call to exec() in the child process, which replaces the child's memory with a new program. This is what the shell typically does, for example. In this case, the time spent copying the parent's address space is largely wasted, because the child process will use very little of its memory before calling exec().


For this reason, later versions of Unix took advantage of virtual memory hardware to allow the parent and child to share the memory mapped into their respective address spaces until one of the processes actually modifies it. This technique is known as copy-on-write. To do this, on fork() the kernel would copy the address space mappings from the parent to the child instead of the contents of the mapped pages, and at the same time mark the now-shared pages read-only. When one of the two processes tries to write to one of these shared pages, the process takes a page fault. At this point, the Unix kernel realizes that the page was really a "virtual" or "copy-on-write" copy, and so it makes a new, private, writable copy of the page for the faulting process. In this way, the contents of individual pages aren't actually copied until they are actually written to. This optimization makes a fork() followed by an exec() in the child much cheaper: the child will probably only need to copy one page (the current page of its stack) before it calls exec().




It turns out all those COW page faults are not at all cheap when the process has a few gigabytes of writable RAM. They're all going to fault once, even if the child has long since called exec(). Because the child of fork() is no longer allowed to allocate memory even in the single-threaded case (you can thank Apple for that one), arranging to call vfork()/exec() instead is hardly more difficult now.


The real advantage to the vfork()/exec() model is you can set the child up with an arbitrary current directory, arbitrary environment variables, and arbitrary fs handles (not just stdin/stdout/stderr), an arbitrary signal mask, and some arbitrary shared memory (using the shared memory syscalls) without having a twenty-argument CreateProcess() API that gets a few more arguments every few years.


It turned out the "oops, I leaked handles being opened by another thread" gaffe from the early days of threading was fixable in userspace without process-wide locking, thanks to /proc. The same would not be fixable in the giant CreateProcess() model without a new OS version and convincing everybody to call the new API.


So there you have it. An accident of design ended up far better than the directly designed solution.




A process created by exec() et al. will inherit its file handles from the parent process (including stdin, stdout, stderr). If the parent changes these after calling fork() but before calling exec(), it can control the child's standard streams.



