Partial builds versus full builds in Visual C++

For most of my development work with Visual C++ I use partial builds, e.g. press F7 and only changed C++ files and their dependencies get rebuilt, followed by an incremental link. Before passing a version on to testing, I take the precaution of doing a full rebuild, which takes about 45 minutes on my current project. I have seen many posts and articles advocating this practice, but I wonder: is it necessary, and if so, why? Does it affect the delivered EXE or the associated PDB (which we also use in testing)? Would the software function any differently from a testing perspective?


For release builds I'm using VS2005 with incremental compilation and linking, and precompiled headers.


6 Answers



Hasn't everyone come across this usage pattern? I get weird build errors, and before even investigating I do a full rebuild, and the problem goes away.


This by itself seems to me to be good enough reason to do a full rebuild before a release.


Whether you would be willing to turn an incremental build that completes without problems over to testing, is a matter of taste, I think.




The partial build system works by checking the file dates of source files against the build products. So it can break if you e.g. restore an earlier file from source control: the restored file has a modified date earlier than the build product, so the product won't be rebuilt. To protect against such errors, a final build should always be a complete build. While you are developing, though, incremental builds are of course much more efficient.


Edit: And of course, doing a full rebuild also shields you from possible bugs in the incremental build system.




The basic problem is that compilation depends on the environment (command-line flags, available libraries, and probably some black magic), so two compilations will only produce the same result if they are performed under the same conditions. For testing and deployment, you want the environment to be as controlled as possible so that you aren't getting wacky behaviour from mismatched code. A good example: if you update a system library and then recompile only half of your files, half are still trying to use the old code and half are not. In a perfect world this would either error out right away or cause no problems at all, but sadly, sometimes neither happens. Doing a complete recompilation therefore avoids a lot of the problems associated with a staggered build process.




I would definitely recommend it. On a number of occasions with a large Visual C++ solution, I have seen the dependency checker fail to pick up some dependency on changed code. When that change is to a header file that affects the size of an object, very strange things can start to happen. I am sure the dependency checker has improved in VS 2008, but I still wouldn't trust it for a release build.




The biggest reason not to ship an incrementally linked binary is that some optimizations are disabled. The linker will leave padding between functions (to make it easier to replace them on the next incremental link). This adds some bloat to the binary. There may be extra jumps as well, which changes the memory access pattern and can cause extra paging and/or cache misses. Older versions of functions may continue to reside in the executable even though they are never called. This also leads to binary bloat and slower performance. And you certainly can't use link-time code generation with incremental linking, so you miss out on more optimizations.
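The padding and missed optimizations described above are controlled by compiler and linker switches. As a rough sketch of VS2005-era release settings (the file name is hypothetical, and a real link line would also name libraries and an entry point):

```
cl /c /O2 /GL widget.cpp
link /INCREMENTAL:NO /LTCG /OPT:REF /OPT:ICF widget.obj
```

/INCREMENTAL:NO eliminates the inter-function padding and retained stale functions; /GL plus /LTCG turns on link-time code generation; /OPT:REF and /OPT:ICF strip unreferenced functions and fold identical ones. In release configurations several of these are defaults, but setting them explicitly documents the intent.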


If you're giving a debug build to a tester, then it probably isn't a big deal. But your release candidates should be built from scratch in release mode, preferably on a dedicated build machine with a controlled environment.




Visual Studio has some problems with partial (incremental) builds (I mostly encountered linking errors), so from time to time a full rebuild is very useful.


If compilation times are long, there are two solutions:


  1. Use a parallel compilation tool to take advantage of your (assumed) multi-core hardware.

  2. Use a build machine. What I use most is a separate build machine, with a CruiseControl setup, that performs full rebuilds from time to time. The "official" release that I provide to the testing team, and eventually to the customer, is always taken from the build machine, never from a developer's environment.
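The build-machine rebuild can be scripted; with VS2005 the command line is typically something along these lines (solution name hypothetical):

```
devenv MySolution.sln /Rebuild "Release"
```

CruiseControl then simply invokes this after a clean checkout, so every official binary comes from a known, controlled environment.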


