I was wondering if there’s any strong reason why we’re using the Make generator for CMake instead of, e.g., Ninja?
I’ve been playing a bit with Ninja lately and I must say I like how much quicker it seems to be for recompilations (i.e. during the regular dev cycle, after the initial build: modify a file, recompile, modify again, recompile, and so on).
What we would need:

- A proper Ninja recipe that picks it from the system when possible and installs it otherwise (using prefer_system appropriately). The recipe should properly check that the system version is recent enough.
- A proper O2 recipe update (let’s maybe start with that) to use Ninja.
Concerning the last point, we could perhaps experiment with the Common repository for starters: there is no Fortran code there and updates are infrequent, which makes it an excellent testbed.
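For the "right version" check mentioned above, a minimal shell sketch could look like the following. The 1.5 lower bound is purely an illustrative assumption, and the exact logic would of course live in the recipe’s system check:

```shell
# Hypothetical system check: accept the system ninja only if it is
# present and its version is at least 1.5 (illustrative lower bound).
ninja_ok() {
  command -v ninja > /dev/null || return 1
  v=$(ninja --version 2>/dev/null) || return 1
  major=${v%%.*}
  rest=${v#*.}
  minor=${rest%%.*}
  # succeed if major > 1, or major == 1 and minor >= 5
  [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 5 ]; }
}
```

The recipe would fall back to building Ninja from source whenever this check fails.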
We will need to tell CMake to generate its build files for Ninja:
cmake $SOURCEDIR -G Ninja blahblah...
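With the Ninja generator, the build/install cycle then becomes (a sketch; note that ninja is parallel by default, so no -j flag is needed):

```
ninja
ninja install
```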
Then instead of make and make install we can directly use:

cmake --build .
cmake --build . --target install
The above cmake statements are generic and work with GNU Makefiles as well. I honestly don’t know whether the installation step can be sped up by running it in parallel.
I am especially curious to see how our macOS installation time improves (if so) with Ninja!
To be really thorough, one might check that all build artifacts (binaries, libraries) are actually identical between the Make and Ninja builds. I might actually do that next, as it could be useful if we want to evaluate more build systems at some point.
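A quick way to do such a comparison could be a small shell sketch like this (the build-tree names are hypothetical; it byte-compares every file of one tree against its counterpart in the other):

```shell
# Sketch: report the files that differ between two build trees,
# e.g. build-make/ and build-ninja/ (hypothetical directory names).
compare_trees() {
  a="$1"
  b="$2"
  # list every file under the first tree, relative to its root,
  # and byte-compare it with the same path in the second tree
  (cd "$a" && find . -type f) | while read -r f; do
    cmp -s "$a/$f" "$b/$f" || echo "differs: $f"
  done
}
```

Note that some artifacts may legitimately differ (embedded timestamps, build paths), so a reported difference needs a closer look rather than being a failure per se.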
Actually, finding out where the time is spent in our build might be of some interest.
I don’t know how to do that with Ninja itself, but apparently Shake is able to consume Ninja files and seems to have a very nice build-profiling output (a report.html that is queryable).
Can we get the report somewhere? Does it include compilation time only, or the overhead as well? I have the impression that something silly (e.g. starting bash with interactive configuration) dominates over compile time.
As far as I can tell, the report covers everything. In the report, if you go to the “command plot” dropdown menu you’ll see three main areas: c++ compilation (green), cd (red, which is not actually cd but dictionary/rootmap generation, AFAIK), and “:” (light blue), which I’ve not yet figured out ;-). Using the “command table” you get the numbers for each command (if I get it correctly, these are the serial, unparallelized times, e.g. the total build is not 35 minutes!):
| command | count | total time | share |
|---------|-------|------------|-------|
| c++     | 459 × | 35m24s     | 90.17% |
| cd      | 47 ×  | 3m01s      | 7.67% |
| :       | 183 × | 36.61s     | 1.55% |
| cmake   | 1 ×   | 8.37s      | 0.36% |
| cc      | 1 ×   | 0.13s      | 0.01% |
@dberzano: I’m no expert on Shake itself (I only installed it last week), but basically, once Shake is installed (this requires the Haskell platform…) and CMake has generated the Ninja files, it’s just a matter of running shake -jN --profile. The -j is important: Shake uses 1 core by default, whereas Ninja, for instance, uses everything available by default. So I’d say it should be easy to integrate, yes.