I was wondering if there's any strong reason why we're using the Make generator for CMake instead of e.g. Ninja?
I've been playing a bit with Ninja lately and must say I like how it seems to be quite a bit quicker for incremental rebuilds (i.e. during the regular dev cycle: after the initial build, you modify a file, recompile, modify again, recompile, and so on).
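For reference, switching generators is just a configure-time flag; a minimal sketch, with an illustrative source path:

```
# Configure with the Ninja generator instead of the default Makefiles
cmake -G Ninja /path/to/source

# Ninja parallelizes across all available cores by default
ninja
```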
To be really thorough, one might want to check that all build artifacts (binaries, libraries) are actually identical between the Make and Ninja builds. I might actually do that next, as it could be useful if we want to evaluate more build systems at some point.
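Something like the following could do it; a rough sketch, assuming the two build trees are named build-make/ and build-ninja/ and that artifacts land in bin/ and lib/ (keeping in mind that embedded timestamps can cause spurious diffs if the builds aren't reproducible):

```
# Hash every binary and library in each tree, then diff the manifests
(cd build-make  && find bin lib -type f -exec sha256sum {} + | sort -k2) > make.sha
(cd build-ninja && find bin lib -type f -exec sha256sum {} + | sort -k2) > ninja.sha
diff make.sha ninja.sha
```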
Can we get the report somewhere? Does it include compilation time only, or also the overhead? I have the impression that we have something silly (e.g. starting bash with interactive configuration) that dominates the compile time.
As far as I can tell, the report covers everything. If you go to the "command plot" dropdown menu you'll see three main areas: C++ compilation (green), cd (red, which is not actually cd but dictionary / rootmap generation AFAIK), and ":" (light blue), which I haven't figured out yet ;-). Using the "command table" you get the numbers for each command (the numbers are the unparallelized ones if I understand correctly, i.e. the total build does not actually take 35 minutes!)
@dberzano: I'm no expert on Shake itself (I only installed it last week), but basically, once Shake is installed (which requires the Haskell platform…) and CMake has generated the Ninja files, it's just a matter of running `shake -jN --profile` (the -j is important, as Shake uses a single core by default, whereas Ninja, for instance, uses everything available by default). So I'd say it should be easy to integrate, yes.
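To spell it out, here's a minimal sketch of the whole sequence (the -j8 value and source path are illustrative; adjust to your machine and checkout):

```
# One-time setup: install Shake (needs a Haskell toolchain, e.g. cabal)
cabal install shake

# Configure with the Ninja generator so CMake emits build.ninja
cmake -G Ninja /path/to/source

# Build with Shake instead of Ninja: -jN matters because Shake defaults
# to a single core; --profile writes an HTML profiling report
shake -j8 --profile
```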