Open paulharris opened 8 years ago
In interactive mode, it prints both when jobs start and when they end. See #1010 (and the mailing list thread linked from there) for discussions on this.
What did you do previously when you say that a compile hung? I usually ssh in and diagnose locally in these cases, and then I can just run ps to see what's going on, so for me this isn't a huge drawback (and compiler hangs are fairly rare). If there was a switch, how would you use it?
+1 -- I actually preferred the old behavior, where I could always see which jobs had been started. It's too bad that there is no flag to preserve it. Interactive mode is a workaround, but it is much noisier (if one just cares about the started jobs).
Sorry @nico, only now do I see your question. Re the compiler hang: there was a period when the compiler would often get stuck on a file. I'd be able to see it in the gvim console and Ctrl-C to kill the command, change something, and compile again. When I hit this problem, it would recur frequently throughout the day, until a workaround could be figured out.
It is also helpful to see it start compiling "reallyhuge.cpp" files, and at that point I might break the compile and continue coding, rather than wait for that huge file to finish compiling.
If I could change ninja to print out when it starts a job, I'd do something like this in gvim:
:set makeprg=ninja\ -printonstart\ ../build
Interactive mode is less nice because it prints the commands twice, whereas one generally wants to see them once. I feel that this change makes ninja less nice to use on the command terminal (i.e. direct interaction with user, much like the default output of 'make') to improve usability for external applications that parse the output (i.e. indirect interaction with user).
I don't understand why it can't be nice to use for both cases? (e.g. with a command line flag that switches to the old behavior as suggested by @paulharris)
Stupid question, but what is this "interactive mode"? I'm using version 1.7.1, and it either prints out the finished jobs (on a dumb terminal), or just keeps printing what it's doing on one line, like a status bar ... no list of started or finished jobs.
This would also be addressed by #1026, which prints the running jobs when ninja is interrupted by a signal.
@colincross: that would only address the situation where ninja is interrupted. It doesn't fix the issue that with this new behavior, ninja has become less nice to use on the command line when one wants to see jobs as they are started.
It solves the original issue (hung compilers).
For long running tools, the output of #1020 is much nicer, it tells you when the job started and clears it from the table when it finishes.
Yes -- sadly that PR has been sitting there for over a year..
I made a related attempt in #1191 that is a small[er] change and I believe allows for writing #1020 as a simpler filter wrapping ninja output, instead of embedding all of that.
re compiler-hung, there was a period of time when the compiler would often get stuck on a file.
If I understand it correctly, the problem in that case is that ninja might keep showing an already-finished job on the status line, so it isn't apparent which job has hung. In my frontend I've fixed that by updating the status line with a still-running job whenever a job finishes: https://github.com/jhasse/ja/commit/e01efe2c4c3c93d4f4ab6eff81fa1467ca5731a4#diff-987df73aeefc3bfb7b87e9dd2b76f354
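The fix described above can be sketched as a tiny simulation (a simplified model only, not the actual frontend code; job names are made up):

```python
# Sketch of the status-line fix: when a job finishes, display one of the
# jobs that is *still running* instead of the job that just ended, so a
# hung job eventually becomes the one shown on the status line.

def status_line(running, finished_job=None):
    """Return the job name to display after `finished_job` completes."""
    if finished_job is not None:
        running.discard(finished_job)
    # Show any still-running job (alphabetically first, for determinism here).
    return min(running) if running else "(idle)"

running = {"slow.cpp", "huge.cpp", "tiny.cpp"}
print(status_line(running, "tiny.cpp"))  # a job that is still running
print(status_line(running, "huge.cpp"))  # another still-running job
print(status_line(running, "slow.cpp"))  # nothing left to show
```

The key design point is that the status line is driven by the set of running jobs, not by completion events alone.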
It is also helpful to see it start compiling "reallyhuge.cpp" files, and at that point I might break the compile and continue coding, rather than wait for that huge file to finish compiling.
You might want to check out my frontend https://bixense.com/ja/ :) It'll immediately return on the first failure and that huge file continues to compile in the background.
Because I spent way too long trying to work around this: run ninja -v and then killall clang-10 (or whatever your compiler is). This will print actionable information in case of a hang.
This is a usability bug: ninja doesn't print the command line before the command is executed. One has to run ps to see what ninja is actually executing.
+1, this is perhaps my biggest gripe with ninja :/
That was a mistake GNU Make made a couple of decades ago and has since fixed. Such a mistake makes build debugging less straightforward and more time-consuming, and gives the user no feedback about what is currently executing.
However, that is still ninja's design rationale, the very behaviour deemed broken in GNU Make; see https://ninja-build.org/manual.html#_comparison_to_make:
Command output is always buffered. This means commands running in parallel don’t interleave their output, and when a command fails we can print its failure output next to the full command line that produced the failure.
That same page claims that, unlike GNU Make, ninja allows for:
A build edge may have multiple outputs.
That never corresponded to reality: GNU Make has supported multiple outputs for at least two decades, see https://www.gnu.org/software/make/manual/make.html#Multiple-Targets
Output directories are always implicitly created before running the command that relies on them.
In GNU Make, target directories are created with order-only prerequisites, and being explicit about that detects errors earliest.
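For readers unfamiliar with the idiom, here is a minimal sketch of the order-only prerequisite pattern mentioned above (directory and file names are illustrative):

```make
# The directory after the "|" is an order-only prerequisite: it must exist
# before the recipe runs, but its timestamp never triggers rebuilds.
obj/%.o: %.c | obj
	$(CC) $(CFLAGS) -c $< -o $@

obj:
	mkdir -p $@
```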
But fundamentally, make has a lot of features: suffix rules, functions, built-in rules that e.g. search for RCS files when building source.
Built-in and suffix rules are disabled with the -r command-line option, which should always be used with modern Makefiles, or in the Makefile itself with .SUFFIXES: (delete the default suffixes).
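As a sketch, both ways of disabling the defaults can be expressed inside the Makefile itself:

```make
# Equivalent to passing -r / --no-builtin-rules on the command line.
MAKEFLAGS += -r
# Clear the suffix list so old-style suffix rules are disabled as well.
.SUFFIXES:
```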
From that same page again:
It is born from my work on the Chromium browser project, which has over 30,000 source files and whose other build systems (including one built from custom non-recursive Makefiles) would take ten seconds to start building after changing one file. Ninja is under a second.
That is very nice. However, without a breakdown of what took how much time, and without a reproduction, that is, at best, a statistical study with a sample size of one, also known as an anecdote.
I built a non-recursive build system from scratch in 2009 using GNU Make, for a C++ project of similar size, and it took 1.5 seconds for make to say that there was nothing to rebuild when no source file had changed since the last build. Another anecdote without a reproduction.
Fundamentally, ninja cannot possibly build faster than make, because the bottleneck is getting file timestamps via the stat/statx syscalls, which has been well understood for decades. GNU Make checking built-in/suffix rules to rebuild every source file, including the Makefile, is optional functionality: the -r command-line option disables it completely, and empty (pattern) rules can override the built-ins. In other words, a build.ninja converted to a Makefile executes just as fast and supports all ninja functionality except discovering extra dependencies at build time, but not the other way around.
My personal opinion is that the ninja developers didn't bother reading the documentation of the tool they wanted to replace, and hence still make big false claims about core GNU Make functionality, all while repeating the same mistakes GNU Make made and fixed two or more decades ago. Which makes me think that ninja was made mostly for the wrong reasons, and that using software made by developers who obviously don't read documentation and don't learn from prior-art mistakes is a risk with no upside.
That was a mistake GNU Make made a couple of decades ago and has since fixed. ...
Command output is always buffered. This means commands running in parallel don’t interleave their output, and when a command fails we can print its failure output next to the full command line that produced the failure.
GNU Make added support for output buffering in 4.0, and the default output buffer mode appears to act just like ninja. In general it's a tradeoff between seeing what's running and having reasonable output in parallel builds.
That never corresponded to reality: GNU Make has supported multiple outputs for at least two decades, see https://www.gnu.org/software/make/manual/make.html#Multiple-Targets
GNU Make didn't add the equivalent functionality (grouped targets in explicit rules) until version 4.3 a few years ago.
That never corresponded to reality: GNU Make has supported multiple outputs for at least two decades, see https://www.gnu.org/software/make/manual/make.html#Multiple-Targets
GNU Make didn't add the equivalent functionality (grouped targets) until version 4.3 a few years ago.
You haven't read the link in full.
Multiple-target pattern rules were first documented in 1988.
One might argue that multiple outputs were supported only for pattern rules, like %.o %.d: %.c, which could be too much of a constraint, since the output names must share the % stem with the prerequisite. The assumption was that when multiple outputs are produced from one input, it is reasonable for the outputs to share a part (the % stem) of the input filename. However, it only takes an extra rule to symlink or copy a file to any other desired name. GNU Make 4.3 only removed the pattern-rule constraint.
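For illustration, the two forms side by side (file names are hypothetical; the &: syntax requires GNU Make 4.3 or newer):

```make
# Pattern rule: one recipe invocation produces both outputs, which must
# share the % stem. This form has worked in GNU Make for decades.
%.o %.d: %.c
	$(CC) -MMD -c $< -o $*.o

# Grouped explicit targets: same one-recipe-many-outputs semantics without
# the shared-stem constraint (GNU Make 4.3+).
parser.tab.c parser.tab.h &: parser.y
	bison -d $<
```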
That was a mistake GNU Make made a couple of decades ago and has since fixed. ...
Command output is always buffered. This means commands running in parallel don’t interleave their output, and when a command fails we can print its failure output next to the full command line that produced the failure.
GNU Make added support for output buffering in 4.0, and the default output buffer mode appears to act just like ninja. In general it's a tradeoff between seeing what's running and having reasonable output in parallel builds.
The default output buffering for GNU Make is none. That's exactly unlike ninja. Ninja should provide buffering options, as GNU Make does, or at least line buffering otherwise.
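For reference, the GNU Make options under discussion (available since GNU Make 4.0); this is a sketch, and setting them via MAKEFLAGS inside the Makefile is my assumption about the most convenient form:

```make
# From the command line, the equivalent is one of:
#   make -O            # same as --output-sync=target
#   make -Oline        # buffer and flush output per line
#   make -Orecurse     # buffer an entire recursive make invocation
MAKEFLAGS += --output-sync=line
```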
In general it's a tradeoff between seeing what's running and having reasonable output in parallel builds.
Such a trade-off for a build system makes it harder to notice increased command execution times, diagnose errors, and debug builds, for the benefit of not interleaving the outputs of different commands. The latter is hardly a useful benefit. That's the wrong trade-off and, often, the number one reason for ditching ninja: people waste a day debugging a broken build only to discover that ninja doesn't print the command before executing it, a bug reported in 2016 and not fixed to this day. Whereas GNU Make does no buffering by default, but can do line, target, or recurse buffering when explicitly requested, since 2013; facts stated in the links you kindly provided. Ninja only ever does the equivalent of GNU Make's target buffering, yet claims that it is superior to what GNU Make does. Another big false claim or uninformed opinion of the ninja developers, contradicting freely available GNU Make documentation.
The old ninja printed the command/task when it started. Since 1.7 or so, it changed to print them when they finish, so that the build output is printed along with the task.
This is a good change; however, sometimes the compiler gets stuck in an infinite loop, and the old ninja's behaviour was handy because I could tell which file it was getting stuck on.
Also, I could see which tasks it was processing, and I could mentally estimate how long the tasks would take.
Maybe it could have a 'status bar' style output that shows which jobs are currently building? E.g. aptitude, apt-get, and parallel FTP/network transfers tend to have such a status bar.
Otherwise, would be good to have a switch to print the tasks as they start, for situations where the compiler is causing problems.