Open psalz opened 1 year ago
Yes, that would be a nice feature! I think GitHub runners only get something like 2 cores, but that would probably be enough to get a decent speed-up
I've been having a related thought: because GitHub limits a review to either 25 comments or 10 annotations, it doesn't always make sense to run clang-tidy over the entire set of changes, since that can easily produce far more feedback than GitHub can display.
In light of this, what about not calling clang-tidy once over the entire set of changes, but instead striking a middle ground: calling it per file and stopping once the output limit is reached?
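Roughly what I have in mind, as a sketch: a hypothetical `run_clang_tidy` helper (not the action's actual API) returns the diagnostics for a single file, and the loop bails out once GitHub's limit is hit:

```python
MAX_COMMENTS = 25  # GitHub's review-comment limit mentioned above

def collect_diagnostics(changed_files, run_clang_tidy, limit=MAX_COMMENTS):
    """Run clang-tidy per file, stopping once the comment limit is reached.

    `run_clang_tidy` is a hypothetical callable returning a list of
    diagnostics for one file; it is injected here to keep the loop testable.
    """
    diagnostics = []
    for path in changed_files:
        diagnostics.extend(run_clang_tidy(path))
        if len(diagnostics) >= limit:
            break  # no point paying for checks GitHub cannot display
    return diagnostics[:limit]
```

The early `break` is the whole point: files later in the diff are never checked once enough feedback has accumulated.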
> In light of this, what about not calling clang-tidy once over the entire set of changes, but instead striking a middle ground: calling it per file and stopping once the output limit is reached?
I'm not sure whether there is any performance benefit to checking multiple files at once instead of one after another, but in general I think that's a good idea, yes! The more substantial changes to the script's logic (invoking clang-tidy multiple times and merging the results) could then probably be shared between these two optimizations.
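For the merging part, deduplication could key on a diagnostic's location plus check name, so the same warning produced by several invocations is only reported once. A minimal sketch, assuming diagnostics are dicts with hypothetical `file`/`line`/`check` keys (the real diagnostic format may differ):

```python
def merge_diagnostics(runs):
    """Merge diagnostics from several clang-tidy invocations, dropping
    duplicates that share file, line, and check name."""
    seen = set()
    merged = []
    for run in runs:
        for diag in run:
            key = (diag["file"], diag["line"], diag["check"])
            if key not in seen:
                seen.add(key)
                merged.append(diag)
    return merged
```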
If a patch touches lots of files the action can actually take quite a while to complete, we are sometimes seeing times upward of 40 minutes. An option to parallelize across input files would therefore be greatly appreciated! It is my understanding that there is already some sort of diagnostic-deduplication happening, so this could probably be applied to duplicate results from parallel execution as well (i.e. for diagnostics generated in shared headers).
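Just to illustrate the fan-out side, here is one way such a parallel mode could look, again with an injected, hypothetical per-file runner (a real implementation would shell out to the clang-tidy binary instead):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(changed_files, run_clang_tidy, jobs=2):
    """Check files concurrently and flatten the per-file results.

    Duplicate diagnostics from shared headers would still need the
    existing deduplication pass afterwards; this only handles fan-out.
    """
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        per_file = pool.map(run_clang_tidy, changed_files)
    return [diag for diags in per_file for diag in diags]
```

`jobs=2` matches the roughly two cores available on hosted runners; `pool.map` keeps the results in input-file order, so the existing output formatting should not need to change.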