Currently it is hard to determine the real performance of a CPG translation, especially relative to the size of the input. We do have absolute values that are stored in the `StatisticsHolder` and can be printed at the end of the run (which might actually be nice to enable with an option). But a better comparison would be the time it took in relation to the LoC of the input. We have a rather poor LoC metric for C++ and nothing at all for the other languages. Since we usually parse our file input ourselves (with the exception of C++), a language-agnostic solution would be to calculate the LoC based on the file input.
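A minimal sketch of such a language-agnostic LoC count over the raw file input (the helper name and the decision to only skip blank lines are assumptions; comment handling would still need per-language logic):

```kotlin
import java.io.File

/**
 * Counts lines of code directly from the file input, independent of any
 * language frontend. Blank lines are skipped; comments are still counted
 * because detecting them would require language-specific knowledge.
 */
fun countLinesOfCode(file: File): Int =
    file.readLines().count { it.isNotBlank() }
```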
Some nice things to consider:
- Add an option to enable automatic printing of the results using `BenchmarkResults.print`
- Gather the total number of nodes
- Gather the number of problems, possibly even the number of "categories" or problem texts that appear (see the sketch below)
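A simplified, self-contained sketch of the node and problem counting; the `Node` and `ProblemNode` classes here are stand-ins for illustration, not the actual CPG classes:

```kotlin
// Simplified stand-in types; the real CPG node and problem classes differ.
open class Node(val children: List<Node> = emptyList())
class ProblemNode(val problemText: String, children: List<Node> = emptyList()) : Node(children)

/** Walks the graph and tallies the total node count plus problem texts per "category". */
fun collectStatistics(root: Node): Pair<Int, Map<String, Int>> {
    var totalNodes = 0
    val problemsByText = mutableMapOf<String, Int>()
    fun visit(node: Node) {
        totalNodes++
        if (node is ProblemNode) {
            problemsByText[node.problemText] = (problemsByText[node.problemText] ?: 0) + 1
        }
        node.children.forEach { visit(it) }
    }
    visit(root)
    return totalNodes to problemsByText
}
```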
OK, it seems I already implemented some of these things in the `StatisticsCollectionPass` and we completely forgot about this pass, so we should just enable it then...
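Assuming the pass only needs to be registered on the `TranslationConfiguration`, enabling it could look roughly like this (the import path and the exact `registerPass` signature depend on the cpg version and are assumptions here):

```kotlin
import de.fraunhofer.aisec.cpg.TranslationConfiguration
import de.fraunhofer.aisec.cpg.passes.StatisticsCollectionPass
import java.io.File

// Register the forgotten pass on top of the default passes so the
// statistics are actually collected again.
val config = TranslationConfiguration.builder()
    .sourceLocations(File("src/main/cpp/example.cpp"))
    .defaultPasses()
    // depending on the version, registerPass takes a class reference or a pass instance
    .registerPass(StatisticsCollectionPass::class)
    .build()
```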