linux-test-project / lcov

LCOV
GNU General Public License v2.0

speed is too slowly #167

Closed Prines closed 1 year ago

Prines commented 1 year ago

When I changed from gcc 4.9.2 to gcc 9.2.1, my lcov version changed from 1.14 to 1.16, but generating coverage reports became very slow. Have you encountered this situation?

henry2cox commented 1 year ago

By "generation of coverage reports", do you mean the 'genhtml' step, the "lcov --capture" step, or the 'lcov --add' step (if you need that one)? Can you quantify the performance difference exactly, and also share your command line?

Short answer is "yes": I have seen slowdowns in the past (though not necessarily between 1.14 and 1.16). Are you using branch coverage, by chance? That did become very, very slow at some point. Another past issue was the Perl JSON module; some implementations are very, very slow.

The other thing you could try is the lcov version mentioned in PR #86. It implements both a "--parallel" flag (telling the tool to split the job up as much as possible and distribute it across all the cores on your machine) and a "--profile" flag (telling the tool to dump data showing where it spends its time). In our environment, parallel processing gives performance that is nearly linear in the number of cores, up to about 30, where it starts to tail off. Large jobs benefit more than small jobs, simply because there is more work and therefore more opportunity for parallelism.
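A parallel capture along the lines described above might look like the following. This is a hedged sketch, not a verified invocation: the "--parallel" and "--profile" flags come from PR #86 (and later lcov 2.x releases), and the job count, directory, and output filename are illustrative assumptions. The command is only printed (a dry run), so the sketch works even where that lcov version is not installed.

```shell
# Sketch only: capture with the parallelism/profiling flags from PR #86.
# 'build' and 'coverage.info' are placeholder paths.
JOBS=$(nproc 2>/dev/null || echo 4)   # use all cores; fall back to 4 if nproc is missing
echo lcov --capture --directory build \
     --parallel "$JOBS" \
     --profile \
     --output-file coverage.info
```

Profiling first on a representative run, then picking a job count, avoids over-subscribing a shared machine.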

Of course, depending on your situation and your IT group, you may or may not be able to install and/or use a tool version you found somewhere on the internet, so this might not help you.

Henry

Prines commented 1 year ago

both "lcov–capture" and "genhtml" step become slower,Is there the function of multithreading

Prines commented 1 year ago

my command is:

lcov --rc lcov_branch_coverage=1 -c -d /tmp/coverage/gcda/ebs/build \
    --exclude '/apsara/alicpp/' \
    --exclude '/mag/workspace/ebs/build/' \
    --exclude '/mag/workspace/ebs/apsara_built/*' \
    -G /code/janus.tools/gcov \
    -o /tmp/coverage/info/utstnotdc-1-1-groups30-cover-info.json

My compilation output is very large.

henry2cox commented 1 year ago

Try adding "--rc geninfo_gcov_all_blocks=0" to your capture command line:

lcov --rc lcov_branch_coverage=1 --rc geninfo_gcov_all_blocks=0 …

In my experience, without the above flag the 'capture' step would hang and never finish. Your project sounds different, though: running slowly, but not hung, stuck in an infinite loop, or taking huge time. Since your symptoms differ, you may be seeing a different problem, so the above flag might not help you.

With respect to your other question: yes. The pull request referred to in my earlier reply implements multi-processing. (Perl doesn't really do multithreading, but it implements multiple processes quite nicely.)
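The multi-process idea above can be illustrated outside lcov as well. This is not lcov's implementation (which forks worker processes inside its Perl code); it is a hedged, self-contained sketch of the same fan-out pattern using xargs -P, with placeholder .gcda files created so it runs anywhere. In real use you would point find at your build tree and run the actual gcov invocation instead of echoing it.

```shell
# Illustration only: fan work out across processes with xargs -P,
# the same multi-process (not multi-thread) idea lcov uses via fork().
demo=$(mktemp -d)                              # placeholder tree, an assumption
touch "$demo/a.gcda" "$demo/b.gcda" "$demo/c.gcda"
find "$demo" -name '*.gcda' -print0 \
  | xargs -0 -P 4 -I{} echo "would run: gcov -b {}"   # swap echo for the real call
rm -rf "$demo"
```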

For curiosity: how large is "very large", say in terms of LOC, files, or unique code-containing directories? All of these affect runtime CPU performance, as well as peak memory requirements and both intermediate and final disk storage, any of which might affect you. Parallel execution increases the intermediate storage requirement (both memory and disk) and also increases network load in your compute farm. Your IT guys may or may not love you.
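To answer the sizing question concretely, the metrics above can be gathered with a few standard commands. This is a sketch under assumptions: SRC and GCDA_DIR are placeholder paths to point at your own source and build trees, and the file-extension list is illustrative.

```shell
# Rough sizing of a coverage job: file count, LOC, .gcda count,
# and unique code-containing directories. SRC/GCDA_DIR are assumptions.
SRC=src
GCDA_DIR=build
echo "source files: $(find "$SRC" \( -name '*.c' -o -name '*.cc' -o -name '*.cpp' \) 2>/dev/null | wc -l)"
echo "LOC:          $(find "$SRC" \( -name '*.c' -o -name '*.cc' -o -name '*.cpp' \) -exec cat {} + 2>/dev/null | wc -l)"
echo "gcda files:   $(find "$GCDA_DIR" -name '*.gcda' 2>/dev/null | wc -l)"
echo "code dirs:    $(find "$GCDA_DIR" -name '*.gcda' 2>/dev/null | xargs -r -n1 dirname | sort -u | wc -l)"
```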

Henry

henry2cox commented 1 year ago

Just following up: is this still an issue, or did the "geninfo_gcov_all_blocks=0" setting resolve the problem? Similarly, did parallelization help, or are there still issues? Thanks, Henry

henry2cox commented 1 year ago

Just following up: is this issue resolved, or is it still open?

henry2cox commented 1 year ago

Without further information, I'm going to assume that this issue is resolved, and will mark it closed next week. If the issue is NOT resolved, please describe what is currently happening and what you think the correct/desired behaviour should be. Please use the current master TOT for testing. Thanks, Henry