mandrenguyen opened this issue 1 year ago
assign core, reconstruction
New categories assigned: core,reconstruction
@Dr15Jones,@jfernan2,@makortel,@mandrenguyen,@smuzaffar you have been requested to review this Pull request/Issue and eventually sign? Thanks
A new Issue was created by @mandrenguyen Matthew Nguyen.
@rappoccio, @antoniovilela, @sextonkennedy, @makortel, @smuzaffar, @Dr15Jones can you please review it and eventually sign/assign? Thanks.
cms-bot commands are listed here
The problem is that the cmsRun process itself gets a segfault while being profiled by IgProf. The same segfault might happen when being profiled with VTune.
In case IgProf+cmsRun combination crashes, is any information on the job timings saved that can be used for comparison?
Usually the FastTimerService job completes, and the per-module averages are contained in the raw JSON file even if the resources pie chart is not readable.
The IgProfService dumps the profile after the first, middle, and next-to-last events. The first one might not have enough data to be meaningful.
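For reference, a minimal sketch of how that JSON summary can be enabled in a cmsRun configuration. The parameter names (writeJSONSummary, jsonFileName) are my recollection of the FastTimerService interface and the helper name is made up, so treat this as an assumption and check the service's fillDescriptions:

```python
# Minimal sketch (assumed parameter names): enable the FastTimerService JSON
# summary so that per-module averages end up in a resources file.
import FWCore.ParameterSet.Config as cms

def customiseForResourcesJson(process, json_name="step3_cpu.resources.json"):
    # Hypothetical helper: attach (or reuse) the FastTimerService and ask it
    # to write the JSON summary under the given file name.
    if not hasattr(process, "FastTimerService"):
        process.FastTimerService = cms.Service("FastTimerService")
    process.FastTimerService.writeJSONSummary = cms.untracked.bool(True)
    process.FastTimerService.jsonFileName = cms.untracked.string(json_name)
    return process
```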
@mandrenguyen Can you point me to a PR so I can look at the logs?
@gartung The last one we tried with profiling enabled was: https://github.com/cms-sw/cmssw/pull/43107
The crashes under profilers are quite likely caused by the memory corruption inside TensorFlow (when run through IgProf or VTune) that has been investigated in https://github.com/cms-sw/cmssw/issues/42444.
The FastTimerService should suffice. Still, it seems not to be active in RelVals.
> The issue seems to come from igprof, which apparently can no longer really be supported. [...] is to try to move to VTune.
For my education, is this replacement documented somewhere? I still see igprof listed in the RecoIntegration CMS Twiki.
@mmusich it is expected that VTune gives the same problem as igprof, so the replacement has not been done. Indeed, this is a real showstopper for RECO developments, since we cannot monitor the time profiling in PRs.
> so the replacement has not been done. Indeed, this is a real showstopper for RECO developments, since we cannot monitor the time profiling in PRs.
I see, that's bad news. I gather the same holds true for user checks when developing (regardless of the time profiling in PRs)
Is the most burning problem that there is no timing information (entire job, per module) or that the real IgProf/VTune profile (with function-level information) is missing (because of crash)?
> Is the most burning problem that there is no timing information (entire job, per module) or that the real IgProf/VTune profile (with function-level information) is missing (because of crash)?
For me (personally) at least, having the function-level information would be really helpful.
IMHO the crash of IgProf/VTune is a problem, even though there is timing info from the FastTimer module, but the real issue is not having a comparison of the baseline time performance vs baseline+PR, which forces us to detect total increases in the profiles a posteriori, once a pre-release is built, and then figure out which PR(s) were responsible...
Perhaps a comparison script based on the FastTimer output could be useful, even if not optimal. Do you think this is possible, @gartung? Thanks
Yes, it would be possible. In fact, there is already a script that merges two FastTimer output files: https://github.com/fwyzard/circles/blob/master/scripts/merge.py. Changing it to diff two files should be possible.
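For illustration, a rough sketch of what such a diff could look like. This is not the actual diff.py; the field names ("modules", "type", "label", "time_real") assume the layout of the FastTimerService resources JSON and may need adjusting:

```python
#!/usr/bin/env python3
"""Sketch of a per-module diff of two FastTimerService resources JSON files."""
import json
import sys

def load_modules(filename):
    """Map (module type, label) -> real time, assuming a "modules" list."""
    with open(filename) as f:
        data = json.load(f)
    return {(m["type"], m["label"]): m.get("time_real", 0.0) for m in data["modules"]}

def main(baseline_file, pr_file):
    baseline = load_modules(baseline_file)
    pr = load_modules(pr_file)
    for key in sorted(set(baseline) | set(pr)):
        before = baseline.get(key, 0.0)
        after = pr.get(key, 0.0)
        # report the difference as "PR - baseline"
        print(f"{key[0]:40s} {key[1]:40s} {before:10.3f} {after:10.3f} {after - before:+10.3f}")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```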
You can try this script: https://raw.githubusercontent.com/gartung/circles/master/scripts/diff.py
If you add `enable profiling` as a comment on the pull request, the FastTimerService is run as part of the profiling.
Thanks, but the real need is to have the comparison in the PR, to see the changes introduced, the same way we had with igprof.
`enable profiling` at this point only runs the FastTimer in the PR FW, but gives no comparison of timing, which is what would allow one to decide.
I am working on a pull request to cms-bot that will add the diff.py script and run it when `enable profiling` is commented. The PR timing and the diff of the PR timing vs the IB release will be linked in the summary.
https://github.com/cms-sw/cms-bot/pull/2343
This script has been added to the pull request profiling and produces an HTML table of all the modules in the resources file and their differences: https://raw.githubusercontent.com/cms-sw/cms-bot/refs/heads/master/comparisons/resources-diff.py
Thank you very much @gartung, I will give it a try and give you feedback.
At present I am testing it on: https://github.com/cms-sw/cmssw/pull/46133
@gartung the new FastTimer diff script is working fine, see for example [1][2]. I have some feedback:
Thank you very much
[1] https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-232398/41851/profiling/29834.21/diff-step3_cpu.resources.json.html
[2] https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-d5463d/41850/profiling/29834.21/diff-step3_cpu.resources.json.html
[3] https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-d5463d/41850/profiling/index-29834.21.html
> due to initialization times reading inputs we are having too many false positives
Do you mean something along the lines of the baseline being run first, and that leading to the input file being cached in memory?
If we are to go to that level of precision, I'd suggest to `dd` the input file to `/dev/null` first (to cache it in memory), and to run one warm-up job (could be the same number of events as the actual profiling job, or fewer) prior to the actual measurement.
In the table, I'd suggest to also add units to both time and memory, and in the cells to present first the baseline and then the PR value (but to keep the difference as "PR - baseline", as we do in the max-memory table).
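As an illustration of the warm-up idea, a minimal Python sketch that reads the input file once so it lands in the OS page cache before the timed job starts (equivalent in spirit to `dd if=<input> of=/dev/null`; the helper name and chunk size are arbitrary):

```python
#!/usr/bin/env python3
"""Sketch: pre-read a file to warm the OS page cache before a timed cmsRun job."""
import sys

def warm_cache(path, chunk_size=16 * 1024 * 1024):
    # Read and discard the whole file; we only care about it being cached.
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass

if __name__ == "__main__":
    warm_cache(sys.argv[1])
```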
> due to initialization times reading inputs we are having too many false positives
>
> Do you mean something along the lines of the baseline being run first, and that leading to the input file being cached in memory? If we are to go to that level of precision, I'd suggest to `dd` the input file to `/dev/null` first (to cache it in memory), and to run one warm-up job (could be the same number of events as the actual profiling job, or fewer) prior to the actual measurement.
I would not seek precision, but something that allows us to tell whether there is a real change or not. It seems to me that with 10 events we are left with statistical fluctuations, which give more than a 3-4% difference in about 90% of the modules being compared (orange everywhere).
Moreover, the fluctuations seem to make some modules increase and others decrease, so perhaps a global total value of time per event as a summary makes more sense, if we cannot get rid of these ups and downs.
Would a sum over module labels per module type give a better indication? The reco-event-loop shows the time in each module type's produce method.
I think so, at least as a first result, to see at a glance whether the timing has increased or not; the module-by-module info is still necessary to spot the culprits. However, I still believe 10-event jobs have a very large uncertainty.
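A rough sketch of the kind of per-module-type aggregation being discussed, not the actual cms-bot code; it assumes the resources JSON has a "total" entry with an "events" count and a "modules" list with "type" and "time_real" fields:

```python
#!/usr/bin/env python3
"""Sketch: sum per-module real time by module type and report time per event."""
import json
import sys
from collections import defaultdict

def time_per_type(filename):
    with open(filename) as f:
        data = json.load(f)
    events = data.get("total", {}).get("events", 1) or 1
    totals = defaultdict(float)
    for module in data["modules"]:
        totals[module["type"]] += module.get("time_real", 0.0)
    # average time per event for each module type, largest first
    return sorted(((t / events, module_type) for module_type, t in totals.items()), reverse=True)

if __name__ == "__main__":
    for avg, module_type in time_per_type(sys.argv[1]):
        print(f"{module_type:50s} {avg:10.3f}")
```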
The variance in module timing might be caused by more than one Jenkins job running on vocms11 at the same time. I can restrict the baseline and PR profiling jobs so that only one at a time can run on vocms11.
I determined that the IB profiling was being run on cmsprofile-11 and the PR profiling was being run on vocms011. The former is a multicore VM, the latter is a bare-metal machine. This could also account for the differences.
Is the "IB profiling" the same as "PR test baseline profiling"? I'm asking because for DQM/reco comparison purposes the "IB" and "PR test baseline" are different things.
There is no profiling done for baseline. The comparison is with the corresponding IB.
> The variance in module timing might be caused by more than one Jenkins job running on vocms11 at the same time. I can restrict the baseline and PR profiling jobs so that only one at a time can run on vocms11.
vocms011 is also used to monitor RECO profiling of new releases, so it is used centrally from time to time. What I don't understand is why several simultaneous jobs would cause an almost random change in timing across modules.
> What I don't understand is why several simultaneous jobs would cause an almost random change in timing across modules.
For example, modern CPUs adjust their operating frequency based on the load of the machine. Other processes may also interfere with e.g. disk, memory, and/or network usage.
Yes, I see that, but that would shift all the modules in one direction, not in a random way, am I right?
Another effect is that the OS can 'steal' a CPU from a process to use it for something else temporarily. The heavier the load on the machine, the more likely this is to happen. If it happens while a module is running, it makes the time between the start and stop of the module longer than it would have been without the interruption.
For a comparison of the IB timing and memory FastTimerService numbers, you can also look at https://monit-grafana.cern.ch/d/000000530/cms-monitoring-project?viewPanel=60&orgId=11, which is the link on the IB dashboard labeled "IB profiling results".
Thanks @gartung. However, IBs have many changes in them; we need to inspect PR by PR to disentangle things in an adiabatic way... I would still push to implement these suggestions, though: https://github.com/cms-sw/cmssw/issues/43166#issuecomment-2387870861
I tried using the step2.root produced by the IB profiling build with the IB+PR profiling in step3. I also tried to account for timing differences from running on two different VMs by dividing the time per module by the total time. The percentage difference is calculated from these fractions of the total time. I still see large percentage differences.
https://github.com/cms-sw/cms-bot/pull/2366
https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-018b9a/42603/profiling/29834.21/diff-step3_cpu.resources.json.html
I will next try profiling the IB on the same machine to remove timing differences from running on two different VMs.
I am also working on summing metrics across module labels for each module type.
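For illustration, a minimal sketch of the fraction-of-total normalisation described above (again assuming the "modules"/"type"/"label"/"time_real" layout; not the actual script):

```python
#!/usr/bin/env python3
"""Sketch: compare two resources files using each module's fraction of total time."""
import json
import sys

def fractions(filename):
    with open(filename) as f:
        data = json.load(f)
    total = sum(m.get("time_real", 0.0) for m in data["modules"]) or 1.0
    return {(m["type"], m["label"]): m.get("time_real", 0.0) / total for m in data["modules"]}

def main(baseline_file, pr_file):
    base, pr = fractions(baseline_file), fractions(pr_file)
    for key in sorted(set(base) | set(pr)):
        b, p = base.get(key, 0.0), pr.get(key, 0.0)
        if b > 0:
            delta = 100.0 * (p - b) / b  # percent change of the fraction, PR vs baseline
        else:
            delta = float("nan")  # module absent (or zero time) in the baseline
        print(f"{key[0]:40s} {key[1]:40s} {delta:+8.1f}%")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```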
Running the IB and IB+PR profiling on the same VM still results in differences: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-018b9a/42636/profiling/29834.21/diff-step3_cpu.resources.json.html
With the number of events increased to 100, the variance between two runs of step 3 on the same VM is smaller: https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-018b9a/42636/profiling/12634.21/diff-step3_cpu.resources.json.html
For a few months now we have not been able to see the CPU impact of a given pull request, which used to be possible with the `enable profiling` option in the Jenkins tests. This is a bit problematic for integrating new features, as we won't easily be able to keep track of changes in performance until a pre-release is built. The issue seems to come from igprof, which apparently can no longer really be supported. One suggestion from @gartung is to try to move to VTune.