madgraph5 / madgraph4gpu

GPU development for the Madgraph5_aMC@NLO event generator software package

processes for the paper #344

Open oliviermattelaer opened 2 years ago

oliviermattelaer commented 2 years ago

I would suggest the following processes for the paper:

  1. g g > t t~ g
  2. g g > t t~ g g
  3. g g > t t~ g g g

In terms of processes to use to check that the code can handle most of the cases:

  1. import model heft; generate g g > h
  2. generate u u~ > d d~
valassi commented 2 years ago

Hi Olivier, thanks. I would suggest also adding eemumu - first because it is quite different (much lighter in computations) and may have some interesting numbers, and second because this is what we had published in the CHEP proceedings, so it can be interesting to compare to those numbers. What do you think?

jtchilders commented 2 years ago

looks good to me.

valassi commented 2 years ago

I am documenting a few specificities of ggttggg in #346. Feel free to add more observations please!

valassi commented 2 years ago

I have just merged PR #345. This contains a couple of useful things for the paper, following Olivier's suggestions.

The summary of all results for the five processes I look at (eemumu, ggtt+0,1,2,3g) is here: https://github.com/madgraph5/madgraph4gpu/blob/master/epochX/cudacpp/tput/summaryTable.txt

I tried several combinations of the two compilers (gcc, icx) and the two inlining options (HELINL=0, HELINL=1); the full numbers are in the next comment.

There are quite a few differences between the two compilers and the two inlining options, still to be understood/tweaked, but I would consider this set of results the baseline for our comparison.

Note that I give one CUDA number and several SIMD numbers for C++. The nice thing is that the factor 4 between no SIMD and 512y SIMD for double (and the factor 8 for float) seems to always be there, also for ggttggg. The baseline of the baseline is the single CUDA result and the single 512y/C++ result.
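
As a footnote on where those factors come from: the 512y build uses AVX-512VL instructions on 256-bit ymm registers, so one vector register holds 4 doubles or 8 floats. Below is a minimal sketch of the vectorisation pattern with gcc/clang vector extensions (illustrative only, not the actual generated code; the names fptype and neppV are used here just for the example):

```cpp
// Illustrative sketch, not the actual cudacpp source: the C++ MEs are vectorised
// with gcc/clang vector extensions, so the ideal SIMD speedup is simply the number
// of fptype values per vector register. The "512y" build uses AVX512VL instructions
// on 256-bit ymm registers: 4 doubles or 8 floats per register, matching the
// factors 4 and 8 quoted above.
#include <cstddef>

using fptype = double;                                      // FPTYPE=d (float for FPTYPE=f)
constexpr std::size_t vecbytes = 32;                        // 256-bit ymm registers (512y)
constexpr std::size_t neppV = vecbytes / sizeof( fptype );  // 4 doubles, or 8 floats

// one SIMD "page" of neppV events, processed in lockstep by a single instruction
typedef fptype fptype_v __attribute__( ( vector_size( 32 ) ) );

fptype_v addTwice( const fptype_v& x ) { return x + x; }    // compiles to one vector op
```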

As the complexity increases and the tests take longer, I reduce the number of events (or gpublocks/threads) for ggttggg. For C++, even a few events would be enough to reach plateau performance, but I always try to run a reasonable number for CUDA too. In CUDA I always run the same number of events as in C++, to compare the ME values; but for ggttgg and ggttggg I do a second CUDA run with more gpublocks/threads, to reach the plateau. Typically for a V100 the bare minimum is 64 blocks and 256 threads (below that, the performance always drops by factors). The detailed configs are here https://github.com/madgraph5/madgraph4gpu/blob/956f0e6c02e08ad13de592c49634aa240b0297e5/epochX/cudacpp/tput/throughputX.sh#L393 (look at exeArgs2 if it exists, else at exeArgs, for the CUDA blocks/threads).
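
For context, the blocks/threads numbers translate directly into events per iteration, since one event is computed per GPU thread. A minimal sketch of what the configuration means (illustrative only, not the actual check executable code):

```cuda
// Illustrative CUDA sketch, not the actual madgraph4gpu kernel: one event is
// computed per GPU thread, so gpublocks x gputhreads fixes the number of events
// per iteration; 64 x 256 = 16384 is the smallest grid that keeps a V100 busy.
#include <cstdio>

__global__ void sigmaKinStub( double* matrixElements )
{
  const int ievt = blockIdx.x * blockDim.x + threadIdx.x; // unique event index
  matrixElements[ievt] = 0.; // placeholder for the real ME of event ievt
}

int main()
{
  const int gpublocks = 64, gputhreads = 256;  // "bare minimum" V100 plateau config
  const int nevt = gpublocks * gputhreads;     // 16384 events per iteration
  double* devMEs = nullptr;
  cudaMalloc( &devMEs, nevt * sizeof( double ) );
  sigmaKinStub<<<gpublocks, gputhreads>>>( devMEs );
  cudaDeviceSynchronize();
  printf( "computed %d MEs in one kernel launch\n", nevt );
  cudaFree( devMEs );
  return 0;
}
```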

Voilà, those are my full performance numbers as of today. They will still evolve (especially with split kernels etc.).

I will also look at the two processes that Olivier suggested as a proof of concept of generation.

(One final word of caution: I think I have some small functional bugs in the calculations, which I will look at. Related, or maybe independent: the different compilers start giving quite different results on ggttggg... maybe it's just the order of adding the ~1000 diagrams...)
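
On that last point, a trivial reminder of why the summation order alone can change the result (hypothetical numbers, not from the actual code):

```cpp
// Floating-point addition is not associative: if two compilers (or two SIMD modes)
// sum the ~1000 diagram contributions in a different order, the final ME can
// legitimately differ in the last digits. Hypothetical values, for illustration only.
#include <cstdio>

int main()
{
  const double big = 1.0e20, small = 1.0;
  const double sum1 = ( big + small ) - big; // 0: "small" is absorbed by "big"
  const double sum2 = ( big - big ) + small; // 1: same three terms, different order
  printf( "sum1=%g sum2=%g\n", sum1, sum2 );
  return 0;
}
```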

valassi commented 2 years ago

Just to put it in full, as of today:

*** FPTYPE=d ******************************************************************

Revision c2e67b4 [nvcc 11.6.55 (gcc 10.2.0)] 
HELINL=0
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    1.35e+09    1.41e+08    1.45e+07    5.20e+05    1.18e+04    
CPP/none    1.67e+06    2.01e+05    2.48e+04    1.81e+03    7.22e+01    
CPP/sse4    3.13e+06    3.17e+05    4.54e+04    3.34e+03    1.32e+02    
CPP/avx2    5.54e+06    5.64e+05    8.86e+04    6.83e+03    2.61e+02    
CPP/512y    5.82e+06    6.15e+05    9.83e+04    7.49e+03    2.88e+02    
CPP/512z    4.65e+06    3.75e+05    7.19e+04    6.52e+03    2.94e+02    

Revision c2e67b4 [nvcc 11.6.55 (gcc 10.2.0)] 
HELINL=1
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    1.38e+09    1.42e+08                3.85e+05                
CPP/none    4.97e+06    2.38e+05                3.91e+02                
CPP/sse4    8.95e+06    2.78e+05                2.94e+03                
CPP/avx2    1.20e+07    4.44e+05                6.09e+03                
CPP/512y    1.23e+07    4.54e+05                7.52e+03                
CPP/512z    8.28e+06    3.43e+05                6.66e+03                

Revision 4f3229d [nvcc 11.6.55 (icx 20210400, clang 13.0.0, gcc 10.2.0)] 
HELINL=0
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    1.33e+09    1.42e+08    1.45e+07    5.14e+05    1.19e+04    
CPP/none    7.60e+06    2.15e+05    2.43e+04    1.50e+03    7.21e+01    
CPP/sse4    7.89e+06    4.45e+05    4.57e+04    2.82e+03    1.05e+02    
CPP/avx2    1.19e+07    6.93e+05    1.04e+05    7.61e+03    2.42e+02    
CPP/512y    1.19e+07    7.50e+05    1.09e+05    8.45e+03    2.74e+02    
CPP/512z    9.37e+06    5.09e+05    7.85e+04    5.85e+03    2.66e+02    

Revision 4f3229d [nvcc 11.6.55 (icx 20210400, clang 13.0.0, gcc 10.2.0)] 
HELINL=1
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    1.33e+09    1.41e+08                3.85e+05                
CPP/none    7.71e+06    2.65e+05                1.92e+03                
CPP/sse4    7.93e+06    4.44e+05                3.77e+03                
CPP/avx2    1.19e+07    8.16e+05                1.00e+04                
CPP/512y    1.18e+07    8.64e+05                1.15e+04                
CPP/512z    9.19e+06    5.83e+05                1.09e+04                

*** FPTYPE=f ******************************************************************

Revision c2e67b4 [nvcc 11.6.55 (gcc 10.2.0)] 
HELINL=0
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    3.26e+09    3.79e+08    4.75e+07    9.71e+05    2.66e+04    
CPP/none    1.72e+06    2.06e+05    2.50e+04    1.87e+03    7.67e+01    
CPP/sse4    6.14e+06    4.80e+05    8.33e+04    6.97e+03    2.87e+02    
CPP/avx2    1.15e+07    1.04e+06    1.75e+05    1.36e+04    5.21e+02    
CPP/512y    1.21e+07    1.10e+06    1.85e+05    1.48e+04    5.66e+02    
CPP/512z    9.32e+06    7.64e+05    1.47e+05    1.30e+04    5.81e+02    

Revision c2e67b4 [nvcc 11.6.55 (gcc 10.2.0)] 
HELINL=1
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    3.23e+09    3.80e+08                7.48e+05                
CPP/none    1.22e+07    2.48e+05                4.74e+02                
CPP/sse4    1.80e+07    5.40e+05                6.75e+03                
CPP/avx2    2.53e+07    7.02e+05                1.19e+04                
CPP/512y    2.61e+07    7.15e+05                1.47e+04                
CPP/512z    1.74e+07    5.63e+05                1.30e+04                

Revision 4f3229d [nvcc 11.6.55 (icx 20210400, clang 13.0.0, gcc 10.2.0)] 
HELINL=0
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    3.24e+09    3.79e+08    4.71e+07    9.66e+05    2.67e+04    
CPP/none    3.52e+06    2.07e+05    2.52e+04    1.84e+03    7.21e+01    
CPP/sse4    1.35e+07    6.88e+05    9.48e+04    6.12e+03    2.49e+02    
CPP/avx2    2.55e+07    1.12e+06    1.46e+05    1.36e+04    5.08e+02    
CPP/512y    2.57e+07    1.38e+06    2.10e+05    1.69e+04    5.22e+02    
CPP/512z    2.33e+07    5.70e+05    1.21e+05    1.22e+04    5.30e+02    

Revision 4f3229d [nvcc 11.6.55 (icx 20210400, clang 13.0.0, gcc 10.2.0)] 
HELINL=1
            eemumu      ggtt        ggttg       ggttgg      ggttggg     
CUD/none    3.25e+09    3.81e+08                7.49e+05                
CPP/none    3.58e+06    2.57e+05                2.47e+03                
CPP/sse4    1.40e+07    8.31e+05                8.59e+03                
CPP/avx2    2.57e+07    1.41e+06                1.96e+04                
CPP/512y    2.67e+07    1.55e+06                2.26e+04                
CPP/512z    2.33e+07    6.07e+05                2.01e+04                
valassi commented 2 years ago

About uudd generation, I opened #349. I just completed PR #350, which I am about to merge.

The heft generation is a bit trickier; I will create a separate PR.

valassi commented 2 years ago

I have opened issue #351 about the heft code generation, and a WIP PR #352. There are a few fundamental issues to discuss with Olivier there first (should the base plugin put the Higgs mass into cIPD so that it ends up in constant memory?).

valassi commented 2 years ago

Hi @oliviermattelaer, about the EFT Higgs in #351: in the end I have a physics question! I had two build problems

So my questions:

For the moment I will assume that I can simply remove both the helicity and mass arguments, and modify sxxxx and the rest accordingly, then submit a PR for review. But let me know please! Thanks Andrea
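
Concretely, what I have in mind is something along these lines (a rough sketch only, not the final generated code; the scalar wavefunction only needs the momentum and the incoming/outgoing flag, so both the helicity and the mass arguments can go):

```cpp
// Rough sketch of a simplified scalar wavefunction routine (illustrative only):
// the helicity and mass arguments are dropped because a scalar wavefunction does
// not depend on them, following the existing Fortran convention.
#include <complex>

using fptype = double;
using cxtype = std::complex<fptype>;

void sxxxx( const fptype p[4], // particle four-momentum (E, px, py, pz)
            const int nss,     // +1/-1 flag for final/initial state (HELAS-style)
            cxtype sc[3] )     // output scalar wavefunction
{
  sc[0] = cxtype( p[0] * nss, p[3] * nss );
  sc[1] = cxtype( p[1] * nss, p[2] * nss );
  sc[2] = cxtype( 1., 0. );
}
```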

oliviermattelaer commented 2 years ago

* can you confirm that I can remove the helicity argument from the sxxxx function? It is there unused in C++, and it is absent in Fortran

Yes, this can technically be removed. Keeping it might be easier for the code generation, but it's just an if statement.

* can you confirm that also the mass has nothing to do there and can be removed as an argument from the sxxxx function? Again it is there unused in C++, and it is absent in Fortran... If I can remove it, my initial problem also trivially disappears, because I no longer need to pass `Parameters_heft::getInstance()->mdl_MH` at all!

Yes, this is correct. (Note that MH might be needed for the coupling/propagator, but indeed not for the initial state.) However, this type of need exists for other processes: for g g > t t~ you also depend on the top mass, and it enters ixxxx and oxxxx. So you need to be able to access the mass in this type of routine; how do you handle those?

* just to be sure, can you confirm that the rest looks right, i.e. that this calculation of EFT gg>h should not depend on the momentum of the Higgs? (It seems to make sense: the three-momentum is always zero in the centre-of-mass frame of the Higgs?)

Yes this is correct for this computation.

valassi commented 2 years ago

Hi Olivier, thanks! OK, so I will remove those two arguments. About the mass in ggtt, it works out of the box: everything that is used there gets translated into cIPC/cIPD and ends up in constant CUDA memory (or static C++ memory). I probably changed something over time, but it was definitely originally your code (I am not even sure why they are called cIPC and cIPD!). Andrea
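
For the record, this is roughly the pattern I mean (a simplified sketch, not the actual generated CPPProcess code; here cIPD just holds the independent parameters, e.g. the top mass and width for ggtt):

```cpp
// Simplified sketch of the cIPD pattern (not the actual generated code): the
// independent physics parameters are copied once at initialisation into CUDA
// constant memory (or a static array in the C++ build) and read by the kernels.
#include <cstring>

using fptype = double;

#ifdef __CUDACC__
__device__ __constant__ fptype cIPD[2]; // e.g. { mdl_MT, mdl_WT } for gg>ttbar
#else
static fptype cIPD[2];                  // same data in static memory for C++
#endif

void initIPD( const fptype mdl_MT, const fptype mdl_WT )
{
  const fptype tIPD[2] = { mdl_MT, mdl_WT };
#ifdef __CUDACC__
  cudaMemcpyToSymbol( cIPD, tIPD, 2 * sizeof( fptype ) ); // copy once to constant memory
#else
  std::memcpy( cIPD, tIPD, 2 * sizeof( fptype ) );
#endif
}
```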

valassi commented 2 years ago

Hi @oliviermattelaer again, next physics question, in #358! I get a build warning from rambo, which makes me think that maybe a 2->1 process like gg>h is not a good example for this exercise (do we need phase space sampling at all?). Are we not always repeating the same ME calculation with the same momenta, independently of the random numbers? For the moment I will just ignore the warning anyway... let me know if you have other suggestions. (Anyway, this was very useful to find other issues in the code!) Thanks Andrea
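
(For completeness, my understanding of why there is nothing to sample: for fixed incoming momenta the one-body phase space collapses to a delta function,

$$\mathrm{d}\Phi_1 = \frac{\mathrm{d}^3 p_H}{(2\pi)^3\, 2E_H}\,(2\pi)^4\,\delta^4(p_{g_1} + p_{g_2} - p_H) \quad\Rightarrow\quad \Phi_1 = 2\pi\,\delta(\hat{s} - m_H^2),$$

so the Higgs momentum is entirely fixed by the initial state and the random numbers cannot change the kinematics at all.)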

oliviermattelaer commented 2 years ago

Hi,

Yes, indeed 2>1 processes are special for the integration. But that is not the main point of this check, which is rather to cover the case of a scalar particle and of a non-SM model.

Cheers,

Olivier


valassi commented 2 years ago

Hi Olivier, ok very good, then I will just keep the warning in the code and check that the ME generation works (indeed it does now). Thanks!

valassi commented 1 year ago

I am not sure this issue is the best place for this, but since it is open I will add these comments here. I just want to give an overview of the processes we already have and the ones we should be adding, and why.

Currently we have these 7 SA and 6 MAD processes for cudacpp

What I would like to add includes

Much lower priority, but eventually relevant for performance tests (runtime AND build speed!):

Comments welcome...

cc @oliviermattelaer @roiser @zeniheisser @hageboeck @whhopkins @jtchilders @nscottnichols