valassi opened this issue 3 years ago
(This is related to #5 by the way).
A quick update on this after a few months: the issue is still there. In single precision there are a few NaNs, see for example https://github.com/madgraph5/madgraph4gpu/blob/a698c62b25b3c89d0b1e9567de06e97a514b8586/epochX/cudacpp/tput/logs_eemumu_manu/log_eemumu_manu_f_inl0_hrd0.txt#L112
I had even done some minimal debugging at some point (mainly to understand how to detect NaN at all when fast math is enabled!). See https://github.com/madgraph5/madgraph4gpu/blob/a698c62b25b3c89d0b1e9567de06e97a514b8586/epochX/cudacpp/ee_mumu/SubProcesses/CrossSectionKernels.cc#L129
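For reference, the usual trick (a sketch of the general approach, not necessarily what that file does) is to test the IEEE-754 bit pattern directly, because both `std::isnan` and the `x != x` idiom may be constant-folded to false when `-ffast-math` tells the compiler to assume NaNs cannot occur:

```cpp
#include <cstdint>
#include <cstring>

// Bit-pattern NaN check that survives -ffast-math: an IEEE-754 value is NaN
// iff all exponent bits are set and the mantissa is non-zero. Integer bit
// operations are unaffected by fast-math assumptions.
inline bool fp_is_nan( float x )
{
  uint32_t bits;
  std::memcpy( &bits, &x, sizeof( bits ) ); // type-pun safely via memcpy
  return ( bits & 0x7F800000u ) == 0x7F800000u && ( bits & 0x007FFFFFu ) != 0;
}

inline bool fp_is_nan( double x )
{
  uint64_t bits;
  std::memcpy( &bits, &x, sizeof( bits ) );
  return ( bits & 0x7FF0000000000000ull ) == 0x7FF0000000000000ull
      && ( bits & 0x000FFFFFFFFFFFFFull ) != 0;
}
```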
There is some interesting work to be done here, which is however largely debugging.
This is not an academic exercise. The final goal of this study is to understand whether the matrix element calculations can be moved from double to single precision. This would mean a factor 2 speedup both in vectorized C++ (twice as many elements in SIMD vectors) and in CUDA (typically, twice as many FLOPs on NVIDIA data center cards).
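As a back-of-the-envelope check of the SIMD half of that claim (a trivial sketch assuming 512-bit AVX-512 vectors; the 2x ratio holds for any fixed vector width):

```cpp
#include <cstddef>

// A fixed-width SIMD register holds twice as many floats as doubles,
// which is where the potential factor-2 C++ throughput gain comes from.
constexpr std::size_t vectorBits = 512;
static_assert( vectorBits / ( 8 * sizeof( double ) ) == 8, "8 doubles per 512-bit vector" );
static_assert( vectorBits / ( 8 * sizeof( float ) ) == 16, "16 floats per 512-bit vector" );
```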
(This is also related to #117, where fast math first appeared.)
I have just made a small test in a PR that I am about to merge: https://github.com/madgraph5/madgraph4gpu/pull/379/commits/45b7b3303d8e700b21bbf66eab4ba334b01a39e4
I have disabled fast math in eemumu and run in both double and float precision; results:
As discussed in PR #211, the average ME is not the same for CUDA and C++ in single-precision ggttgg.
See for instance https://github.com/valassi/madgraph4gpu/commit/a75ee3b6ba38d0be49f294c241a5e8b0682c84df#diff-45e40fdc2f6b7c71419c9f5e7e36267d7951e21c32488d6ecf35de3ec28ced57
In double precision the results are similar to those, but not the same, and the CUDA and C++ values agree with each other to more digits: https://github.com/valassi/madgraph4gpu/commit/33e7c04ecddb596ee7eba390f0a55435a31e6287#diff-45e40fdc2f6b7c71419c9f5e7e36267d7951e21c32488d6ecf35de3ec28ced57
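A plausible mechanism for such platform-dependent averages (a toy illustration, not madgraph4gpu code) is that in single precision the average over many MEs depends on the accumulation order and algorithm, which differ between a CUDA reduction and a sequential C++ loop. Compensated (Kahan) summation recovers most of the digits a naive sum loses; note it must be compiled without `-ffast-math`, which may legally re-associate and drop the compensation:

```cpp
#include <cstdio>
#include <vector>

// Naive float accumulation: rounding error grows as the running sum grows.
float naiveSum( const std::vector<float>& v )
{
  float s = 0.f;
  for ( float x : v ) s += x;
  return s;
}

// Kahan compensated summation: c tracks the low-order bits lost by s.
float kahanSum( const std::vector<float>& v )
{
  float s = 0.f, c = 0.f;
  for ( float x : v )
  {
    const float y = x - c;
    const float t = s + y;
    c = ( t - s ) - y;
    s = t;
  }
  return s;
}

int main()
{
  // ~10^6 nearly-equal fake "MEs": the two averages differ visibly.
  const std::vector<float> v( 1 << 20, 1.0f + 1e-7f );
  std::printf( "naive average: %.9g\n", naiveSum( v ) / v.size() );
  std::printf( "kahan average: %.9g\n", kahanSum( v ) / v.size() );
}
```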
Note that for eemumu, in single precision the same average ME is printed out (if I remember correctly?).
NO, I remembered wrong. On eemumu, with MANY more events, I get a different number of NaNs! And as a consequence also a different average ME: https://github.com/madgraph5/madgraph4gpu/commit/7173757e7575bc946f27ab93ed8a121d387bbfee#diff-6716e7ab4317b4e76c92074d38021be37ad0eda68f248fb16f11e679f26114a6
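One way to make such runs comparable (a hypothetical sketch, reusing the fast-math-safe `fp_is_nan` check from above; `reportMEs` is not an existing function) is to count the NaN MEs separately and average only over the finite ones:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

bool fp_is_nan( float x ); // the bit-pattern check sketched earlier

// Count NaN MEs separately and average only the finite entries, so two runs
// producing a different number of NaNs can still be compared on equal footing.
void reportMEs( const std::vector<float>& matrixElements )
{
  std::size_t nNan = 0;
  double sum = 0; // accumulate in double even when the MEs are floats
  for ( const float me : matrixElements )
  {
    if ( fp_is_nan( me ) ) { ++nNan; continue; }
    sum += me;
  }
  const std::size_t nGood = matrixElements.size() - nNan;
  std::printf( "NaN MEs: %zu out of %zu\n", nNan, matrixElements.size() );
  if ( nGood > 0 ) std::printf( "average ME (finite entries only): %g\n", sum / nGood );
}
```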
So there is clearly some numerical precision issue to investigate for eemumu too.