~$ ../build/Vrap inputE605nlo.dat 7 0.6 ; pineappl convolute test.pineappl.lz4 NNPDF40_nnlo_as_01180
NLO = 8.0375773
y = 0.6; d^2sigma/dM/dy = 8.0375773 pm 0.00033933784
Final result: 8.0375773 +/- 0.00033933784
8.0375773
LHAPDF 6.4.0 loading /usr/share/lhapdf/LHAPDF/NNPDF40_nnlo_as_01180/NNPDF40_nnlo_as_01180_0000.dat
NNPDF40_nnlo_as_01180 PDF set, member #0, version 1
bin x0 diff scale uncertainty
---+-+-+-----------+--------+--------
0 0 1 8.0491532e0 -10.60% 14.69%
Please have a look (to check whether I understood how the logs are to be stored) and see whether the level of accuracy is reasonable.
Also, fwiw, if I turn off the qqb channel at NLO (which is the only one that can become divergent), the accuracy improves, so I guess there is some numerical instability due to the divergences.
We might want to improve on that before jumping to NNLO, using vrap as a testing ground for the more complicated situations we might encounter when interfacing pineappl with Matrix, MCFM or NNLOJET. I guess at this stage we can already use vrap to generate theory predictions at the same level as for all other DY datasets, so we have leeway to make a detour.
@scarlehoff thanks for doing that, it looks OK. In the case where muF == Q the contribution of the log is zero so you can simply skip filling the log grid in that case, IMO.
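To illustrate why the log grid can be skipped: the factorization-scale logs enter the prediction multiplied by log(muF^2 / Q^2), which vanishes identically when muF == Q. A minimal sketch of that combination (the function and weight names here are hypothetical, not the PineAPPL API):

```python
import math

def dsigma(muf2, q2, w_central, w_log):
    """Hypothetical combination of a central grid and a muF-log grid:
    sigma = w_central + w_log * log(muF^2 / Q^2)."""
    return w_central + w_log * math.log(muf2 / q2)

# When muF == Q the log factor is exactly zero, so whatever is stored
# in the log grid never contributes -- it can safely be left unfilled.
central_only = dsigma(muf2=8100.0, q2=8100.0, w_central=8.0, w_log=1.3)
print(central_only)  # the w_log term drops out entirely
```

Since the log factor is applied at convolution time, skipping the fill when muF == Q saves work without changing the result.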
This is ready now @cschwan