NNPDF / nnpdf

An open-source machine learning framework for global analyses of parton distributions.
https://docs.nnpdf.science/
GNU General Public License v3.0

Polarised data #905

Closed enocera closed 6 months ago

enocera commented 4 years ago

This is a list of inclusive DIS data to be implemented and included in a fit.

| Experiment | Reference | HEPData |
| --- | --- | --- |
| EMC | Nucl.Phys. B328 (1989) 1 | https://hepdata.net/record/ins280143 |
| SMC | Phys.Rev. D58 (1998) 112001 | https://hepdata.net/record/ins471981 |
| SMClowx | Phys.Rev. D60 (1999) 072004 | n/a (from paper) |
| E142 | Phys.Rev. D54 (1996) 6620 | https://hepdata.net/record/ins424108 |
| E143 | Phys.Rev. D58 (1998) 112003 | https://hepdata.net/record/ins467140 |
| E154 | Phys.Rev.Lett. 79 (1997) 26 | https://hepdata.net/record/ins443170 |
| | Phys.Lett. B405 (1997) 180 | https://hepdata.net/record/ins443186 |
| E155 | Phys.Lett. B493 (2000) 19 | https://hepdata.net/record/ins530798 |
| COMPASS_P | Phys.Lett. B690 (2010) 466 | https://hepdata.net/record/ins843494 |
| COMPASS_D | Phys.Lett. B647 (2007) 8 | https://hepdata.net/record/ins726688 |
| COMPASS_P15 | Phys.Lett. B753 (2016) 18 | https://hepdata.net/record/ins1357198 |
| COMPASS_D15 | Phys.Lett. B769 (2017) 34 | https://hepdata.net/record/ins1501480 |
| HERMES97 | Phys.Lett. B404 (1997) 383 | https://www.hepdata.net/record/ins440904 |
| HERMES | Phys.Rev. D75 (2007) 012007 | https://hepdata.net/record/ins726689 |
| JLAB-E06-014 | Phys.Lett. B744 (2015) 309 | n/a (from paper) |
| JLAB-E97-103 | Phys.Rev.Lett. 95 (2005) 142002 | https://hepdata.net/record/ins684137 |
| JLAB-E99-117 | Phys.Rev. C70 (2004) 065207 | https://hepdata.net/record/ins650244 |
| JLAB-EG1-DVCS | Phys.Rev. C90 (2014) 025212 | https://hepdata.net/record/ins1292133 |
| JLAB-E93-009 | Phys.Lett. B641 (2006) 11 | https://hepdata.net/record/ins717523 |

Whenever data are available for both the virtual photon asymmetry (A1) and the longitudinal structure function (g1), both should be implemented. If the longitudinal spin asymmetry (A||) is provided, it should be implemented as well. The transverse spin asymmetry (A⊥ and/or A2) and the transverse structure function (g2) should not be implemented. As far as I recollect, the full experimental covariance matrix is provided only for the HERMES experiment, but this must be checked.

tgiani commented 4 years ago

Great, thanks!

juanrojochacon commented 4 years ago

Just for my curiosity @enocera @tgiani are you guys trying to resurrect the NNPDFpol fits with the new code? That would be really great! Or perhaps this is an exercise related to the lattice workshop whitepaper?

enocera commented 4 years ago

Both things.

juanrojochacon commented 4 years ago

Thanks for the confirmation, sounds very interesting.

juanrojochacon commented 4 years ago

Indeed for the polarised case we know that lattice calculations should have an impact more easily than in the better known unpolarised case

tgiani commented 4 years ago

I've implemented COMPASS_P and COMPASS_P15. They are in the branch called "polarized"; all the data in the table above are going to be implemented in this same branch. Some doubts:

1. On HEPData the data are delivered both in (x, Q^2) bins and in x bins averaged over Q^2. I've implemented the first case, (x, Q^2) bins. Is that fine?
2. For these data, as far as I can tell, only one total systematic is provided, so I'm implementing it as UNCORR.

tgiani commented 4 years ago

Concerning point 1, the answer is probably "no" (at least looking at the old DIS fit paper, where I can see only 15 points for COMPASS, against the 44 I have from the table with (x, Q^2) bins).

enocera commented 4 years ago
> 1. On HEPData the data are delivered both in (x, Q^2) bins and in x bins averaged over Q^2. I've implemented the first case, (x, Q^2) bins. Is that fine?

Yes, please forget about the other format.

> 2. For these data, as far as I can tell, only one total systematic is provided, so I'm implementing it as UNCORR.

Yes. As you'll notice, 95% of the data come with one uncorrelated total systematic.
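For a dataset whose only systematic is an uncorrelated total one, the experimental covariance matrix has no off-diagonal contributions: statistical and systematic uncertainties simply add in quadrature on the diagonal. A minimal sketch (illustrative only; the arrays are made-up numbers, not buildmaster's API):

```python
import numpy as np

# Hypothetical per-point uncertainties for a 3-point dataset with a
# single UNCORR total systematic.
stat = np.array([0.012, 0.015, 0.020])        # statistical errors
sys_uncorr = np.array([0.008, 0.009, 0.011])  # total systematic errors

# Uncorrelated errors contribute only to the diagonal, in quadrature.
cov = np.diag(stat**2 + sys_uncorr**2)
```

A correlated systematic would instead add a rank-one term `np.outer(s, s)` to the full matrix, which is why the UNCORR/CORR distinction matters.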

tgiani commented 4 years ago

Another question: when implementing a DIS experiment in buildmaster, according to the documentation the third kinematic variable should be y, whose textbook definition is y = (p·q)/(k·p), where p, q, k are the four-momenta of the target, of the intermediate vector boson and of the incoming particle, respectively. Then in the rest frame of the target one should have y = Q^2/(2 x m k_0), where Q^2 is the virtuality of the intermediate vector boson, x is the Bjorken variable, m is the target mass and k_0 is the energy of the incoming beam. Is this what I'm supposed to assign to kin[3]?

enocera commented 4 years ago

Yes, it is. As far as I can tell, kin[3] (in DIS) is used for plotting purposes only; it's not crucial, e.g., for FK-table generation or elsewhere in the fit.
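The target-rest-frame formula above is straightforward to evaluate from the tabulated kinematics. A small sketch (function and variable names are illustrative, not buildmaster's interface):

```python
def inelasticity(Q2, x, m_target, E_beam):
    """y = Q^2 / (2 x m k_0) in the target rest frame.

    Q2       -- virtuality of the exchanged boson [GeV^2]
    x        -- Bjorken variable
    m_target -- target mass m [GeV]
    E_beam   -- incoming beam energy k_0 [GeV]
    """
    return Q2 / (2.0 * x * m_target * E_beam)

# Example kinematics: Q^2 = 3 GeV^2, x = 0.1, proton target,
# 160 GeV muon beam (COMPASS-like numbers, chosen for illustration).
y = inelasticity(3.0, 0.1, 0.938, 160.0)
```

For physical DIS kinematics the result must satisfy 0 < y < 1, which is a useful sanity check when filling kin[3].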

tgiani commented 4 years ago

Also, regarding JLAB-E93-009: on HEPData there are 306 tables, with data corresponding to g1 and A1 for proton and deuteron targets, with beam energies of 1.6 and 5.7 GeV, given at many different values of the final-state invariant mass W (up to 3 GeV). Which ones should I be looking at? Just the ones with the higher values of W?

enocera commented 4 years ago

You can ignore all the tables with Q^2 < 1 GeV^2 or W < 2 GeV. But please implement both p and d (and both g1 and F1).
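Selecting which tables to keep amounts to a cut on the tabulated kinematics. A sketch of the filter (the dict layout and table IDs are hypothetical placeholders):

```python
# Hypothetical HEPData table metadata: id plus representative kinematics.
tables = [
    {"id": 45, "Q2": 0.8, "W": 1.9},  # fails both cuts -> dropped
    {"id": 46, "Q2": 1.2, "W": 2.1},  # passes -> kept
    {"id": 47, "Q2": 1.5, "W": 2.5},  # passes -> kept
]

# Keep only tables with Q^2 > 1 GeV^2 and W > 2 GeV
# (i.e. drop anything failing either cut).
kept = [t for t in tables if t["Q2"] > 1.0 and t["W"] > 2.0]
```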

tgiani commented 4 years ago

Ok, so just to be sure I'm getting this right: for JLAB-E93-009 there are data for A1 and the ratio g1/A1 for both proton and deuteron targets. There are 306 tables in total, each given at a fixed value of W. For the proton target, the tables with W > 2 GeV and Q^2 > 1 GeV^2 are Tabs. 46, ..., 67, for a total of 22 tables, while for the deuteron target we have Tabs. 227, ..., 296, for a total of 80 tables. Can you please confirm you want me to implement the data from these 80 + 22 tables? Also, how should I do the implementation? Should I consider a single dataset for all the proton tables (so a single dataset containing 22 different W values) and another one for all the deuteron data (so another single dataset containing 80 different W values)? Or should I implement 80 + 22 different datasets, each one with a specific W value? Thanks

enocera commented 4 years ago

> Ok, so just to be sure I'm getting this right: for JLAB-E93-009 there are data for A1 and the ratio g1/A1

Isn't it the ratio g1/F1?

> for both proton and deuteron targets. There are 306 tables in total, each given at a fixed value of W. For the proton target, the tables with W > 2 GeV and Q^2 > 1 GeV^2 are Tabs. 46, ..., 67, for a total of 22 tables, while for the deuteron target we have Tabs. 227, ..., 296, for a total of 80 tables. Can you please confirm you want me to implement the data from these 80 + 22 tables?

Yes. Most of these data will probably be removed by our default kinematic cuts; however, since we are revisiting everything, it'd be better to have all these tables implemented for future studies, when we might want to investigate the dependence upon the W cut.

> Also, how should I do the implementation? Should I consider a single dataset for all the proton tables (so a single dataset containing 22 different W values) and another one for all the deuteron data (so another single dataset containing 80 different W values)? Or should I implement 80 + 22 different datasets, each one with a specific W value? Thanks

You should just have two datasets (one for the proton and one for the deuteron).
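Building one dataset per target then reduces to concatenating the per-W table slices. A sketch with placeholder data (the tuple layout and values are invented for illustration):

```python
# Hypothetical per-table data points: (W, x, Q2, value, error),
# keyed by HEPData table number for the proton target.
proton_tables = {
    46: [(2.1, 0.15, 1.2, 0.30, 0.05)],
    47: [(2.3, 0.20, 1.6, 0.35, 0.06)],
}

# One dataset per target: concatenate all W slices in table order.
proton_dataset = [
    point for tab in sorted(proton_tables) for point in proton_tables[tab]
]
```

The deuteron dataset would be assembled the same way from its 80 tables.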

tgiani commented 4 years ago

Yes, sorry, g1/F1 of course. Ok, thanks.

tgiani commented 4 years ago

Another doubt: in some experiments there are data for A1 and g1 of the target itself, so for example of 3He. From these data, the values of A1 and g1 for the neutron are obtained (see for example JLAB-E99-117, arXiv:0405006, Sect. 3.F). Should I implement just the data referring to the neutron, or also those for 3He? On HEPData there are both.

enocera commented 4 years ago

Neutron only.

tgiani commented 4 years ago

Regarding E143: considering for example A1^p, there are four possible tables:

* in the first one there are averaged values of A1^p in `(x,Q^2)` bins, with `Q > 1 GeV`, corresponding to Table 13 of the paper
* in the other three there are detailed values of A1^p, with more values of `Q`, also lower than 1 GeV, corresponding to Table 10 of the paper

The same applies to the other observables presented here. Which tables should I implement? The first one or the other three?

tgiani commented 4 years ago

A similar doubt for the unnamed experiment after E154 (how should I call it?): there are data coming from two different spectrometers, corresponding to Table 3 in the paper. These are then combined in a single table, corresponding to Table 4 of the paper. Again, which tables should I look at?

enocera commented 4 years ago

> Regarding E143: considering for example A1^p, there are four possible tables:
>
> * in the first one there are averaged values of A1^p in `(x,Q^2)` bins, with `Q > 1 GeV`, corresponding to Table 13 of the paper
> * in the other three there are detailed values of A1^p, with more values of `Q`, also lower than 1 GeV, corresponding to Table 10 of the paper

Please implement Table XIII (proton and neutron A1).

> The same applies to the other observables presented here. Which tables should I implement? The first one or the other three?

Only Table XIII (neutron and proton).

enocera commented 4 years ago

> A similar doubt for the unnamed experiment after E154 (how should I call it?): there are data coming from two different spectrometers, corresponding to Table 3 in the paper. These are then combined in a single table, corresponding to Table 4 of the paper. Again, which tables should I look at?

The experiment is only one, and should be named E154. You should implement Table 4 in Phys.Lett. B405 (1997) 180, which is equivalent (up to rounding errors) to Table I in Phys.Rev.Lett. 79 (1997) 26.

tgiani commented 4 years ago

Regarding HERMES: in https://journals.aps.org/prd/pdf/10.1103/PhysRevD.75.012007 the results are presented with three different binnings. The first has 45 bins in (x, Q^2), the second 19 and the third 15. The second and the third are obtained from the first by averaging over some of the Q^2 values. Which case should I implement? The first one, with 45 bins?

tgiani commented 4 years ago

Btw, this is the only one still missing; once it is done, all the experiments will have been implemented.