enocera closed this issue 7 months ago
Great, thanks!
Just for my curiosity @enocera @tgiani are you guys trying to resurrect the NNPDFpol fits with the new code? That would be really great! Or perhaps this is an exercise related to the lattice workshop whitepaper?
Both things.
Thanks for the confirmation, sounds very interesting.
Indeed, for the polarised case we know that lattice calculations should have an impact more easily than in the better-known unpolarised case.
I've implemented COMPASS_P and COMPASS_P15. They are in the branch called "polarized"; all the data in the table above are going to be implemented in this same branch. Some doubts: 1) On HEPData the data are delivered both as (x,Q^2) bins and in x bins averaged over Q^2. I've implemented the first case, (x,Q^2) bins; is it fine? 2) For these data, as far as I can tell, only one total systematic is provided, so I'm implementing it as UNCORR.
Concerning point 1), the answer is probably "no" (at least looking at the old DIS fit paper, where I can see only 15 points for COMPASS, against the 44 I have from the table with (x,Q^2) bins).
> 1. On HEPData the data are delivered both as (x,Q^2) bins and in x bins averaged over Q^2. I've implemented the first case, (x,Q^2) bins; is it fine?
Yes, please forget about the other format.
> 2. For these data, as far as I can tell, only one total systematic is provided, so I'm implementing it as UNCORR.
Yes. As you'll notice, 95% of the data comes with one uncorrelated total systematic.
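For reference, treating the total systematic as UNCORR means it contributes only to the diagonal of the experimental covariance matrix, whereas a CORR systematic would also fill the off-diagonal entries. A minimal NumPy sketch of the two cases (illustration only, not the actual buildmaster code):

```python
import numpy as np

def covmat_uncorr(stat, sys_uncorr):
    """Covariance matrix when the only systematic is a total
    uncorrelated one: statistics and systematics both enter
    the diagonal only."""
    stat = np.asarray(stat, dtype=float)
    sys_uncorr = np.asarray(sys_uncorr, dtype=float)
    return np.diag(stat**2 + sys_uncorr**2)

def covmat_corr(stat, sys_corr):
    """Same systematic treated as fully correlated across points:
    it adds the rank-one block sys_i * sys_j off the diagonal."""
    stat = np.asarray(stat, dtype=float)
    sys_corr = np.asarray(sys_corr, dtype=float)
    return np.diag(stat**2) + np.outer(sys_corr, sys_corr)
```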
Another question: when implementing a DIS experiment in buildmaster, according to the documentation the third kinematic variable should be y, whose textbook definition is

y = (p\cdot q) / (k\cdot p)

where p, q, k are the four-momenta of the target, of the intermediate vector boson and of the incoming particle, respectively. Then in the rest frame of the target one should have

y = Q^2 / (2 x m k_0)

where Q^2 is the virtuality of the intermediate vector boson, x is the Bjorken variable, m is the target mass and k_0 is the energy of the incoming beam. Is this what I'm supposed to assign to kin[3]?
Yes, it is. As far as I can tell kin[3] (in DIS) is used for plotting purposes only; it's not crucial e.g. for FK table generation or elsewhere in the fit.
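The rest-frame relation above is a one-liner; a small sketch (beam energy and kinematics in the example call are made-up numbers for illustration):

```python
def inelasticity(x, q2, m, e_beam):
    """Inelasticity y = Q^2 / (2 x m k_0) in the target rest frame,
    with m the target mass and e_beam (k_0) the beam energy, in GeV."""
    return q2 / (2.0 * x * m * e_beam)

# e.g. a hypothetical fixed-target point:
# inelasticity(x=0.1, q2=10.0, m=0.938, e_beam=160.0)
```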
Also, regarding JLAB-E93-009: on HEPData there are 306 tables, with data corresponding to g1 and A1 for proton and deuteron targets, with beam energies of 1.6 and 5.7 GeV, given at a lot of different values of the final-state invariant mass W (up to 3 GeV). Which ones should I be looking at? Just the ones with the higher values of W?
You can ignore all the tables with Q^2<1 GeV^2 and with W<2 GeV. But please implement both p and d (and both g1 and F1).
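Since the tables are labelled by W, the cut can also be applied directly from (x, Q^2) using the standard relation W^2 = m^2 + Q^2 (1 - x) / x. A sketch of such a filter (the cut values follow the message above; the proton mass is an assumption for the target):

```python
import math

M_TARGET = 0.938  # GeV, proton mass assumed here

def passes_cuts(x, q2, q2_min=1.0, w_min=2.0):
    """Keep only points with Q^2 > q2_min (GeV^2) and W > w_min (GeV),
    where W^2 = m^2 + Q^2 (1 - x) / x is the invariant mass of the
    hadronic final state."""
    w = math.sqrt(M_TARGET**2 + q2 * (1.0 - x) / x)
    return q2 > q2_min and w > w_min
```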
Ok, so just to be sure I'm getting this right: as for JLAB-E93-009, there are data for A1 and the ratio g1/A1 for both proton and deuteron targets. There are 306 tables in total, each of them given at a fixed value of W. For the proton target, the tables with W > 2 GeV and Q2 > 1 GeV2 are Tabs. 46, ..., 67, for a total of 22 tables, while for the deuteron target we have Tabs. 227, ..., 296, for a total of 80 tables. Can you please confirm you want me to implement the data from these 80 + 22 tables?
Also, how should I do the implementation? Should I consider a single dataset for all the proton tables (so a single dataset containing 22 different W values) and another one for all the deuteron data (so another single dataset containing 80 different W values)? Or should I implement 80 + 22 different datasets, each one with a specific W value? Thanks
> Ok, so just to be sure I'm getting this right: as for JLAB-E93-009, there are data for A1 and the ratio g1/A1

Isn't it the ratio g1/F1?

> for both proton and deuteron targets. There are 306 tables in total, each of them given at a fixed value of W. For the proton target, the tables with W > 2 GeV and Q2 > 1 GeV2 are Tabs. 46, ..., 67, for a total of 22 tables, while for the deuteron target we have Tabs. 227, ..., 296, for a total of 80 tables. Can you please confirm you want me to implement the data from these 80 + 22 tables?
Yes. Most of these data will possibly be removed by our default kinematic cuts; however, since we are revisiting everything, it'd be better to have all these tables implemented for future studies, when we might want to investigate the dependence upon the W cut.
> Also, how should I do the implementation? Should I consider a single dataset for all the proton tables (so a single dataset containing 22 different W values) and another one for all the deuteron data (so another single dataset containing 80 different W values)? Or should I implement 80 + 22 different datasets, each one with a specific W value? Thanks
You should just have two datasets (one for the proton and one for the deuteron).
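One simple way to realise the "one dataset per target" layout is to flatten the per-W tables into a single list of data points before writing out the dataset, keeping a record of the table each point came from. A sketch with a hypothetical data-point representation (plain dicts; the actual parsed HEPData structure may differ):

```python
def merge_tables(tables):
    """Flatten a list of per-W tables (each a list of data-point dicts)
    into a single dataset, tagging every point with the index of the
    table it came from so the W value can be traced back."""
    merged = []
    for i, table in enumerate(tables):
        for point in table:
            merged.append({**point, "table": i})
    return merged
```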
Yes, sorry, g1/F1 of course. Ok, thanks.
Another doubt: in some experiments there are data for A1 and g1 of the target itself, so for example of 3He. From these data, the values of A1 and g1 for the neutron are obtained (see for example JLAB-E99-117, arXiv:0405006, Sect. 3.F). Should I implement just the data referring to the neutron? Or also those for 3He? On HEPData there are both of them.
Neutron only.
Regarding E143: considering for example A1^p, there are four possible tables:

* in the first one there are averaged values for A1^p in `(x,Q^2)` bins, with `Q > 1 GeV`, corresponding to Table 13 of the paper
* in the other three there are detailed values for A1^p, with more possible values of `Q`, also lower than 1 GeV, corresponding to Table 10 of the paper

Same applies to the other observables presented here. Which tables should I implement? The first one or the other three?
Similar doubt for the unnamed experiment after E154 (how should I call it?): there are data coming from two different spectrometers, corresponding to Table 3 in the paper. These are then combined in a single table, corresponding to Table 4 of the paper. Again, which tables should I look at?
> Regarding E143: considering for example A1^p, there are four possible tables:
> * in the first one there are averaged values for A1^p in `(x,Q^2)` bins, with `Q > 1 GeV`, corresponding to Table 13 of the paper
> * in the other three there are detailed values for A1^p, with more possible values of `Q`, also lower than 1 GeV, corresponding to Table 10 of the paper
Please implement Table XIII (proton and neutron A1).
> Same applies to the other observables presented here. Which tables should I implement? The first one or the other three?
Only Table XIII (neutron and proton).
> Similar doubt for the unnamed experiment after E154 (how should I call it?): there are data coming from two different spectrometers, corresponding to Table 3 in the paper. These are then combined in a single table, corresponding to Table 4 of the paper. Again, which tables should I look at?
The experiment is only one, and should be named E154. You should implement Table 4 in Phys.Lett. B405 (1997) 180, which is equivalent (up to rounding errors) to Table I in Phys.Rev.Lett. 79 (1997) 26.
Regarding HERMES, in https://journals.aps.org/prd/pdf/10.1103/PhysRevD.75.012007 the results are presented with 3 different binnings.
The first has 45 bins in (x,Q^2), the second 19 and the third 15.
The second and the third are obtained from the first by averaging over some of the Q^2 values. Which case should I implement? The first one with 45 bins?
Btw, this is the only one still missing; after this, all the experiments will have been implemented.
This is a list of inclusive DIS data to be implemented and included in a fit.
Whenever data are available for both the virtual photon asymmetry (A1) and the longitudinal structure function (g1), they should both be implemented. If the longitudinal spin asymmetry (A||) is provided, it should be implemented as well. The transverse spin asymmetry (A⊥ and/or A2) and the transverse structure function (g2) should not be implemented. As far as I recollect, the full experimental covariance matrix is provided only for the HERMES experiment, but this must be checked.
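When a full covariance or correlation matrix is provided, as recalled above for HERMES, the experimental covariance can be built directly from the published correlations instead of assuming UNCORR. A minimal sketch (hypothetical inputs, for illustration):

```python
import numpy as np

def covariance_from_correlations(sigma, rho):
    """Build C_ij = rho_ij * sigma_i * sigma_j from the total
    uncertainties sigma and the published correlation matrix rho."""
    sigma = np.asarray(sigma, dtype=float)
    rho = np.asarray(rho, dtype=float)
    return rho * np.outer(sigma, sigma)
```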