**Closed**: giacomomagni closed this 2 years ago
> We should use the convention `TYPE_DATASETNAME_OBSTYPE_EXTRA` (where `EXTRA` can be an additional specifier of the dataset) for consistency and easier manipulation. PS: I might have missed some places where this appears in the suggestions below.

Have you checked that this works? I didn't append `matching` at the end because there were some issues, but if you say this is the syntax, I'm fine with it.
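As an aside, the naming convention above can be sketched as a small helper; the function name and signature here are hypothetical, for illustration only:

```python
def make_dataset_name(dtype, dataset, obstype, extra=None):
    """Build a TYPE_DATASETNAME_OBSTYPE_EXTRA name, dropping EXTRA when absent.

    Illustrative helper, not part of the actual codebase.
    """
    parts = [dtype, dataset, obstype]
    if extra:
        parts.append(extra)
    return "_".join(parts)


print(make_dataset_name("MATCH", "NUTEV", "DXDYNUU"))  # MATCH_NUTEV_DXDYNUU
print(make_dataset_name("MATCH", "NUTEV", "DXDYNUU", "MATCHING"))  # MATCH_NUTEV_DXDYNUU_MATCHING
```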
> We should use the convention `TYPE_DATASETNAME_OBSTYPE_EXTRA` (where `EXTRA` can be an additional specifier of the dataset) for consistency and easier manipulation. PS: I might have missed some places where this appears in the suggestions below.
>
> Have you checked that this works? I didn't append `matching` at the end because there were some issues, but if you say this is the syntax, I'm fine with it.

I haven't tried it yet, since I expected that this should not change anything, but let me actually make the change locally.
> I haven't tried it yet, since I expected that this should not change anything, but let me actually make the change locally.

Don't worry, I should have fixed it; thanks in any case.
@RoyStegeman @giacomomagni
Some generic comments:

* `nCTEQ15_20` is used for the `BEBCWA59` datasets.
* The `CHARM` datasets do not currently have the corresponding Yadism matching, given that I was not sure which nPDF to use (we might want to rethink this later).

@RoyStegeman You can implement the part that computes the covmat here:
The prediction files (which contain the predictions for all replicas except `replica_0`) are stored in `commondata/matching/` with names prepended by `MATCH_<DATASET_NAME>`.
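Given that naming scheme, indexing the stored prediction files could be sketched roughly as follows; the exact file layout and extension are assumptions, this is not the project's actual loader:

```python
import glob
import os


def collect_matching_tables(folder):
    """Map each dataset name to the path of its MATCH_* prediction file.

    Illustrative sketch: assumes one file per dataset in `folder`,
    named MATCH_<DATASET_NAME> as described above.
    """
    tables = {}
    for path in sorted(glob.glob(os.path.join(folder, "MATCH_*"))):
        dataset = os.path.basename(path).removeprefix("MATCH_")
        tables[dataset] = path
    return tables
```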
@giacomomagni This PR needs thorough reviews in order to make sure that there are no bugs. In particular, when re-generating the coefficients, the `NUTEV_DXDYNUU` and `NUTEV_DXDYNUB` tables were also modified (I have not pushed them), which they shouldn't have been.
> @RoyStegeman You can implement the part that computes the covmat here:
Thanks, I'll take care of it once @giacomomagni is happy and this PR is closed. I'm currently trying to implement feature scaling to see if that makes a difference.
@giacomomagni There are still some problems with the BC data. I'll notify you when this is solved.
@Radonirinaunimi Okay, I'll review it as carefully as possible. Did you use the `develop` branch of yadism to generate the updated coefficients? For the charm measurement there is not much we can do (except asking Juan for confirmation).
@giacomomagni I believe everything is now fixed in terms of implementation. You can have a go now.
> @Radonirinaunimi Okay, I'll review it as carefully as possible. Did you use the `develop` branch of yadism to generate the updated coefficients?
Actually no (TBH, I totally forgot about this)! This might indeed be the reason why the `NUTEV_` datasets were modified. It would then be good, as a crosscheck, if you could re-generate the files for all `MATCHING` and `PROTONBC` in case there are other discrepancies.
Just as a note, all the `PROTONBC` tables have to be generated at the same time, otherwise the `.info` file will be screwed up:
`nnu data proton_bc <folder_containing_grids>/grids-PROTONBC_*_MATCHING.tar.gz NNPDF40_nnlo_as_01180`
Okay, thanks. So I'll review and update the coefficient files.
@giacomomagni Why have the standard `NUTEV` datasets been updated? The ones in master should be correct, no (at least judging from the data-theory comparisons)?
> @giacomomagni Why have the standard `NUTEV` datasets been updated? The ones in master should be correct, no (at least judging from the data-theory comparisons)?
I'm not 100% sure why they changed. One possibility is that we never fixed them on master: the `data_vs_theory` comparison did not use the coefficient tables...
All the other files were unchanged when I tried to regenerate them.
Looking at the comparison plots (nutev-nuu-master vs nutev-nuu-this-pr, and nutev-nub-master vs nutev-nub-this-pr), it does not seem that anything has changed. Also, comparing the numpy arrays, they all appear to be exactly the same as in master. This means that whatever modification occurred must have happened somewhere within this PR. So, everything is then consistent.
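A check of this kind can be sketched as follows; this is illustrative only, not the actual comparison script used here:

```python
import numpy as np


def tables_match(table_master, table_pr, rtol=1e-8, atol=0.0):
    """Return True when two coefficient/prediction arrays agree within tolerance.

    Illustrative sketch for comparing a table from master against the same
    table regenerated on a branch; tolerance values are an assumption.
    """
    a = np.asarray(table_master, dtype=float)
    b = np.asarray(table_pr, dtype=float)
    # Shape mismatch means the tables cannot be the same object.
    if a.shape != b.shape:
        return False
    return bool(np.allclose(a, b, rtol=rtol, atol=atol))
```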
Okay, should be good for me. Last commits were for:

* [x] reimplementation of the scripts to generate empty matching tables
* [x] fix NUTEV coefficients

Before merging we need:

* [ ] covmat for the matching tables
* [ ] run a fit to see if numbers make sense
@RoyStegeman I'd agree that we should add the part that computes the CovMat in this PR and do some testing before merging (in case there are fundamental bugs).
Sure
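For reference, the covmat computation discussed above could look roughly like this: building the matching covariance matrix from the spread of theory predictions over PDF replicas. This is a sketch under assumed conventions (predictions stacked as replicas x datapoints), not the project's actual implementation:

```python
import numpy as np


def matching_covmat(predictions):
    """Covariance of theory predictions over replicas.

    predictions: array-like of shape (n_replicas, n_datapoints), i.e. one row
    per replica (excluding replica_0). Returns the (n_datapoints, n_datapoints)
    covariance matrix C_ij = <(T_i - <T_i>)(T_j - <T_j>)> over replicas.
    The exact estimator (here np.cov's unbiased 1/(N-1)) is an assumption.
    """
    preds = np.asarray(predictions, dtype=float)
    # rowvar=False: rows are observations (replicas), columns are datapoints.
    return np.cov(preds, rowvar=False)
```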
Merging this given that its main purpose (namely building the framework to generate and fit Yadism-like datasets) is achieved. Issues related to fit stability are/will be investigated in other PRs.
This PR is for the implementation of the matching to theory predictions.