tgiani closed this pull request 6 months ago.
Attention: Patch coverage is 0% with 72 lines in your changes missing coverage. Please review. Project coverage is 41.95%. Comparing base (2f13519) to head (46ff994). Report is 11 commits behind head on main.
:exclamation: Current head 46ff994 differs from pull request most recent head 9434bc3. Consider uploading reports for the commit 9434bc3 to get more accurate results.
@jacoterh @giacomomagni There is still some cleaning to be done and I should add some tests and update the docs, but if you have time please start having a look.
The idea is that you have a new runcard, `runcards/projection.yaml`, where you specify the info needed to build the projection. Then you can run
`smefit PROJ runcards/projection.yaml -r 0.1`
where `0.1` is the factor used to reduce the statistical error. In the runcard you have to specify the datasets for which you want to build projections and the values of the Wilson coefficients to use for the central values (in case you want to build central values which are not SM-like). The code will create a new folder, `projection`, where the new dataset is saved.
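For illustration, a minimal sketch of what such a runcard could contain — the field and dataset names below are hypothetical, not necessarily the actual smefit schema:

```yaml
# runcards/projection.yaml -- hypothetical sketch, names are illustrative
datasets:                 # datasets to build projections for
  - ATLAS_tt_13TeV_ljets
  - CMS_tt_13TeV_dilep
coefficients:             # optional: Wilson coefficients for non-SM-like central values
  OtG: 0.5
  Otp: -1.0
```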
Hi @tgiani, thanks a lot for this. The code runs for me, so that's great! Two questions:
Hi @jacoterh concerning your questions:
Experiments which only provide the full covariance matrix, and do not provide a separation between statistical and systematic errors, are not amenable to projections. So we can just forget about them. We can only apply our projection strategy to datasets that provide an explicit breakdown between systematic and statistical errors.
The pseudo-data is generated assuming the SM. For this reason, the central value of the generated pseudo-data will differ from that of the original measurements: first because it is based on theory, and second because a layer of statistical fluctuations is added on top. @tgiani can confirm!
Hi @jacoterh 1) Just as @juanrojochacon said. If you use the code with one of these datasets you'll just get back something which again has a statistical uncertainty equal to 0 (since it is included in the systematic part, as you said). 2) Again as @juanrojochacon said. The central values are by default generated using the SM prediction in the theory tables, so if you set all the WCs to 0 (or don't specify them in the runcard) you get the SM. Which dataset have you used to test?
Thanks @juanrojochacon and @tgiani, that clarifies my questions! I do retrieve the SM predictions with all WCs to zero and in the absence of stat. unc. I just did not realise soon enough that it generates the pseudo data starting from the theory. All good!
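To summarise the exchange above, here is a minimal sketch of the pseudo-data generation as described — the names are hypothetical, not the actual smefit implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def make_pseudo_data(theory_sm, stat_err, reduction_factor=0.1):
    """Hypothetical sketch: central values start from the SM theory
    prediction, with (reduced) statistical fluctuations layered on top."""
    reduced_err = reduction_factor * np.asarray(stat_err)
    return np.asarray(theory_sm) + rng.normal(0.0, reduced_err)
```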
Some suggestions that came to mind while working with a scaled-up version:

1. `-r 0.1` should be updated to handle non-uniform rescalings. At the HL-LHC the luminosity is 3 ab^-1 for all datasets, while not all datasets are taken at the same luminosity before the projection. So we need different rescaling factors depending on the original luminosity.
2. `smefit PROJ -lumi <x>`: the rescaling factor used in 1 would then be computed on the fly and be dataset specific (see the sketch below).
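A minimal sketch of how such a dataset-specific factor could be computed, assuming the statistical errors scale as 1/sqrt(L) — the function and variable names are hypothetical, not part of smefit:

```python
import math

HL_LHC_LUMI_FB = 3000.0  # target HL-LHC luminosity, 3 ab^-1 in fb^-1

def stat_rescaling_factor(original_lumi_fb, target_lumi_fb=HL_LHC_LUMI_FB):
    """Hypothetical sketch: factor multiplying the statistical errors of a
    dataset taken at `original_lumi_fb` when projecting to `target_lumi_fb`,
    assuming stat errors scale as 1/sqrt(L)."""
    return math.sqrt(original_lumi_fb / target_lumi_fb)

# e.g. a Run 2 dataset at 139 fb^-1 projected to the HL-LHC:
# sqrt(139 / 3000) ~ 0.215, i.e. stat errors shrink by roughly a factor of 5
print(stat_rescaling_factor(139.0))
```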
Ready to merge @tgiani @giacomomagni
looks good to me
This PR should allow the user to create projections starting from an existing dataset. The central value should be given either by the SM prediction or by the SM plus a set of EFT corrections specified by the user.