TopEFT / topeft


Correction lib integration for MUO, EGM, BTV, LUM, TAU POGs in topeft repo #424

Closed anpicci closed 2 months ago

anpicci commented 4 months ago

This PR is part of issue #393.

Run2 SFs and systematics from the MUO, EGM, BTV, LUM, and TAU POGs (except for the Rochester corrections and the tau energy scale) are integrated and can easily be extended to Run3 datasets.

JME corrections and systematics will be implemented in a separate PR.

anpicci commented 4 months ago

@bryates There are some conflicts because I worked with the old analysis_processor version, where we used hist instead of HistEFT. Should I resolve the conflicts by hand, updating analysis_processor to the new version, and then produce some pkl files?

bryates commented 4 months ago

> @bryates There are some conflicts because I worked with the old analysis_processor version, where we used hist instead of HistEFT. Should I resolve the conflicts by hand, updating analysis_processor to the new version, and then produce some pkl files?

Yes, please do that. If you merge in the latest master changes, you can run git mergetool, or just edit each file.

bryates commented 4 months ago

Thanks @anpicci. I did a quick pass, and it's mostly just unused/commented code.

Please also fix the CI lint errors: https://github.com/TopEFT/topeft/actions/runs/9877629748/job/27279701307?pr=424#step:4:34

bryates commented 4 months ago

@anpicci I updated the CI to checkout correction_lib_integration in TopCoffea. We'll have to remove this once that PR is merged.

bryates commented 4 months ago

> @anpicci I updated the CI to checkout correction_lib_integration in TopCoffea. We'll have to remove this once that PR is merged.

Looks like it also relies on the tau SFs branch https://github.com/TopEFT/topeft/actions/runs/9894611635/job/27332518943?pr=424#step:12:128 Can we disentangle these two PRs?

bryates commented 4 months ago

@anpicci on a local test, I'm seeing these differences

    3l_m_offZ_1b
        1.9671919472782152e-05 -> NOTE: This is larger than tolerance (1e-05)!
    3l_m_offZ_2b
        2.490733390872451e-05 -> NOTE: This is larger than tolerance (1e-05)!
    3l_onZ_1b
        -5.526484802731148e-07
    3l_onZ_2b
        4.4347203604629454e-05 -> NOTE: This is larger than tolerance (1e-05)!
    3l_p_offZ_1b
        1.9286771668061767e-05 -> NOTE: This is larger than tolerance (1e-05)!
    3l_p_offZ_2b
        3.25529919424271e-05 -> NOTE: This is larger than tolerance (1e-05)!

We'll need to understand if these are expected.
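For reference, the tolerance check being applied here can be sketched as standalone Python (a hypothetical helper, not the repo's actual test code; the values are copied from the output above):

```python
# Hypothetical sketch of the yield-comparison tolerance check;
# the real logic lives in the topeft test suite.
TOLERANCE = 1e-05

def flag_large_diffs(percent_diffs, tol=TOLERANCE):
    """Return only the categories whose |percent diff| exceeds tol."""
    return {cat: d for cat, d in percent_diffs.items() if abs(d) > tol}

# A few values copied from the local test output above.
diffs = {
    "3l_m_offZ_1b": 1.9671919472782152e-05,
    "3l_onZ_1b": -5.526484802731148e-07,
    "3l_onZ_2b": 4.4347203604629454e-05,
}
flagged = flag_large_diffs(diffs)
# 3l_onZ_1b stays below tolerance; the other two are flagged.
```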

bryates commented 4 months ago

Sorry for the mess in the commit history. I was trying to get the CI to rebase with the TauSF_update branch, but it couldn't handle it, so I reverted the changes.

anpicci commented 4 months ago

@bryates, ok. When I compared the individual SFs, the agreement was at a smaller order of magnitude. If you like, I can rerun your local tests to track down the origin of this discrepancy.

bryates commented 4 months ago

> @bryates, ok. When I compared the individual SFs, the agreement was at a smaller order of magnitude. If you like, I can rerun your local tests to track down the origin of this discrepancy.

@anpicci can you try these commands:

    python -i tests/test_futures.py
    test_topcoffea()
    test_nonprompt()

followed by

    python -i tests/test_yields.py
    test_make_yields_after_processor()
    test_compare_yields_after_processor()

anpicci commented 4 months ago

@bryates I got this error message:

    Traceback (most recent call last):
      File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/uproot/source/file.py", line 112, in _open
        self._file = numpy.memmap(self._file_path, dtype=self._dtype, mode="r")
      File "/afs/crc.nd.edu/user/a/apiccine/miniconda3/envs/clib-env/lib/python3.9/site-packages/numpy/core/memmap.py", line 228, in __new__
        f_ctx = open(os_fspath(filename), ('r' if mode == 'c' else mode)+'b')
    FileNotFoundError: [Errno 2] No such file or directory: 'ttHJet_UL17_R1B14_NAOD-00000_10194_NDSkim.root'

Should I copy it locally? And from where?

bryates commented 4 months ago

> @bryates I got this error message: [...] FileNotFoundError: [Errno 2] No such file or directory: 'ttHJet_UL17_R1B14_NAOD-00000_10194_NDSkim.root'
>
> Should I copy it locally? And from where?

Yes, I forgot to mention that. Here's the command from the CI:

    wget --no-verbose http://www.crc.nd.edu/~kmohrman/files/root_files/for_ci/ttHJet_UL17_R1B14_NAOD-00000_10194_NDSkim.root

anpicci commented 4 months ago

@bryates for completeness, I have tried to run these validations with the original correction.py version from master, and I get this message:

--- Percent diff between New yields and Ref yields ---

ttH
    2lss_p
    -7.394431781394749e-07
    2lss_m
    -3.396777884544722e-07
    2lss_4t_p
    -1.9873077706415647e-07
    2lss_4t_m
    -2.7204067218577144e-06
    3l_p_offZ_1b
    -8.510618289643899e-07
    3l_m_offZ_1b
    -9.548811077347536e-07
    3l_p_offZ_2b
    -9.373651329031803e-07
    3l_m_offZ_2b
    -5.783790474254879e-07
    3l_onZ_1b
    -5.529865063293836e-07
    3l_onZ_2b
    1.1918329261222232e-06
    4l
    -1.2778734472866508e-07
    2lss_CRflip
    -8.362561083229537e-07
    2lss_CR
    8.356323487057773e-08
    3l_CR
    -3.3995245031180947e-07
    2los_CRtt
    -2.777029075991915e-06
    2los_CRZ
    -1.4472256824235e-06

Even though they are not classified as worrying, are they expected?

bryates commented 4 months ago

> @bryates for completeness, I have tried to run these validations with the original correction.py version from master, and I get this message: [...] Even though they are not classified as worrying, are they expected?

Yes, our tolerance is 1e-5, and anything below that is normal. The processor has a bit of randomness, plus the finite precision of floats, which accounts for this.
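The float-precision part of this is easy to demonstrate in isolation: adding the same numbers in a different order typically shifts the total at the ~1e-16 relative level, orders of magnitude below the 1e-5 tolerance (a generic illustration, not the processor's actual arithmetic):

```python
import random

# Same "event weights", summed in two different orders.
random.seed(42)
weights = [random.uniform(0.9, 1.1) for _ in range(100_000)]
total_a = sum(weights)
total_b = sum(sorted(weights))

# Float addition is not associative, so the totals usually differ,
# but only at machine precision, well below the 1e-5 tolerance.
rel_diff = abs(total_a - total_b) / total_a
assert rel_diff < 1e-05
```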

anpicci commented 4 months ago

@bryates I have found that the source of the discrepancy is loose_sf in AttachMuon, i.e. the only muon SF I ported to correction-lib, and only for the 2016 datasets (both periods). For instance:

    old loose_sf [[0.9996537901739644], [0.997974218508274], [], [], [], [], [0.9986176551130582], [0.9987787678431119], [], [0.997974218508274], [], [1.0000481207016916], [0.9994687487741707], [], [0.9984503798760491], [0.9984503798760491], [0.9984988974466332, 0.997974218508274], [], [], [0.9995394426459682]]
    new loose_sf [[0.999065638173131], [0.998169222364488], [], [], [], [], [0.997993752725458], [1.0], [], [1.0], [], [0.9995262787548512], [0.9991171655491334], [], [0.9975693595763792], [0.9975693595763792], [0.9982901624835292, 1.0], [], [], [0.9990074140906974]]
    dif loose_sf [[-0.0005881520008333974], [0.00019500385621407101], [], [], [], [], [-0.0006239023876002392], [0.0012212321568880746], [], [0.002025781491726053], [], [-0.0005218419468403512], [-0.00035158322503725525], [], [-0.0008810202996699656], [-0.0008810202996699656], [-0.00020873496310402917, 0.002025781491726053], [], [], [-0.0005320285552707915]]

For 2018:

    old loose_sf [[0.9996575188838563], [], [0.9995494401254235, 0.9993639949792276], [], [0.9989988042849607], [], [], [0.9989248158564112], [], [0.9989988042849607, 0.9989988042849607], [], [], [], [0.9989988042849607], [], [], [], [0.9994704955190001], [], []]
    new loose_sf [[0.9996575188838565], [], [0.9995494401254236, 0.9993639949792276], [], [0.9989988042849608], [], [], [0.9989248158564112], [], [0.9989988042849608, 0.9989988042849608], [], [], [], [0.9989988042849608], [], [], [], [0.999470495519], [], []]
    dif loose_sf [[1.1102230246251565e-16], [], [1.1102230246251565e-16, 0.0], [], [1.1102230246251565e-16], [], [], [0.0], [], [1.1102230246251565e-16, 1.1102230246251565e-16], [], [], [], [1.1102230246251565e-16], [], [], [], [-1.1102230246251565e-16], [], []]

If you want, I can further investigate why this happens for 2016

EDIT: I have found that there are also a few events in 2017 and 2018 where the SF values differ significantly, leading to the discrepancy observed in the CI checks.
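The jagged old-vs-new comparison above can be reproduced with plain Python (the helper is a sketch; the values are the first few 2018 events quoted in this comment):

```python
def jagged_diff(old, new):
    """Elementwise new - old for jagged (per-event) lists of SFs."""
    return [[n - o for o, n in zip(evt_old, evt_new)]
            for evt_old, evt_new in zip(old, new)]

# First three 2018 events quoted above.
old = [[0.9996575188838563], [], [0.9995494401254235, 0.9993639949792276]]
new = [[0.9996575188838565], [], [0.9995494401254236, 0.9993639949792276]]

diffs = jagged_diff(old, new)
max_abs = max((abs(d) for evt in diffs for d in evt), default=0.0)
# For 2018 the largest difference sits at machine precision
# (~1e-16), unlike the 2016 events, where it reaches ~1e-3.
```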

bryates commented 4 months ago

> @bryates I have found that the source of the discrepancy is loose_sf in AttachMuon [...]
>
> EDIT: I have found that there are also a few events in 2017 and 2018 where the SF values differ significantly, leading to the discrepancy observed in the CI checks.

Ok, thanks for looking into this! If we can track down why these changed, and confirm it's due to the new methods, then I think we should update the reference json files. Would that be a difficult check to do? Maybe the TOP MUO contact would know?

anpicci commented 4 months ago

Yes, sure! Consider that, in principle, the SFs for muon reconstruction provided with correction-lib are also different from the ones used in TOP-22-006, since it seems that the MUO POG changed the binning and also the pt interval of validity. If we contact the TOP MUO contact, we might want to cross-check what's happening with these other SFs as well. What do you think?

bryates commented 4 months ago

> Yes, sure! Consider that, in principle, the SFs for muon reconstruction provided with correction-lib are also different from the ones used in TOP-22-006, since it seems that the MUO POG changed the binning and also the pt interval of validity. If we contact the TOP MUO contact, we might want to cross-check what's happening with these other SFs as well. What do you think?

Ok, if we already know they changed, then maybe that's all we need. If possible, could you try these new SFs with our current method? If that gives the same answers as this check, then we're done. The README also details how to update our reference files. Once this last check is done, please follow that.

anpicci commented 3 months ago

@bryates now it should be all set, in terms of CI checks

bryates commented 3 months ago

> @bryates now it should be all set, in terms of CI checks

Great! Thanks for fixing this. So I think this PR is done, modulo the checks you're doing for the TopCoffea one. Once that is done we should be able to merge both.

anpicci commented 3 months ago

Sounds good! Thank you for helping all the time :D

anpicci commented 3 months ago

Here is a comparison between running the fit with the current master branch, and running it with the committed branch containing the correction-lib implementation.

| Coefficient | TOP-22-006 best fit | TOP-22-006 1σ range | TOP-22-006 2σ range | New branch best fit | New branch 1σ range | New branch 2σ range | New/TOP-22-006 2σ range ratio |
|---|---|---|---|---|---|---|---|
| ctlT | 0.01 | [-0.19, 0.19] | [-0.37, 0.37] | 0.01 | [-0.19, 0.19] | [-0.38, 0.38] | 1.005 |
| ctlS | 0.05 | [-1.31, 1.31] | [-2.61, 2.61] | -0.05 | [-1.31, 1.31] | [-2.62, 2.63] | 1.005 |
| cte | 0.28 | [-0.89, 1.11] | [-1.77, 2.22] | 0.28 | [-0.89, 1.12] | [-1.78, 2.23] | 1.006 |
| ctl | 0.28 | [-0.89, 1.05] | [-1.79, 2.11] | 0.2 | [-0.90, 1.06] | [-1.81, 2.12] | 1.008 |
| cQe | 0.04 | [-0.95, 0.98] | [-1.90, 1.96] | 0.04 | [-0.96, 0.98] | [-1.92, 1.97] | 1.007 |
| cQlM | 0.52 | [-0.79, 1.14] | [-1.58, 2.28] | 0.52 | [-0.80, 1.15] | [-1.60, 2.29] | 1.006 |
| cQla | -0.38 | [-1.41, 1.28] | [-2.83, 2.57] | -0.38 | [-1.42, 1.28] | [-2.84, 2.57] | 1.002 |
| cpt | -4.05 | [-5.32, 3.84] | [-10.65, 7.84] | -4.05 | [-5.28, 3.78] | [-10.57, 8.10] | 1.009 |
| cptb | -0.09 | [-1.64, 1.64] | [-3.27, 3.27] | 0.09 | [-1.65, 1.65] | [-3.29, 3.29] | 1.005 |
| cpQa | 0.6 | [-0.44, 0.99] | [-0.88, 1.98] | 0.6 | [-0.43, 1.00] | [-0.87, 1.99] | 1 |
| cbW | 0.03 | [-0.38, 0.38] | [-0.76, 0.76] | -0 | [-0.39, 0.39] | [-0.77, 0.76] | 1.007 |
| ctG | -0.02 | [-0.14, 0.12] | [-0.28, 0.24] | -0.02 | [-0.14, 0.12] | [-0.28, 0.24] | 1.008 |
| cpQM | -2.58 | [-3.02, 4.05] | [-6.04, 8.10] | -2.58 | [-2.98, 4.33] | [-5.97, 8.65] | 1.033 |
| ctp | -3.72 | [-4.43, 1.34] | [-8.86, 2.68] | -3.52 | [-4.30, 1.52] | [-8.60, 3.04] | 1.008 |
| ctZ | -0.02 | [-0.36, 0.32] | [-0.71, 0.64] | -0.02 | [-0.36, 0.33] | [-0.72, 0.65] | 1.012 |
| ctW | -0.05 | [-0.28, 0.23] | [-0.55, 0.46] | -0.05 | [-0.28, 0.23] | [-0.55, 0.46] | 1.005 |
| cQt1 | 0.36 | [-1.17, 1.14] | [-2.35, 2.29] | 0.36 | [-1.17, 1.14] | [-2.34, 2.27] | 0.993 |
| cQt8 | 2.16 | [-2.20, 2.50] | [-4.40, 5.00] | 2 | [-2.18, 2.48] | [-4.37, 4.96] | 0.992 |
| cQQ1 | 0.75 | [-1.29, 1.43] | [-2.58, 2.86] | 0.75 | [-1.28, 1.42] | [-2.57, 2.84] | 0.995 |
| ctt1 | -0.03 | [-0.67, 0.70] | [-1.34, 1.39] | -0.03 | [-0.67, 0.69] | [-1.33, 1.38] | 0.994 |
| ctq8 | -0.21 | [-0.34, 0.13] | [-0.68, 0.25] | -0.21 | [-0.34, 0.13] | [-0.68, 0.25] | 1.003 |
| cQq81 | -0.24 | [-0.34, 0.11] | [-0.68, 0.22] | -0.24 | [-0.34, 0.11] | [-0.68, 0.22] | 1.002 |
| ctq1 | -0.01 | [-0.11, 0.11] | [-0.21, 0.21] | -0.01 | [-0.11, 0.11] | [-0.21, 0.21] | 1.005 |
| cQq11 | 0.01 | [-0.09, 0.09] | [-0.19, 0.19] | -0 | [-0.09, 0.09] | [-0.19, 0.19] | 1.003 |
| cQq8a | -0.01 | [-0.09, 0.08] | [-0.17, 0.16] | -0.01 | [-0.09, 0.08] | [-0.17, 0.16] | 1.003 |
| cQq1a | -0 | [-0.04, 0.03] | [-0.08, 0.07] | -0 | [-0.04, 0.03] | [-0.08, 0.07] | 1 |
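The last column can be sanity-checked directly: it is (to rounding) the ratio of the two-sigma interval widths. Recomputing it from the rounded limits in the table only approximately reproduces the quoted values, which were presumably derived from the unrounded limits:

```python
def two_sigma_width_ratio(old_lo, old_hi, new_lo, new_hi):
    """Width of the new 2-sigma interval over the width of the old one."""
    return (new_hi - new_lo) / (old_hi - old_lo)

# cpQM, the largest shift in the table above.
ratio = two_sigma_width_ratio(-6.04, 8.10, -5.97, 8.65)
# Roughly 1.034 from the rounded limits, vs the quoted 1.033.
```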
bryates commented 3 months ago

> Here is a comparison between running the fit with the current master branch, and running it with the committed branch containing the correction-lib implementation. [...]

Thanks @anpicci! This shows everything is consistent within the randomness we've observed in the past. Remind me, the "master branch" here is still using newer json files, right? Or is this test showing any new corrections have a negligible impact on the yields?

anpicci commented 3 months ago

Hi @bryates, the master branch is still using the TOP-22-006 setup after the HistEFT implementation, so nothing from correction-lib, if I understood your question correctly. As a result, I would say this test, together with the previous checks we ran above, shows that any new corrections have a negligible impact on the yields.

bryates commented 3 months ago

> Hi @bryates, the master branch is still using the TOP-22-006 setup after the HistEFT implementation, so nothing from correction-lib, if I understood your question correctly. As a result, I would say this test, together with the previous checks we ran above, shows that any new corrections have a negligible impact on the yields.

Great! @kmohrman do you have any other tests you'd like to see before we merge this PR?

kmohrman commented 3 months ago

Sounds like these validation checks were pretty thorough. If you guys want to merge this PR that seems fine to me.

However, this PR depends on the associated topcoffea PR 35, right?

I'd like to take another look at that topcoffea PR before it gets merged. I seem to recall there was some open discussion about the implementation of some of the systematics (and harmonization of the way that @anpicci and @MatthewDittrich were doing things) that was underway before things were paused for this topeft validation. So I would like to do another review of that topcoffea PR this week.

In the meantime, if you guys want to merge this topeft PR now (and for your local setups just locally check out the appropriate topcoffea branch while we iterate on its PR), I guess that would be possible. Or if you want to hold off on merging this topeft PR until the topcoffea PR is merged, that also seems like a reasonable option.

anpicci commented 3 months ago

@kmohrman that's correct. In that PR everything should have been addressed to harmonize @MatthewDittrich's approach for Run3 b-tagging SFs with ours for Run2, in terms of how the json files are retrieved depending on the year, but I would welcome a review of that.

kmohrman commented 3 months ago

> @kmohrman that's correct. In that PR everything should have been addressed to harmonize @MatthewDittrich's approach for Run3 b-tagging SFs with ours for Run2, in terms of how the json files are retrieved depending on the year, but I would welcome a review of that.

Ok, thank you! Matthew and I will review it this week.

codecov[bot] commented 3 months ago

Codecov Report

Attention: Patch coverage is 1.81818% with 54 lines in your changes missing coverage. Please review.

Project coverage is 26.58%. Comparing base (9722d71) to head (faff66a). Report is 23 commits behind head on master.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| analysis/topeft_run2/analysis_processor.py | 1.81% | 54 Missing :warning: |

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #424      +/-   ##
==========================================
- Coverage   26.83%   26.58%   -0.26%
==========================================
  Files          31       31
  Lines        4572     4612      +40
==========================================
- Hits         1227     1226       -1
- Misses       3345     3386      +41
```

| [Flag](https://app.codecov.io/gh/TopEFT/topeft/pull/424/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT) | Coverage Δ | |
|---|---|---|
| [unittests](https://app.codecov.io/gh/TopEFT/topeft/pull/424/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT) | `26.58% <1.81%> (-0.26%)` | :arrow_down: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=TopEFT#carryforward-flags-in-the-pull-request-comment) to find out more.


bryates commented 2 months ago

Now that the TopCoffea PR was merged, we should merge this as well. I already removed the line in the CI that checked out the now merged branch. @kmohrman any last comments on this?

kmohrman commented 2 months ago

> Now that the TopCoffea PR was merged, we should merge this as well. I already removed the line in the CI that checked out the now merged branch. @kmohrman any last comments on this?

Well, this topeft repo is not really within my purview anymore, but thanks for doing the check I'd suggested anyway! That comparison (of results on the master branch and on this branch) was the main thing that seemed worth checking, so I'm glad you guys verified it looks ok. Otherwise, while I can't claim I've looked at this PR in detail, I'd say if you guys are confident, then merge away!

anpicci commented 2 months ago

@kmohrman @bryates thank you again for merging the PR while I was away! I have just restored my branch here as well, since I'm using it for the JME integration.