We found that the Run-2 uBDT was trained on tuples whose MuonNShared branch starts from 1, while the PidCalib tuples use probe_Brunel_ANNTraining_MuonNShared, which starts from 0. This mismatch could significantly change the uBDT efficiency performance currently under validation.
Both branches should have been generated with TupleToolANNPIDTraining, yet they somehow differ by a +1 offset.
The consensus with Yipeng and Phoebe is to add this offset to the PidCalib tuples' branch, then re-apply the uBDT to all samples to obtain the correct output scores.
Then we can update the uBDT efficiency plots for validation, and update the h->mu' misID efficiency histograms.
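The planned fix can be sketched as follows: shift the PidCalib branch values by +1 before feeding them to the uBDT. This is a minimal illustration, not the actual analysis code; the branch name comes from the note above, while the list-based interface and the offset constant's name are placeholders for whatever the real pipeline uses.

```python
# Hedged sketch: align the PidCalib MuonNShared convention (starting at 0)
# with the Run-2 uBDT training convention (starting at 1) by adding +1
# before the uBDT is re-applied. The function name and list interface are
# illustrative assumptions, not the actual PidCalib/uBDT API.

OFFSET = 1  # Run-2 uBDT training tuples count MuonNShared from 1


def align_muon_nshared(values):
    """Shift probe_Brunel_ANNTraining_MuonNShared values by the +1 offset."""
    return [v + OFFSET for v in values]


# Example: raw PidCalib values, then the aligned values the uBDT expects.
pidcalib_nshared = [0, 0, 1, 2]
aligned = align_muon_nshared(pidcalib_nshared)
print(aligned)  # [1, 1, 2, 3]
```

Once all branches are aligned this way, the uBDT can be re-applied consistently to both the PidCalib samples and the analysis tuples.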