Open swagata87 opened 10 months ago
A new Issue was created by @swagata87 Swagata Mukherjee.
@Dr15Jones, @rappoccio, @smuzaffar, @makortel, @sextonkennedy, @antoniovilela can you please review it and eventually sign/assign? Thanks.
cms-bot commands are listed here
assign hlt
New categories assigned: hlt
@mmusich,@missirol,@Martin-Grunewald you have been requested to review this Pull request/Issue and eventually sign? Thanks
@swagata87 could you please post a recipe to run the menu in the conditions you tested in 13.1.x and 13.3.x respectively
Hello Marco, for 13_1_X I've mostly followed the instructions on the HLT upgrade twiki. Let me write down explicitly what I did.
Set up CMSSW area
cmsrel CMSSW_13_1_0
cd CMSSW_13_1_0/src/
cmsenv
git cms-init
L1 related stuff
git cms-checkout-topic -u cms-l1t-offline:Phase2_prototypeSnapshot_3_CMSSW_13_1
git cms-addpkg L1Trigger/Phase2L1Taus
git cms-addpkg HLTrigger/HLTfilters
git cms-addpkg DataFormats/HLTReco
git cms-addpkg HLTrigger/HLTcore
HLT related stuff (Soham's branch on L1 EG fix is merged in a more recent 13_1_X)
git cms-merge-topic SohamBhattacharya:test_Phase2-HLT_with_L1T-layer2_13_1_0
git cms-merge-topic swagata87:hltSingleEleNonIso131X
git cms-merge-topic Sam-Harper:EGHLTCustomisation_1230pre6
scram b -j`nproc`
Rerun L1
cmsDriver.py step1
--conditions auto:phase2_realistic_T21
-n 100
--era Phase2C17I13M9
--eventcontent FEVTDEBUGHLT
-s RAW2DIGI,L1:RUNP2GT
--datatier GEN-SIM-DIGI-RAW-MINIAOD
--fileout file:output_Phase2_L1T.root
--customise SLHCUpgradeSimulations/Configuration/aging.customise_aging_1000,Configuration/DataProcessing/Utils.addMonitoring,L1Trigger/Configuration/customisePhase2.addHcalTriggerPrimitives,L1Trigger/Configuration/customisePhase2FEVTDEBUGHLT.customisePhase2FEVTDEBUGHLT
--geometry Extended2026D95
--nThreads 1
--filein /store/mc/Phase2Spring23DIGIRECOMiniAOD/TT_TuneCP5_14TeV-powheg-pythia8/GEN-SIM-DIGI-RAW-MINIAOD/noPU_131X_mcRun4_realistic_v5-v1/2520000/00674431-77d6-4dce-9dad-41b6d0ff1d6f.root
--mc
--inputCommands='keep *, drop l1tPFJets_*_*_*'
--outputCommands='keep *P2GT*_*_*_*, drop l1tPFJets_*_*_*'
--python_filename rerunL1_only_cfg.py
--no_exec
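For reference, the two event-content flags above end up as keep/drop statements in the generated configuration. A minimal sketch of the relevant excerpt (this is not the full config, which cmsDriver.py writes out; the module and source names are the cmsDriver defaults and should be treated as assumptions):

```python
# Hypothetical excerpt of rerunL1_only_cfg.py as produced by cmsDriver.py
import FWCore.ParameterSet.Config as cms

# --inputCommands: filter branches while reading the input file
process.source.inputCommands = cms.untracked.vstring(
    "keep *",
    "drop l1tPFJets_*_*_*",
)

# --outputCommands: extend what the FEVTDEBUGHLT output module writes out
process.FEVTDEBUGHLToutput.outputCommands.extend([
    "keep *P2GT*_*_*_*",
    "drop l1tPFJets_*_*_*",
])
```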
It creates a config which I ran, via crab, on this dataset:
/ZprimeToEE_M-6000_TuneCP5_14TeV-pythia8/Phase2Spring23DIGIRECOMiniAOD-PU200_Trk1GeV_131X_mcRun4_realistic_v5-v1/GEN-SIM-DIGI-RAW-MINIAOD
The output files are here (in prod/phys03):
/ZprimeToEE_M-6000_TuneCP5_14TeV-pythia8/phys_egamma-crab_L1Fixed_ph2_200PU_Zprime-0176db68d8416825a94d50997aec30fb/USER
Rerun HLT (on the above output)
cmsDriver.py Phase2 -s HLT:75e33
--processName=HLTX
--conditions auto:phase2_realistic_T21
--geometry Extended2026D95
--era Phase2C17I13M9
--customise SLHCUpgradeSimulations/Configuration/aging.customise_aging_1000,HLTrigger/Configuration/customizeHLTforEGamma.customiseEGammaMenuDev
--eventcontent FEVTDEBUGHLT
--filein=/store/group/phys_egamma/ec/swmukher/phase2/L1Fixed_Zprime_200PU_published/ZprimeToEE_M-6000_TuneCP5_14TeV-pythia8/crab_L1Fixed_ph2_200PU_Zprime/230911_111429/0000/output_Phase2_L1T_1.root
--inputCommands='keep *, drop *_hlt*_*_HLT, drop triggerTriggerFilterObjectWithRefs_l1t*_*_HLT'
-n 100
--nThreads 1
--no_exec
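Two details of this command are worth spelling out: the input file already contains products from a process named "HLT", so the re-run must use a fresh process name (`--processName=HLTX`) and drop the stale HLT products on input. A hedged sketch of how this surfaces in the generated config (an excerpt, not the full file; exact names are assumptions based on the cmsDriver defaults):

```python
# Hypothetical excerpt of the cmsDriver-generated HLT config
import FWCore.ParameterSet.Config as cms

# New process name, since the input already carries an "HLT" process
process = cms.Process("HLTX")

process.source = cms.Source(
    "PoolSource",
    fileNames=cms.untracked.vstring(
        "/store/group/phys_egamma/ec/swmukher/phase2/L1Fixed_Zprime_200PU_published/ZprimeToEE_M-6000_TuneCP5_14TeV-pythia8/crab_L1Fixed_ph2_200PU_Zprime/230911_111429/0000/output_Phase2_L1T_1.root"
    ),
    # Drop stale HLT products so the re-run modules can put fresh ones
    # without any ambiguity between old and new collections
    inputCommands=cms.untracked.vstring(
        "keep *",
        "drop *_hlt*_*_HLT",
        "drop triggerTriggerFilterObjectWithRefs_l1t*_*_HLT",
    ),
)
```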
It creates a config which I ran, via crab, on the dataset created in the previous step:
/ZprimeToEE_M-6000_TuneCP5_14TeV-pythia8/phys_egamma-crab_L1Fixed_ph2_200PU_Zprime-0176db68d8416825a94d50997aec30fb/USER
The output of rerunning HLT is stored in:
/eos/cms/store/group/phys_egamma/ec/swmukher/phase2/HLT_L1Fixed_Zprime_200PU_v2/
The efficiency of HLT in 13_1_X is measured on the above files.
For 13_3_X, I've done the following. In this case L1 is not re-run; as I understood from Soham, this is expected to have minimal effect.
Set up CMSSW area (that IB is chosen as it has Soham's 13_3 branch of the L1 EG fix merged)
cmsrel CMSSW_13_3_X_2023-09-18-2300
cd CMSSW_13_3_X_2023-09-18-2300/src/
cmsenv
git cms-init
HLT related stuff
git cms-merge-topic swagata87:hltSingleEleNonIso
git cms-merge-topic Sam-Harper:EGHLTCustomisation_1230pre6
scram b -j`nproc`
Rerun HLT (on a RelVal sample)
cmsDriver.py Phase2 -s HLT:75e33
--processName=HLTX
--conditions auto:phase2_realistic_T21
--geometry Extended2026D95
--era Phase2C17I13M9
--customise SLHCUpgradeSimulations/Configuration/aging.customise_aging_1000,HLTrigger/Configuration/customizeHLTforEGamma.customiseEGammaMenuDev
--eventcontent FEVTDEBUGHLT
--filein /store/relval/CMSSW_13_3_0_pre2/RelValZpToEE_m6000_14TeV/GEN-SIM-DIGI-RAW/PU_131X_mcRun4_realistic_v6_2026D98PU200-v1/2580000/015b1c6a-25be-4a3c-bca3-ae047eb03fbb.root
--inputCommands='keep *, drop *_hlt*_*_HLT, drop triggerTriggerFilterObjectWithRefs_l1t*_*_HLT'
-n 100
--nThreads 1
--no_exec
It creates a config which I ran, via crab, on this dataset:
/RelValZpToEE_m6000_14TeV/CMSSW_13_3_0_pre2-PU_131X_mcRun4_realistic_v6_2026D98PU200-v1/GEN-SIM-DIGI-RAW
The RelVal sample uses the D98 geometry, while I used --geometry Extended2026D95 in cmsDriver. This should not have any effect on the result, as the two geometries are very similar from a physics point of view.
The output of rerunning HLT is stored in:
/eos/cms/store/group/phys_egamma/ec/swmukher/phase2/relval_zprime_pu_13_3_X
The efficiency of HLT in 13_3_X is measured on the above files.
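Since the efficiencies from the two releases are compared from these files, it helps to quote a binomial uncertainty on each efficiency point. A minimal, self-contained sketch in pure Python (the function name is mine, not from CMSSW; z=1 gives roughly a 68% interval):

```python
import math

def wilson_interval(passed: int, total: int, z: float = 1.0) -> tuple:
    """Wilson score interval for a binomial efficiency passed/total."""
    if total == 0:
        return (0.0, 0.0)
    p = passed / total
    denom = 1.0 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (center - half, center + half)
```

Unlike the naive sqrt(p(1-p)/N) error, this interval stays inside [0, 1] even for efficiencies near 0 or 1, which matters for turn-on curves.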
type egamma
assign upgrade
New categories assigned: upgrade
@AdrianoDee,@srimanob you have been requested to review this Pull request/Issue and eventually sign? Thanks
Hi @swagata87, thanks for the report. Do I understand correctly that the inefficiency you observed is in HLT only? Do you spot any changes also in RECO electrons?
I did not explicitly check offline reco electrons. But that is normally checked as part of release validation, so any issue would have been / will be spotted. Tagging @cms-sw/egamma-pog-l2 in case they are aware of anything in RECO electron validation for Phase 2. If nothing is spotted there, then it could be an HLT-specific issue. The pixel-matching algorithm at HLT is different from the offline one.
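For intuition on where such an inefficiency can enter: pixel matching accepts an electron candidate only if its pixel hits fall inside φ and z windows around the trajectory extrapolated from the supercluster. A deliberately simplified toy of such a window cut (the window sizes and function names here are purely illustrative, not the actual CMSSW values or interfaces):

```python
import math

def delta_phi(a: float, b: float) -> float:
    """Signed difference of two azimuthal angles, wrapped to (-pi, pi]."""
    d = (a - b) % (2 * math.pi)
    if d > math.pi:
        d -= 2 * math.pi
    return d

def passes_pixel_match(seed_phi: float, hit_phi: float,
                       seed_z: float, hit_z: float,
                       dphi_max: float = 0.04, dz_max: float = 0.5) -> bool:
    """Toy window cut: a real matcher recomputes the windows per hit,
    per charge hypothesis, and per detector region."""
    return (abs(delta_phi(seed_phi, hit_phi)) < dphi_max
            and abs(seed_z - hit_z) < dz_max)
```

In this picture, an efficiency loss can come either from the windows themselves or from a shift in the inputs they are evaluated against (e.g. the beamspot used in the extrapolation), which is why the beamspot check below was done.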
One of the possible sources of discrepancy proposed by @swagata87 at the pixel-matching step is a difference in how the online BeamSpot is supplied to the event (due to e.g. https://github.com/cms-sw/cmssw/pull/41597 ). This was explicitly tested by running the recipes at https://github.com/cms-sw/cmssw/issues/42850#issuecomment-1732468495 while adding this change:
diff --git a/RecoVertex/BeamSpotProducer/plugins/BeamSpotOnlineProducer.cc b/RecoVertex/BeamSpotProducer/plugins/BeamSpotOnlineProducer.cc
index 83aa832cfa5..52550f60448 100644
--- a/RecoVertex/BeamSpotProducer/plugins/BeamSpotOnlineProducer.cc
+++ b/RecoVertex/BeamSpotProducer/plugins/BeamSpotOnlineProducer.cc
@@ -226,6 +226,9 @@ void BeamSpotOnlineProducer::produce(Event& iEvent, const EventSetup& iSetup) {
*result = aSpot;
+
+ edm::LogPrint("BeamSpotOnlineProducer")<< "Reco beamspot: " << aSpot << std::endl;
+
iEvent.put(std::move(result));
}
in both 13.1.X and 13.3.X I see:
Reco beamspot: -----------------------------------------------------
Beam Spot Data
Beam type = 2
X0 = 9.23526e-06 +/- 0.0001 [cm]
Y0 = -2.44781e-07 +/- 0.0001 [cm]
Z0 = 0.0196042 +/- 0.00952838 [cm]
Sigma Z0 = 4.25743 +/- 0.00154402 [cm]
dxdz = 0 +/- 0.0005 [radians]
dydz = 0 +/- 0.0005 [radians]
Beam width X = 0.00109274 +/- 0.000158146 [cm]
Beam width Y = 0.00053692 +/- 0.000158146 [cm]
EmittanceX = 0.00025 [cm]
EmittanceY = 0.000205 [cm]
beta-star = 20 [cm]
-----------------------------------------------------
thus I would exclude this as a possible source of the inefficiency.
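The check above amounts to comparing the printed beamspot parameters between the two releases. A self-contained sketch of such a comparison (the dict keys, tolerances, and function name are my choices; the numbers are copied from the printout above):

```python
def beamspots_agree(a: dict, b: dict, rtol: float = 1e-6, atol: float = 1e-12) -> bool:
    """True if both dicts have the same keys and every value matches within tolerance."""
    if a.keys() != b.keys():
        return False
    return all(abs(a[k] - b[k]) <= atol + rtol * max(abs(a[k]), abs(b[k])) for k in a)

# Values as printed by the LogPrint patch; the printout was identical
# in 13.1.X and 13.3.X, hence the copy below
bs_131X = {"X0": 9.23526e-06, "Y0": -2.44781e-07, "Z0": 0.0196042,
           "sigmaZ0": 4.25743, "widthX": 0.00109274, "widthY": 0.00053692}
bs_133X = dict(bs_131X)
```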
Do we still need this issue?
> Do we still need this issue?
I have not seen any confirmation of resolution, so I would expect yes.
Does this issue persist in 14_0_X? If so, it should be checked whether #43809 makes any difference.
I'm opening this issue as asked here: https://github.com/cms-sw/cmssw/pull/42819#issuecomment-1731746839
We are seeing some inefficiency in Phase-2 HLT electron paths, primarily coming from the pixel-matching step, and this seems to affect all electron paths in 13_3_X, both in the barrel and in the endcaps. We have checked that the efficiency is fine in 13_1_X.
The inefficiency in 13_3_X was spotted in the context of adding a new electron path to the Phase2 HLT menu (PR: https://github.com/cms-sw/cmssw/pull/42819, talk: https://indico.cern.ch/event/1324703/#3-adding-single-electron-non-i). Later we found that the problem is not limited to the new path and affects all electron paths.
One set of example plots is shown here (left is 13_1_X, right is 13_3_X):![](https://github.com/cms-sw/cmssw/assets/5052706/32ae0413-9dd7-415a-955d-3366b3888448)
![](https://github.com/cms-sw/cmssw/assets/5052706/81fb8641-c7e7-4bda-b783-58a9cbdecb89)
Nothing has changed in the pixel matching algorithm or parameters recently, so this is a bit unexpected.