Closed AndrewLevin closed 5 years ago
Actually, given that we are storing the masks in the scurveFitTree, I think we can just re-use them rather than re-calculating them.
Also, I noticed that the upper limit of the pedestal in the fit is set to nevents (https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/fitting/fitScanData.py#L305), which I believe is not even correct unit-wise. It should be self.calDAC2Q_m[vfat]*(256)+self.calDAC2Q_b[vfat], right?
> Also, I noticed that the upper limit of the pedestal in the fit is set to nevents (https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/fitting/fitScanData.py#L305), which I believe is not even correct unit-wise. It should be self.calDAC2Q_m[vfat]*(256)+self.calDAC2Q_b[vfat], right?
You could try it and see how it affects the fitting routine.
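To make the unit issue concrete, the scurve charge axis comes from the CAL DAC code via a linear calibration, so the pedestal's upper bound should be expressed in those charge units rather than in event counts. A minimal sketch; the slope and intercept values below are illustrative placeholders, not the repository's actual per-VFAT calibration constants:

```python
# Linear DAC-to-charge calibration: Q = m * dac + b
# (placeholder values; the real ones come from the per-VFAT
# calibration loaded by the analysis)
calDAC2Q_m = 0.22  # fC per DAC count (illustrative)
calDAC2Q_b = -0.8  # fC offset (illustrative)

def dac_to_charge(dac, m=calDAC2Q_m, b=calDAC2Q_b):
    """Convert a CAL DAC code to charge in fC."""
    return m * dac + b

# Proposed upper limit for the pedestal fit parameter: the charge
# corresponding to the full 8-bit DAC range, not nevents.
pedestal_upper_limit = dac_to_charge(256)
```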
> import them into both the scurve analysis and the ARM DAC calibration.

Also ana_scans.py; basically, this parent parser should be in utils/anaoptions.py and be imported by the three scripts and supplied as a parent parser (see the implementation in ana_scans.py).
> Also ana_scans.py; basically, this parent parser should be in utils/anaoptions.py and be imported by the three scripts and supplied as a parent parser (see the implementation in ana_scans.py).
So, do you agree with https://github.com/cms-gem-daq-project/gem-plotting-tools/issues/235#issuecomment-510004094? If so, it should be imported into just two scripts (ana_scans.py and anaUltraScurve.py; it is not needed in calibrateThrDac.py).
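The parent-parser pattern under discussion can be sketched like this; the option names and defaults below are illustrative stand-ins, the real shared options would live in utils/anaoptions.py:

```python
import argparse

# Shared cut options, defined once (in the real code this would live
# in utils/anaoptions.py) and reused by each consuming script.
def make_cut_parser():
    parser = argparse.ArgumentParser(add_help=False)  # parent: no own -h
    parser.add_argument("--maxEffPedPercent", type=float, default=0.05,
                        help="max effective pedestal, as a fraction of "
                             "the number of events (illustrative default)")
    parser.add_argument("--highNoiseCut", type=float, default=1.0,
                        help="scurve-width cut in fC (illustrative default)")
    return parser

# Each consuming script supplies the shared parser via parents=[...]:
parser = argparse.ArgumentParser(parents=[make_cut_parser()])
args = parser.parse_args(["--maxEffPedPercent", "0.10"])
```

Because the parent is defined in one place, changing a default or adding a cut option propagates to every script that imports it.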
> Actually, given that we are storing the masks in the scurveFitTree, I think we can just re-use them rather than re-calculating them.
This decreases your flexibility: you have the analyzed TTree and so could change the analysis cuts at the time calibrateThrDac.py is called; otherwise the only way to change them is to re-run the analysis with anaUltraScurve.py, and that may not be what's wanted (e.g. you'd have to re-analyze N scurve files just to, say, change the pedestal cut by 1%; this takes much longer than looping over the TTree and seeing which pedestals would fail a 1% increase in the cut).
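The "loop over the TTree" alternative might look like the following sketch. Plain dicts stand in for scurveFitTree entries here, and the field names are assumptions for illustration; in the real code the entries would be read from the ROOT file produced by the scurve analysis:

```python
# Stand-in for scurveFitTree entries: one record per channel fit.
# Field names (vfatN, vfatCH, ped_eff) are assumptions.
entries = [
    {"vfatN": 0, "vfatCH": 7,  "ped_eff": 120.0},
    {"vfatN": 0, "vfatCH": 12, "ped_eff": 4.0},
    {"vfatN": 1, "vfatCH": 63, "ped_eff": 0.0},
]

def channels_failing_cut(entries, nevents, max_frac):
    """Return (vfatN, vfatCH) pairs whose effective pedestal exceeds
    max_frac of the total number of events taken."""
    return [(e["vfatN"], e["vfatCH"]) for e in entries
            if e["ped_eff"] > max_frac * nevents]

# Tightening the cut only requires another pass over the stored tree,
# not a re-run of anaUltraScurve.py on all the scurve files.
failing = channels_failing_cut(entries, nevents=100, max_frac=0.05)
```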
These are needed in calibrateThrDac.py
(Returning back to the issue as requested by @bdorney)
In order to apply the quality cut on the pedestal, we need to know both the effective pedestal and the total number of events taken. Currently, we only store the effective pedestal in the scurveFitTree. @bdorney has suggested, and I agree, that we should redefine the effective pedestal to be relative to the total number of events taken. Any objections?
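Under that redefinition the stored quantity is already a fraction of the events taken, so the cut can be applied from the TTree alone. A minimal sketch with illustrative names:

```python
def effective_pedestal_fraction(n_hits_at_zero_charge, nevents):
    """Effective pedestal redefined relative to the total number of
    events taken: the value stored in the scurveFitTree becomes a
    dimensionless fraction in [0, 1]."""
    return n_hits_at_zero_charge / float(nevents)

def passes_pedestal_cut(ped_frac, max_frac=0.05):
    # No separate knowledge of nevents is needed at cut time.
    return ped_frac <= max_frac
```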
Regarding the comment about the namespace (https://github.com/cms-gem-daq-project/gem-plotting-tools/pull/239#pullrequestreview-260841141): do you have an example?
Regarding this comment (https://github.com/cms-gem-daq-project/gem-plotting-tools/pull/239#pullrequestreview-260843255): this is actually in the issue: "we first redesign the loading of data in the ARM DAC calibration script such that it is based on the TTree, rather than the TGraphs"
> Regarding this comment: this is actually in the issue: "we first redesign the loading of data in the ARM DAC calibration script such that it is based on the TTree, rather than the TGraphs"
Some comments in the code to assist the reader would seem appropriate then.
> Regarding the comment about the namespace: do you have an example?
Calling ArgumentParser.parse_args() returns a namespace object; an example prototype would be namespace.py, and usage is shown in confAllChambers.py.
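A minimal illustration of the Namespace object in question:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--pedestal", type=float, default=0.0)
args = parser.parse_args([])  # returns an argparse.Namespace

# Attributes are read like any object's attributes, and a Namespace
# can also be constructed directly, e.g. when calling analysis code
# programmatically rather than from the command line:
manual = argparse.Namespace(pedestal=1.5)
```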
Below I show the residuals (defined as fitted scurve_mean minus scurve_mean) for GE11-X-S-CERN-0010 ARM DAC calibration data for the head of develop and for the proposed solution of this issue. The proposed solution decreases the width of the residual distribution slightly.
HEAD of develop
Proposed change
We would like the ARM DAC calibration to fit only good data.
Brief summary of issue
Currently, the ARM DAC calibration script https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/macros/calibrateThrDac.py performs the ARM DAC calibration based on TGraphs loaded from the root file that is output from the scurve analysis. These TGraphs are filled with all 128 channels, not considering the channel masks, e.g.:
https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/581e2414844bbba0ce5a122f4c67d7183b9d3daf/utils/scurveAlgos.py#L622-L636

So, we are fitting partially good and partially bad data, including channels which are masked as in-the-pedestal, which would affect the low ARM DAC region where problems have been reported by the QC8 team.
Proposed Solution
@bdorney has proposed that we first redesign the loading of data in the ARM DAC calibration script such that it is based on the TTree, rather than the TGraphs, that is output from the scurve analysis. Then we apply the same quality cuts to the channel fits as are currently applied in the scurve analysis. The cut values are currently configurable as argparse options with default values, and @bdorney proposed to move them into https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/utils/anaoptions.py and import them into both the scurve analysis and the ARM DAC calibration.

I agree with this proposed solution.
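The masked-channel part of the proposal could be sketched as follows. Dicts stand in for scurveFitTree entries, and the field names (in particular the mask flag) are assumptions for illustration:

```python
# Stand-in for scurveFitTree entries; in the real code these come
# from the TTree in the scurve-analysis output file.
fit_results = [
    {"vfatN": 0, "vfatCH": 3, "threshold": 22.1, "mask": 0},
    {"vfatN": 0, "vfatCH": 4, "threshold": 0.4,  "mask": 1},  # in pedestal
    {"vfatN": 0, "vfatCH": 5, "threshold": 21.7, "mask": 0},
]

def good_channel_thresholds(fit_results):
    """Keep only unmasked channels, so the ARM DAC calibration fits
    good data instead of all 128 channels indiscriminately."""
    return [e["threshold"] for e in fit_results if not e["mask"]]

thresholds = good_channel_thresholds(fit_results)
```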