This is a command-line tool for performing a high-mass Higgs search analysis with the combine software. It provides options for creating datacards, combining them, and running combine on the resulting cards to produce results such as limit values and impact plots.
Step-1: Combine setup
---------------------

The combine setup is inherited from https://cms-analysis.github.io/HiggsAnalysis-CombinedLimit/:
.. code:: bash

   export SCRAM_ARCH=slc7_amd64_gcc700
   cmsrel CMSSW_11_3_4
   cd CMSSW_11_3_4/src
   cmsenv
   git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
   cd HiggsAnalysis/CombinedLimit
   cd $CMSSW_BASE/src/HiggsAnalysis/CombinedLimit
   git fetch origin
   git checkout v9.0.0
   scramv1 b clean; scramv1 b  # always make a clean build
   cd $CMSSW_BASE/src
   bash <(curl -s https://raw.githubusercontent.com/cms-analysis/CombineHarvester/main/CombineTools/scripts/sparse-checkout-ssh.sh)
   scramv1 b -j 8
Step-2: Get the custom tool for datacard creation and limit computation
-----------------------------------------------------------------------
.. code:: bash

   cd $CMSSW_BASE/src
   git clone git@github.com:ram1123/2l2q_limitsettingtool.git -b main
The tool is run from the command line. The available options are listed below:
.. list-table::
   :header-rows: 1

   * - Option
     - Type / action
     - Default
     - Notes
   * - ``-i, --input``
     - str
     - ``""``
     -
   * - ``-d, --is2D``
     - int
     - ``1``
     -
   * - ``-a, --append``
     - str
     - ``""``
     -
   * - ``--dry-run``
     - store_true
     - ``False``
     -
   * - ``-p, --parallel``
     - store_true
     - ``False``
     -
   * - ``-mi, --MassStartVal``
     - int
     - ``500``
     -
   * - ``-mf, --MassEndVal``
     - int
     - ``3001``
     -
   * - ``-ms, --MassStepVal``
     - int
     - ``50``
     -
   * - ``-y, --year``
     - str
     - ``2016``
     -
   * - ``-c, --ifCondor``
     - store_true
     - ``False``
     -
   * - ``-allDatacard, --allDatacard``
     - store_true
     - ``False``
     - ``ListOfDatacards.py``
   * - ``-f, --fracVBF``
     - float
     - ``-1``
     -
   * - ``-b, --blind``
     - store_false
     - ``True``
     -
   * - ``-signalStrength, --signalStrength``
     - float
     - ``0.0``
     -
   * - ``-freezeParameters, --freezeParameters``
     - str
     - ``""``
     - e.g. ``r=-1,3:BTAG_resolved=-5,5:BTAG_merged=-5,5``
   * - ``--log-level``
     - logging level
     - ``logging.INFO``
     -
   * - ``--log-level-roofit``
     - RooFit level
     - ``ROOT.RooFit.WARNING``
     -
   * - ``-v, --verbose``
     - store_true
     - ``False``
     -
   * - ``-date, --date``
     - str
     - ``""``
     -
   * - ``-tag, --tag``
     - str
     - ``""``
     -
   * - ``-sanityCheck, --sanity-check``
     - store_true
     - ``False``
     -
   * - ``-s, --step``
     - str
     - ``dc``
     - one of ``dc``, ``cc``, ``ws``, ``rc``, ``fd``, ``ri``, ``fs``, ``rll``, ``corr``, ``plot``, ``all``
   * - ``-ss, --substep``
     - int
     - ``11``
     -
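As an illustration of the mass-range options, the defaults ``-mi 500 -mf 3001 -ms 50`` correspond to a scan such as the following. This is a sketch under the assumption that ``MassEndVal`` acts as an exclusive upper bound, so the last point of the default scan is 3000:

```shell
# Mass points implied by the default -mi/-mf/-ms values
# (assumption: MassEndVal=3001 is an exclusive upper bound).
mass_points=$(seq 500 50 3000)
echo "$mass_points" | wc -l    # number of mass points in the scan
echo "$mass_points" | tail -1  # last mass point
```
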
Usage Example
-------------
.. code:: bash

   # Create datacards
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s dc
   # Combine the datacards
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s cc
   # Run combine on the combined cards
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s rc
   # Same, but on condor (-c), submitting all mass points in parallel (-p)
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s rc -c -p
The ``-p`` option submits jobs for all mass points in parallel, and ``-ss`` specifies which sub-step to run:

.. code:: bash

   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s ri -ss 1 -c -p
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s ri -ss 2 -c -p
   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s ri -ss 3 -c -p
Use ``-mi`` and ``-mf`` to restrict the run to a single mass point:

.. code:: bash

   python makeDCsandWSs.py -i HM_inputs_2018UL -y 2018 -s ri -ss 1 -mi 500 -mf 501
To run this tool, you will need the following inputs:

- A directory (e.g. ``HM_inputs_2018UL``) that contains the systematics information.
- ``Resolution``.
- ``templates1D`` and ``templates2D``.
- ``CMSdata``.
- ``SigEff``.

Please make sure that all of these directories and files are available and properly formatted before running the tool.
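A quick pre-flight check can catch a missing input early. This is a sketch: the directory names are taken from the list above and are assumed to sit in the current working directory.

```shell
# Verify the expected input directories exist before launching the tool.
for d in HM_inputs_2018UL Resolution templates1D templates2D CMSdata SigEff; do
  if [ -d "$d" ]; then
    echo "found:   $d"
  else
    echo "MISSING: $d"
  fi
done
```
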
Here are some additional details to keep in mind when running this tool:

- In ``HM_inputs_*`` you should prepare 12 systematics files, one per combination of (resolved, merged) × (b_tagged, un-tagged, vbf_tagged) × (ee, mumu).
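The 12 combinations can be enumerated as follows. This is only an illustration of the bookkeeping; the actual file names in ``HM_inputs_*`` may differ.

```shell
# 2 topologies x 3 tag categories x 2 lepton channels = 12 systematics files
for topo in resolved merged; do
  for tag in b_tagged un-tagged vbf_tagged; do
    for ch in ee mumu; do
      echo "${topo} / ${tag} / ${ch}"
    done
  done
done
```
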
- You can then simply edit these ``.txt`` files to change the systematics values.
- ``-a`` appends a name to the cards directory: for example, ``-a test`` will create ``cards_test`` to store all datacards. When you run this tool, it is better to keep the ``-a`` option the same as ``-y``.
  For example: ``cards_2016``, ``cards_2017``, and ``cards_2018``.
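Putting the ``-a`` and ``-y`` options together, a datacard-creation pass over all three years might look like the loop below. This is hypothetical: the per-year input directories are assumed to follow the ``HM_inputs_<year>UL`` pattern, and the ``echo`` only prints the commands (drop it to actually run them).

```shell
# Keep -a in sync with -y so datacards land in cards_2016, cards_2017, cards_2018.
for year in 2016 2017 2018; do
  echo python makeDCsandWSs.py -i "HM_inputs_${year}UL" -y "$year" -s dc -a "$year"
done
```
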