Closed benkrikler closed 10 years ago
Summarized from here, I think this is it:
$ ./rootana -i tree.root -o out.root -m configurations/pedestalsandnoise_calib.cfg -c
$ scripts/merge_calibration_database.py calibration.db calib.run.PedestalAndNoise.csv
$ ./rootana -i tree.root -o out.root -m configurations/timeoffset_calib.cfg
$ scripts/merge_calibration_database.py calibration.db calib.run.CoarseTimeOffset.csv
The last one takes a while. You can disable channels by modifying timeoffset_calib.cfg.
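In case it helps to see them chained together, here is a rough sketch of those four steps as one script (the tree/output file names and the calibration.db location are placeholders, so adjust them to your own setup):

#!/bin/bash
# Sketch only: run both calibration passes on one tree file and merge each CSV into the calibration DB.
set -e
TREE=tree.root        # placeholder input tree
OUT=out.root          # placeholder rootana output

# Pass 1: pedestals and noise
./rootana -i "$TREE" -o "$OUT" -m configurations/pedestalsandnoise_calib.cfg -c
scripts/merge_calibration_database.py calibration.db calib.run.PedestalAndNoise.csv

# Pass 2: coarse time offsets (slow; disable channels in timeoffset_calib.cfg to speed it up)
./rootana -i "$TREE" -o "$OUT" -m configurations/timeoffset_calib.cfg
scripts/merge_calibration_database.py calibration.db calib.run.CoarseTimeOffset.csv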
For the most part I think we've all been using the same calib DBs, but I'm not against having one central place. If you run it all, I'll use it. If it's going too slow for you, I can help run it. We just need to make sure to save all of the CSV files until we're certain everything is okay.
Thanks John. I'm more interested in the run scripts, though. I've read through the wiki on the production scripts and our previous exchanges about setting up the DBs and running, but I'm still not sure which commands I need to issue to run over a particular dataset with the scripts. I could write this all in a bash script easily enough, but if we've got your production scripts it would be nice to use them, since they already take care of everything in a very nice way. A simple bullet-point walk-through would be really useful.
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --new=pedcalib --version=3 --modules=~/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg --database=~/production.db --dataset=Al100 --calib
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --modules=~/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg --database=~/production.db --dataset=Al50b --calib
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --modules=~/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg --database=~/production.db --dataset=Al50awithNDet2 --calib
AlcapDAQ/analyzer/rootana/scripts/merge_calibration_database.py ~/calibration.db ~/calib.run0????.PedestalAndNoise.csv
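Since those three calls differ only in --dataset (and the first one also passes --new=pedcalib --version=3), a loop along these lines might save some typing; this is just a sketch, using the same paths as above but with $HOME instead of ~:

# Sketch: pedestal/noise calibration over the three datasets, then merge the CSVs.
FIRST=1
for DS in Al100 Al50b Al50awithNDet2; do
  EXTRA=""
  if [ $FIRST -eq 1 ]; then
    EXTRA="--new=pedcalib --version=3"   # only the first call passes --new/--version
    FIRST=0
  fi
  AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana $EXTRA \
    --modules=$HOME/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg \
    --database=$HOME/production.db --dataset=$DS --calib
done
AlcapDAQ/analyzer/rootana/scripts/merge_calibration_database.py $HOME/calibration.db $HOME/calib.run0????.PedestalAndNoise.csv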
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --new=timecalib --version=3 --modules=~/AlcapDAQ/analyzer/rootana/configurations/timeoffset_calib.cfg --database=~/production.db --dataset=Al100 --calib
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --modules=~/AlcapDAQ/analyzer/rootana/configurations/timeoffset_calib.cfg --database=~/production.db --dataset=Al50b --calib
AlcapDAQ/analyzer/batch/jobscripts/run_production.py --production=rootana --modules=~/AlcapDAQ/analyzer/rootana/configurations/timeoffset_calib.cfg --database=~/production.db --dataset=Al50awithNDet2 --calib
AlcapDAQ/analyzer/rootana/scripts/merge_calibration_database.py ~/calibration.db ~/calib.run0????.CoarseTimeOffset.csv
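After the merges, a quick sanity check on the calibration DB might look like this; note I'm only guessing the table names from the CSV file names, so it's worth listing the actual tables first:

sqlite3 ~/calibration.db ".tables"
sqlite3 ~/calibration.db "SELECT COUNT(DISTINCT run) FROM PedestalAndNoise"   # assumed table name
sqlite3 ~/calibration.db "SELECT COUNT(DISTINCT run) FROM CoarseTimeOffset"   # assumed table name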
sqlite3 ~/calibration.db "CREATE TABLE Energy(run INT, channel TEXT, gain REAL, offset REAL)"
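Rows in that Energy table would then be filled per run and per channel; just to show the schema in use, a sketch with entirely made-up run, channel and calibration values:

sqlite3 ~/calibration.db "INSERT INTO Energy VALUES (2808, 'SiR1-1-S', 0.0025, -1.3)"   # hypothetical numbers
sqlite3 ~/calibration.db "SELECT gain, offset FROM Energy WHERE run=2808 AND channel='SiR1-1-S'"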
Note the --calib flag.

Thanks John, that's really helpful. I hit problems with tilde expansion, but replacing the ~ with $HOME seemed to do it.
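Concretely, that just means writing, for example,

--modules=$HOME/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg

instead of

--modules=~/AlcapDAQ/analyzer/rootana/configurations/pedestalsandnoise_calib.cfg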
Hi guys,
I'm sorry to repeat a question that's more or less been asked before, but I'm struggling to find the right set of commands to do this and it seemed daft to waste more time on it. I want to run the calibration over the active target datasets, probably for all channels, though the muon counters and SiR* channels seem the most important.
How can I use the run scripts to just set things running through all runs in that dataset? Just the basic commands to set up the run db and launch the jobs would be a help.
For what it's worth, could we just calibrate all channels for all runs now, and then just have one standard, canonical calibration DB?