jstupak opened this issue 2 years ago
Yes, that looks great. As I mentioned, it would be ideal if this were in the Snowmass21-software group, but that's not a huge issue.
I also just learned how easy it is, in general, to configure the CI to do various things, but I haven't tried it myself yet. Would it be possible to additionally run a few scripts (generateEvents.sh, testGridPacks.sh, lepMult/test.sh) and require them to finish successfully? I keep breaking one workflow when I modify another, so this would be super helpful.
I don't have permission to create new repositories in the Snowmass21-software group. Can you add me to it? Or make the fork yourself; I can give you instructions to set up the CI (secrets for DockerHub).
Regarding testing specific workflows, I already run an example for dijets. This can be extended to your scripts. The only limitation might be the allowed processing time from the free GitHub Actions tier (6 hours?).
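Something along these lines should do it — a rough sketch of a GitHub Actions workflow that runs each of your scripts as a matrix job and fails if any of them exits non-zero (the workflow filename, container image, and script paths are assumptions on my part, so adjust to the actual layout):

```yaml
# .github/workflows/test-scripts.yml -- sketch only, names are placeholders
name: test-scripts

on: [push, pull_request]

jobs:
  run-script:
    runs-on: ubuntu-latest
    timeout-minutes: 360              # jobs on the free tier are capped at 6 hours
    strategy:
      fail-fast: false                # keep running the other scripts if one breaks
      matrix:
        script: [generateEvents.sh, testGridPacks.sh, lepMult/test.sh]
    steps:
      - uses: actions/checkout@v3
      # Assumption: the scripts can run inside the image this repo builds
      - name: Run ${{ matrix.script }}
        run: |
          docker run --rm -v "$PWD:/work" -w /work ghcr.io/kkrizka/mcprod-docker:latest \
            bash "${{ matrix.script }}"
```

Each matrix entry becomes its own check, so a change that breaks one workflow shows up immediately without hiding the status of the others.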
You clearly understand this better than I do. Is it required to have this in a separate repo? This is what we do for one of my analyses: https://gitlab.cern.ch/Hto4bLLP/Hto4bLLPAlgorithm/-/blob/r22/master/.gitlab-ci.yml. In case you're not in ATLAS and can't access it, this file contains:
```yaml
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  PACKAGE_NAME: Hto4bLLPAlgorithm
  SRC_DIR: src
  BUILD_DIR: build
  SRC_DIR_ABS: "${CI_PROJECT_DIR}/${SRC_DIR}"
  BUILD_DIR_ABS: "${CI_PROJECT_DIR}/${BUILD_DIR}"

stages:
  - build
  - run
  - hist
  - plot

.build_template:
  stage: build
  before_script:
    - echo "Project Directory ${CI_PROJECT_DIR}"
    - echo "Source Directory ${SRC_DIR_ABS}"
    - echo "Directory Name ${SRC_DIR}"
    - echo "Build Directory ${BUILD_DIR_ABS}"
    - echo "Directory Name ${BUILD_DIR}"
    - source /home/atlas/release_setup.sh
  script:
    - mkdir -p src/Hto4bLLPAlgorithm
    - mv $(ls . | grep -v src) src/Hto4bLLPAlgorithm
    - cp src/Hto4bLLPAlgorithm/util/CMakeLists.txt.TopLevel src/CMakeLists.txt
    - mkdir build
    - cd build
    - cmake ../src
    - make -j4
  artifacts:
    paths:
      - build
    expire_in: 1 day

build:
  extends: .build_template
  image: atlas/analysisbase:22.2.48

.build_latest:
  extends: .build_template
  image: atlas/analysisbase:latest
  allow_failure: yes

build_image:
  stage: build
  tags:
    - docker-image-build
  script:
    - ignore
  allow_failure: yes

# Template for the run:
.run_template:
  stage: run
  image: atlas/analysisbase:22.2.48
  before_script:
    - source /home/atlas/release_setup.sh
    - source build/${AnalysisBase_PLATFORM}/setup.sh
    - printf $SERVICE_PASS | base64 -d | kinit $CERN_USER@CERN.CH
  script:
    - mkdir -p src/Hto4bLLPAlgorithm
    - mv $(ls . | grep -v 'src\|build') src/Hto4bLLPAlgorithm
    - mkdir run
    - pwd; ls
    - cd run
    - pwd; ls
    - ${RUN_COMMAND}
  artifacts:
    paths:
      - run
    expire_in: 1 week
  allow_failure: false
  dependencies:
    - build

runSignal:
  variables:
    RUN_COMMAND: xAH_run.py --config ../src/Hto4bLLPAlgorithm/data/config_main.py --files root://eosuser.cern.ch///eos/user/j/jburzyns/public/testFilesDAOD_PHYS --scanXRD --isMC --submitDir testRunSignal direct
  extends:
    - .run_template

# Template for the hist:
.hist_template:
  stage: hist
  image: atlas/analysisbase:22.2.48
  before_script:
    - source /home/atlas/release_setup.sh
    - source build/${AnalysisBase_PLATFORM}/setup.sh
    - printf $SERVICE_PASS | base64 -d | kinit $CERN_USER@CERN.CH
  script:
    - mkdir -p src/Hto4bLLPAlgorithm
    - mv $(ls . | grep -v 'src\|build\|run') src/Hto4bLLPAlgorithm
    - pwd; ls
    - cd run
    - pwd; ls
    - ${RUN_COMMAND}
  artifacts:
    paths:
      - run
    expire_in: 1 week
  allow_failure: false

runHist:
  variables:
    RUN_COMMAND: xAH_run.py --config ../src/Hto4bLLPAlgorithm/data/config_hist.py --files testRunSignal/data-tree/*.root --submitDir testRunHist --treeName outTree direct
  extends:
    - .hist_template
  dependencies:
    - build
    - runSignal

# Template for the plot:
.plot_template:
  stage: plot
  image: atlas/analysisbase:22.2.48
  before_script:
    - source /home/atlas/release_setup.sh
    - source build/${AnalysisBase_PLATFORM}/setup.sh
    - printf $SERVICE_PASS | base64 -d | kinit $CERN_USER@CERN.CH
  script:
    - pwd; ls
    - cd run
    - pwd; ls
    - mv testRunHist/hist-data-tree.root testRunHist/mc16_13TeV.313415.hist.root
    - echo "testRunHist/mc16_13TeV.313415.hist.root" > files.txt
    - mkdir plots
    - export PYTHONPATH=${PYTHONPATH}:/builds/Hto4bLLP/Hto4bLLPAlgorithm/deps/plotcore/
    - ${RUN_COMMAND}
  artifacts:
    paths:
      - run
    expire_in: 1 week
  allow_failure: false

runPlot:
  variables:
    RUN_COMMAND: python3 ../deps/plotcore/share/test.py -i files.txt -o plots
  extends:
    - .plot_template
  dependencies:
    - build
    - runSignal
    - runHist
```
IIUC this creates a Docker image (among other things) in the same repo.
You can also push the Docker images to the GitHub Container Registry instead of DockerHub. I'll update my fork for that. It might require less per-repo configuration.
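For reference, the usual pattern in a GitHub Actions workflow looks roughly like this (a sketch, not necessarily what my fork will end up with; the image tag is a placeholder):

```yaml
# Sketch of a build-and-push workflow for ghcr.io; the image tag is a placeholder
name: docker-ghcr

on:
  push:
    branches: [main]

permissions:
  contents: read
  packages: write    # lets the built-in GITHUB_TOKEN push to the registry

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Log in to the GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}   # built-in token, no per-repo secrets needed
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository_owner }}/mcprod-docker:latest
```

Because it authenticates with the built-in GITHUB_TOKEN, a fork gets this for free without configuring any secrets.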
awesome, thanks
Updates:

- kkrizka/MCProd-docker now pushes to GHCR. You don't have to set up any secrets for it, so forking my repo should be enough.

The current blocker for both is a working HEFT model for the Higgs sample. I want to make sure that the setup works in Docker before tagging a new release. Currently the CI is failing for the Higgs sample. I believe this is the param_card.dat mismatch from a few months ago. Did you ever fix that in the main branch?
https://github.com/kkrizka/MCProd-docker/ ?