matthewfeickert opened 3 weeks ago
@alexander-held My guess is that the list that @eguiraud determined in https://github.com/iris-hep/analysis-grand-challenge/issues/144#issue-1715999864 has changed since then. This PR currently just implements the requirements described in https://github.com/iris-hep/analysis-grand-challenge/issues/199#issue-1886192092, but I assume there will be more that we will need to test with.
@alexander-held Can the `analyses/cms-open-data-ttbar/requirements.txt` be removed, or is it important to retain for some use case that won't use `pixi`?
Okay, I'll want to rebase this to get it into a single commit before merge, but to run the `analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.ipynb` on the CMS open data coffea-casa with the `coffea` v0.7 image you just need to run (after cloning this branch)

```
pixi run install-ipykernel
```

and then you're good to go, as that will also properly install the environment you need (making sure that you select the `cms-open-data-ttbar` kernel in the notebook).
@matthewfeickert yes, let's remove the `requirements.txt`; I can't think of anything depending on it at the moment. If it causes problems down the line we can add something like that back again.
@alexander-held @oshadura I've managed to get the environment to solve but I need help debugging some issues testing it:
With `USE_SERVICEX = False`:

```python
### GLOBAL CONFIGURATION
# input files per process, set to e.g. 10 (smaller number = faster)
N_FILES_MAX_PER_SAMPLE = 5
# enable Dask
USE_DASK = True
# enable ServiceX
USE_SERVICEX = False

### ML-INFERENCE SETTINGS
# enable ML inference
USE_INFERENCE = True
# enable inference using NVIDIA Triton server
USE_TRITON = False
```
during the "Execute the data delivery pipeline" cell of the notebook things fail with the following error, which seems to indicate that the `servicex` library being installed in my environment is causing problems regardless of what the steering variables are set to (if I uninstall `servicex` and leave everything else in the environment the same, then I'm able to run without errors).
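One way to make a notebook robust to this situation (a hedged sketch, not the notebook's actual code: `servicex_available` and the guard pattern are illustrative) is to check at runtime whether the `servicex` package is importable at all, and only take the ServiceX code path when the package is both installed and explicitly requested by the steering flag:

```python
import importlib.util


def servicex_available() -> bool:
    """Return True if the servicex package is importable in this environment."""
    return importlib.util.find_spec("servicex") is not None


# Steering flag as in the notebook's global configuration.
USE_SERVICEX = False

# Hypothetical guard: the ServiceX path is only taken when both the flag
# is set and the package is actually installed.
use_servicex = USE_SERVICEX and servicex_available()
print(use_servicex)  # False whenever USE_SERVICEX is False
```

This doesn't explain why the mere presence of the package breaks the other code path, but it narrows the failure down to import-time side effects rather than the steering variables.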
A follow up question: Is there an analysis facility where the CMS ttbar open data workflow has been run with `USE_SERVICEX=True` and things worked? If so, I can try to diff the environment there in comparison to what I have, given that

```console
$ git grep --name-only "USE_SERVICEX"
analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.ipynb
analyses/cms-open-data-ttbar/ttbar_analysis_pipeline.py
analyses/cms-open-data-ttbar/utils/metrics.py
docs/facilityinstructions.rst
```

isn't a particularly deep list.
Now that #225 is merged, we can target the v3 API of the ServiceX frontend.
What ServiceX instance should I be targeting if I am running this on the UNL Open Data coffea-casa?
Should be https://opendataaf-servicex.servicex.coffea-opendata.casa/.
As for the other question about importing, that's with your own environment? Not sure what causes this but perhaps we can update to v3 and then debug that one.
@matthewfeickert The ServiceX instance was upgraded during the last couple of days, and now it works again. A config file is generated for you at the facility, so you should be able to just run the current version of the notebook without any issues.
22k lines of changes are coming from `pixi.lock`?
I am not sure why we need to remove `requirements.txt`? Andrea Sciaba, for example, was using it for his test setup; maybe we should keep it for backward compatibility for such a case?
I used `requirements.txt` to create a conda environment from scratch to run my I/O tests. I'm not familiar with pixi, but if it can be used for the exact same use case, that should be fine. Otherwise, keeping a `requirements.txt` might be handy.
@sciaba I agree with you :) and I was just telling Alex about your use case.
@matthewfeickert can we keep both environments in sync? https://github.com/prefix-dev/pixi/issues/1410
> Now that #225 is merged, we can target the v3 API of the ServiceX frontend.
Okay, let me refactor this to use v3. That will be easier.
> As for the other question about importing, that's with your own environment? Not sure what causes this but perhaps we can update to v3 and then debug that one.
@alexander-held Yes. I don't think that having a different version of the library will matter, but we'll see.
> You have a config file generated for you at the facility, so you should just run the current version of a notebook without any issues.
Thanks @oshadura! :pray:
> 22k lines of changes are coming from `pixi.lock`?
@oshadura Yes, lock files are long to begin with, and this is a multi-platform and multi-environment lock file.

I would suggest not trying to keep the old `requirements.txt` around, as it is not something that humans are going to be able to keep updated manually (there's no information encoded regarding the respective dependencies and requirements/constraints). Installing `pixi` is a pretty small ask IMO, and you can even do so on LXPLUS. Of course, if this is really needed we can keep it, but I would view it as a legacy file that we don't try to maintain.
> I used `requirements.txt` to create a conda environment from scratch to run my I/O tests. I'm not familiar with pixi, but if it can be used for the exact same use case, it should be fine.
@sciaba Yes, `pixi` will just skip steps here and get you a conda-like environment immediately. Check out https://pixi.sh/ to get started and feel free to ping me if you have questions.
> can we keep both environments in sync? prefix-dev/pixi#1410
The idea suggested in that issue goes in the wrong direction (`requirements.txt` -> `pixi.toml`) for what we want. The `pixi` manifest and lock files are multi-platform and multi-environment, and so cannot be generated from a single high-level environment file (like a `requirements.txt` or `environment.yml`).
When I rebase my PR I'll not remove the `requirements.txt`, and will let people do that in a follow-up PR.
I am suggesting to remove the jupyterlab environment or make it optional. This is very confusing for users, especially power users who want to test the notebook / Python script on a facility or in a particular environment where jupyterlab is not needed.
> I am suggesting to remove the jupyterlab environment or make it optional. This is very confusing for users, especially power users who want to test the notebook / Python script on a facility or in a particular environment where jupyterlab is not needed.
Okay, I can refactor this into another feature + environment. Why is this confusing for users though? I would think they should be unaware of its existence.
I tried to test, and `pixi run` automatically starts a jupyterlab session for me in the same terminal where I ran the command. If you have your own custom jupyterlab setup (e.g. another facility) or you just want to run a `.py` script, this is not the result you expect.
> I tried to test, and `pixi run` automatically starts a jupyterlab session for me in the same terminal where I ran the command. If you have your own custom jupyterlab setup (e.g. another facility) or you just want to run a `.py` script, this is not the result you expect.
Oh yeah. You wouldn't use `pixi run start` unless you were running locally.
@alexander-held @oshadura I've moved this out of draft and this is now ready for review. I've added notes for reviewers in the PR body, but all information should be clear from the additions to the README. If not, then I need to revise it.
(sorry, the last `--force-with-lease` pushes were fixing typos)
@alexander-held @oshadura If you have time to review this week that would be great. I'll also note for context here that I went with the idea of having things be at the top level for the whole project, but if it would be of more interest to have each analysis be a separate `pixi` project, that's possible too.
We will need to remove the coffea-casa part for now, since we don't have a solution for how to ship the pixi environment to workers; we can try to resolve that in the next pull request.
Why is that needed? The current workers are using the same coffea-casa environment as the coffea-casa client the user drops into at pod launch, right? You didn't ship the `analyses/cms-open-data-ttbar/requirements.txt` to them, right?
Already running with the new kernel, I see a version mismatch between client, scheduler, and workers...
```
/home/cms-jovyan/agc-servicex/.pixi/envs/cms-open-data-ttbar/lib/python3.9/site-packages/distributed/client.py:1391: VersionMismatchWarning: Mismatched versions found

+---------+----------------+----------------+---------+
| Package | Client         | Scheduler      | Workers |
+---------+----------------+----------------+---------+
| lz4     | 4.3.3          | 4.3.2          | None    |
| msgpack | 1.1.0          | 1.0.6          | None    |
| python  | 3.9.20.final.0 | 3.9.18.final.0 | None    |
| toolz   | 1.0.0          | 0.12.0         | None    |
| tornado | 6.4.1          | 6.3.3          | None    |
+---------+----------------+----------------+---------+
  warnings.warn(version_module.VersionMismatchWarning(msg[0]["warning"]))
```
Yes, I already evaluated that having the exact scheduler versions pinned here isn't needed. We can of course match things exactly (and I did earlier in this PR), but for runtime evaluation these differences don't seem to matter.
I tried to run locally and I see the following error:
```
2024-11-06 14:36:45,504 - distributed.worker - WARNING - Compute Failed
Key:       TtbarAnalysis-5c778b8f1e703fd7fe17b7cd2972d7ed
Function:  TtbarAnalysis
args:      ((WorkItem(dataset='wjets__nominal', filename='https://xrootd-local.unl.edu:1094//store/user/AGC/nanoAOD/WJetsToLNu_TuneCUETP8M1_13TeV-amcatnloFXFX-pythia8/cmsopendata2015_wjets_20547_PU25nsData2015v1_76X_mcRun2_asymptotic_v12_ext2-v1_10000_0004.root', treename='Events', entrystart=788276, entrystop=985345, fileuuid=b'#\x96\x8fdt\x8a\x11\xed\x8e[\xa6\xef]\x81\xbe\xef', usermeta={'process': 'wjets', 'xsec': 15487.164, 'nevts': 5913030, 'variation': 'nominal'}), b'\x04"M\x18H@{\x02"\x00\x00\x00\x00\x00a\x04\x94\x00\x00a\x80\x05\x95@A\x00\x01\x00\xe7\x8c\x17cloudpickle.\x0c\x00\xf6@\x94\x8c\x14_make_skeleton_class\x94\x93\x94(\x8c\x03abc\x94\x8c\x07ABCMeta\x94\x93\x94\x8c\rTtbarAnalysis\x94\x8c\x1acoffea.processor\n\x00D\x94\x8c\x0cP\x16\x00\xf2@ABC\x94\x93\x94\x85\x94}\x94\x8c\n__module__\x94\x8c\x08__main__\x94s\x8c c4f9f7e4f41d480e87c970e516ebf57a\x94Nt\x94R\x94h\x00\x8c\x0f\xa3\x00\xf2\x15_setstate\x94\x93\x94h\x10}\x94(h\x0ch\r\x8c\x08__init__\x94h\x00\x8c\x0e\xdb\x00\xf5Tfunction\x9
kwargs:    {}
Exception: 'AttributeError("module \'setuptools\' has no attribute \'extern\'")'
```
Just to confirm, you don't see this when running locally with an environment created from `analyses/cms-open-data-ttbar/requirements.txt`?
> We will need to remove the coffea-casa part for now since we don't have a solution on how to ship the pixi environment to workers and we can try to resolve it in the next pull request.

> Why is that needed? The current workers are using the same coffea-casa environment as the coffea-casa client the user drops into at pod launch, right? You didn't ship the `analyses/cms-open-data-ttbar/requirements.txt` to them, right?

> Already running with the new kernel, I see a version mismatch between client, scheduler, and workers...

> Yes, I already evaluated that having the exact scheduler versions pinned here isn't needed. We can of course match things exactly (and I did earlier in this PR), but for runtime evaluation these differences don't seem to matter.
The versions on client/scheduler and workers should be exactly the same, otherwise distributed Dask usually crashes (that is why we have a warning).
What is happening is that your client environment now has a different version of Python (and other packages) compared to my scheduler and worker environments on coffea-casa.
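This kind of drift can be checked from the client side before connecting, using only the standard library (a hedged sketch; `version_drift` is a hypothetical helper, and the expected versions below are the scheduler-side numbers from the mismatch warning quoted earlier in the thread):

```python
from importlib import metadata

# Scheduler-side versions reported in the VersionMismatchWarning.
scheduler_versions = {
    "lz4": "4.3.2",
    "msgpack": "1.0.6",
    "toolz": "0.12.0",
    "tornado": "6.3.3",
}


def version_drift(expected: dict) -> dict:
    """Map package -> (local, expected) for every package whose locally
    installed version differs from the expected one (local is None when
    the package is not installed at all)."""
    drift = {}
    for package, expected_version in expected.items():
        try:
            local_version = metadata.version(package)
        except metadata.PackageNotFoundError:
            local_version = None
        if local_version != expected_version:
            drift[package] = (local_version, expected_version)
    return drift
```

Calling `version_drift(scheduler_versions)` in the client environment would then surface exactly the packages that trigger the Dask warning.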
> I tried to run locally and I see the following error:

> Just to confirm, you don't see this when running locally with an environment created from `analyses/cms-open-data-ttbar/requirements.txt`?
coffea 0.7.x works only with `setuptools<71`, and I see that your environment has a higher version:
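(The environment listing referred to above is not reproduced here.) As a hedged illustration of the `setuptools<71` constraint, a minimal major-version check could look like the following; `setuptools_compatible` is a hypothetical helper, and a real resolver would of course use full PEP 440 version parsing rather than this simplification:

```python
def setuptools_compatible(version: str) -> bool:
    """Check the `setuptools<71` constraint needed by coffea 0.7.x.

    Minimal comparison on the major component only; real tooling would
    parse the full version (e.g. with packaging.version.Version).
    """
    major = int(version.split(".")[0])
    return major < 71


print(setuptools_compatible("70.3.0"))  # True: satisfies setuptools<71
print(setuptools_compatible("75.1.0"))  # False: too new for coffea 0.7.x
```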
Honestly, I think the main focus of this PR could be helping to run the AGC locally, since on a facility the environment is usually customized and not easy to handle this way (we have too many components). I would suggest dividing the PR functionality between the local setup and the facility setup, and following up on the facility setup in a separate PR?
> What is happening is that your client environment now has a different version of Python (and other packages) compared to my scheduler and worker environments on coffea-casa.
Yes, that is why I tested it to find versions that wouldn't crash, and noted them in the precautions section:

```toml
# coffea-casa precautions: keep the drift from scheduler environment small
pandas = ">=2.1.2, <2.2.4"
lz4 = ">=4.3.2, <4.3.4"
msgpack-python = ">=1.0.6, <1.1.1"
toolz = ">=0.12.0, <1.0.1"
tornado = ">=6.3.3, <6.4.2"
```
but I'll just change these to be exact versions rather than bounds. We can do the same with the CPython version, but as the versions differ only in the patch version (which is for security patches), the language feature set is the same across all Python `3.9.x` versions, and so packages should be insensitive to what that `x` in `3.9.x` is.
> I would suggest dividing the PR functionality between the local setup and the facility setup, and following up on the facility setup in a separate PR?
I think this already does that. This PR is just meant to give people an environment lock file that reproduces the same runtime state as the "Coffea-casa build with coffea 0.7.21/dask 2022.05.0/HTCondor and cheese" instance. It provides a more tractable way to describe the environment than the existing `requirements.txt`, but in the same way that you're not using that `requirements.txt` on the facility side to actually do anything, this PR wouldn't get used that way either. Though to verify that the environments match the client environment in the coffea-casa pod, you need to run with that client environment in coffea-casa.
- A `pixi` manifest (`pixi.toml`) and `pixi` lockfile (`pixi.lock`) to fully specify the project dependencies. This provides a multi-environment, multi-platform (Linux, macOS) lockfile.
- `latest`, `cms-open-data-ttbar`, and `local` pixi features and corresponding environments composed from the features. The `cms-open-data-ttbar` feature is designed to be compatible with the Coffea Base image which uses SemVer `coffea` (Coffea-casa build with coffea 0.7.21/dask 2022.05.0/HTCondor and cheese).
- The `cms-open-data-ttbar` feature has an `install-ipykernel` task that installs a kernel such that the pixi environment can be used on a coffea-casa instance from a notebook.
- A `start` task that will launch a JupyterLab session inside of the environment.

This will also be able to support the results of PR https://github.com/iris-hep/analysis-grand-challenge/pull/225 after that PR is merged with just a few updates from `pixi`. :+1: