There has been a "feature" with the cvmfs mount on containers: the directory is not found on the first try, but running

ls -l /cvmfs/

before the run ensures that it is available (http://opendata.cern.ch/docs/cms-guide-for-condition-database). Maybe check if this is the case.
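In a workflow step this amounts to touching the repositories before the actual job starts. A minimal sketch, assuming a CMSSW job; the configuration file name is taken from the listing further below and may differ in your setup:

$ ls -l /cvmfs/
$ ls -l /cvmfs/cms-opendata-conddb.cern.ch/
$ cmsRun demoanalyzer_cfg.py

The two ls calls force the lazily triggered autofs mount to resolve before cmsRun tries to read the condition data.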
These commands are already included. When running in the CMS VM, my output is as expected:
$ ls -l
total 56240
-rw-r--r-- 1 cms-opendata cms-opendata 332 Aug 8 10:38 BuildFile.xml
lrwxrwxrwx 1 cms-opendata cms-opendata 53 Aug 8 10:38 FT_53_LV5_AN1 -> /cvmfs/cms-opendata-conddb.cern.ch/FT_53_LV5_AN1_RUNA
lrwxrwxrwx 1 cms-opendata cms-opendata 56 Aug 12 09:53 FT_53_LV5_AN1_RUNA.db -> /cvmfs/cms-opendata-conddb.cern.ch/FT_53_LV5_AN1_RUNA.db
-rw-r--r-- 1 cms-opendata cms-opendata 1159 Aug 8 10:38 analyzer_DoubleElectron11_rawreco.py
drwxr-xr-x 2 cms-opendata cms-opendata 4096 Aug 8 10:38 histos
drwxr-xr-x 2 cms-opendata cms-opendata 4096 Aug 8 10:38 python
-rw-r--r-- 1 cms-opendata cms-opendata 3276 Aug 8 10:38 raw_DoubleElectron11.py
-rw-r--r-- 1 cms-opendata cms-opendata 57561517 Aug 8 11:03 reco_DoubleElectron11_AOD.root
drwxr-xr-x 2 cms-opendata cms-opendata 4096 Aug 8 10:38 src
$ ls -l /cvmfs/
total 23
drwxr-xr-x 8 root root 4096 Jan 13 2014 cernvm-prod.cern.ch
drwxr-xr-x 12 989 984 4096 Jul 12 2016 cms-ib.cern.ch
drwxr-xr-x 12 989 984 4096 Dec 16 2015 cms-opendata-conddb.cern.ch
drwxr-xr-x 61 989 984 4096 Aug 29 2014 cms.cern.ch
drwxr-xr-x 3 989 984 4096 May 28 2014 cvmfs-config.cern.ch
But when running it with REANA (or just in the docker image), what I get is:
$ ls -l
total 44
-rw-r--r-- 1 cmsusr cmsusr 331 Aug 12 09:46 BuildFile.xml
-rw-r--r-- 1 cmsusr cmsusr 16435 Aug 12 09:46 DoubleMu.root
lrwxrwxrwx 1 cmsusr cmsusr 53 Aug 12 09:47 FT_53_LV5_AN1 -> /cvmfs/cms-opendata-conddb.cern.ch/FT_53_LV5_AN1_RUNA
lrwxrwxrwx 1 cmsusr cmsusr 56 Aug 12 09:47 FT_53_LV5_AN1_RUNA.db -> /cvmfs/cms-opendata-conddb.cern.ch/FT_53_LV5_AN1_RUNA.db
-rw-r--r-- 1 cmsusr cmsusr 2560 Aug 12 09:46 README.md
drwxr-xr-x 2 cmsusr cmsusr 4096 Aug 12 09:46 datasets
-rw-r--r-- 1 cmsusr cmsusr 3589 Aug 12 09:46 demoanalyzer_cfg.py
drwxr-xr-x 2 cmsusr cmsusr 4096 Aug 12 09:46 python
drwxr-xr-x 2 cmsusr cmsusr 4096 Aug 12 09:46 src
$ ls -l /cvmfs/
total 0
If you just run the docker image, CVMFS is not mounted, and therefore there's only an empty directory. If you mount CVMFS via REANA (in particular cms-opendata-conddb.cern.ch), you should see a directory structure though. From looking at your config it seems you do not mount CVMFS.
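As a side note, to test the container outside REANA you can bind-mount a host CVMFS into it. A sketch, assuming CVMFS is installed and the repositories are already mounted on the host; the image name is only illustrative:

$ docker run -it -v /cvmfs:/cvmfs cmsopendata/cmssw_5_3_32 /bin/bash
# inside the container:
$ ls -l /cvmfs/cms-opendata-conddb.cern.ch/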
> From looking at your config it seems you do not mount CVMFS.
Not sure what you mean here, as cms-opendata-conddb.cern.ch is specified in the resources of reana.yaml.
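For concreteness, the relevant part of my reana.yaml is as follows. A sketch with the other sections omitted; the exact layout of the surrounding keys may differ:

$ cat reana.yaml
# ... inputs and other sections omitted ...
workflow:
  type: cwl
  resources:
    cvmfs:
      - cms-opendata-conddb.cern.ch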
@diegodelemos When inspecting the kubernetes pod I see that it should mount cvmfs, but I don't think it does so.
$ kubectl describe pod batch-cwl-d32746b5-e5ec-40dc-bc5d-1dca47aa1bef-t5qnx
Environment:
SHARED_VOLUME_PATH: /var/reana
REANA_USER_ID: 00000000-0000-0000-0000-000000000000
REANA_MOUNT_CVMFS: ['cms-opendata-conddb.cern.ch']
JOB_CONTROLLER_SERVICE_PORT_HTTP: 5000
JOB_CONTROLLER_SERVICE_HOST: localhost
Mounts:
/var/reana/users/00000000-0000-0000-0000-000000000000/workflows/d32746b5-e5ec-40dc-bc5d-1dca47aa1bef from reana-shared-volume (rw,path="users/00000000-0000-0000-0000-000000000000/workflows/d32746b5-e5ec-40dc-bc5d-1dca47aa1bef")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2szhg (ro)
Where should we look for more info?
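One way to double-check is to ask kubectl for the pod's volumes and mount paths directly. A sketch, reusing the pod name from above:

$ kubectl get pod batch-cwl-d32746b5-e5ec-40dc-bc5d-1dca47aa1bef-t5qnx -o jsonpath='{.spec.volumes[*].name}'
$ kubectl get pod batch-cwl-d32746b5-e5ec-40dc-bc5d-1dca47aa1bef-t5qnx -o jsonpath='{.spec.containers[*].volumeMounts[*].mountPath}'

If neither output mentions cvmfs, the REANA_MOUNT_CVMFS environment variable is set but never translated into an actual volume.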
Update: In the config file of reana-commons, there was no cms-opendata-conddb.cern.ch.
This has been fixed for local testing by this PR; closing for now.
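For anyone hitting the same symptom, here is a quick way to verify whether a repository name is known to reana-commons. A sketch, assuming a local checkout:

$ git clone https://github.com/reanahub/reana-commons
$ grep -rn "cms-opendata-conddb" reana-commons/

An empty grep result means the repository name is missing from the configuration, so the mount request will not be honoured.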
What is needed
I am dealing with the problem mentioned by @katilp in #2 and clarified by @tiborsimko. Following the files from #3, I am using the reana resources as shown above, but still get the following error:
Two approaches
1. Local
Note: This is just a "workaround", as production will require the driver approach.
I've tried to mount cvmfs (already installed on the machine) directly on minikube (see the sketch below). In this case, the reana.yaml file does not specify resources.
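A sketch of such a mount, assuming the repository is first mounted on the host via the cvmfs mount helper; note that minikube mount stays in the foreground while the workflow runs:

$ sudo mkdir -p /cvmfs/cms-opendata-conddb.cern.ch
$ sudo mount -t cvmfs cms-opendata-conddb.cern.ch /cvmfs/cms-opendata-conddb.cern.ch
$ minikube mount /cvmfs:/cvmfs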
2. Using the driver

When specifying resources, I get a kubernetes PersistentVolumeClaim error. Here is the info: the pods of interest are those spawned for this workflow. The workflow pod correctly indicates that the cvmfs volume has to be mounted:
But then the next spawned pod is pending because of the PersistentVolumeClaim. This pending pod makes my workflow look as if it's working to infinity and beyond:
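The pending claim can be inspected from the PVC side. A sketch; <claim-name> stands for whatever claim name kubectl get pvc reports:

$ kubectl get pvc
$ kubectl describe pvc <claim-name>
$ kubectl get events --sort-by=.metadata.creationTimestamp

The describe output and the event log usually say why the claim is unbound, e.g. a missing storage class or CSI driver.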