Closed: boegel closed this issue 3 years ago
By adding the following commands to `compute_image_extra.sh` and then running `sudo /usr/local/bin/run-packer`, the worker nodes get access to the EESSI pilot repository:
```
# install CernVM-FS
sudo dnf install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo dnf install -y cvmfs
sudo dnf install -y https://github.com/EESSI/filesystem-layer/releases/download/v0.2.3/cvmfs-config-eessi-0.2.3-1.noarch.rpm

# configure CernVM-FS (no proxy, 10GB quota for the CernVM-FS cache)
sudo bash -c "echo 'CVMFS_HTTP_PROXY=DIRECT' > /etc/cvmfs/default.local"
sudo bash -c "echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local"
sudo cvmfs_config setup
```
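As a quick sanity check right after provisioning (optional, and assuming the EESSI config RPM above has installed the domain configuration for `pilot.eessi-hpc.org`), the repository can be probed directly:

```
# mounts the repository and checks that it is reachable (exact output may vary)
sudo cvmfs_config probe pilot.eessi-hpc.org
```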
To check that it's working (after rebuilding the node image with `run-packer`):

* Start an interactive job:
  ```
  srun --pty /bin/bash
  ```
* Source the EESSI init script to set up the environment:
  ```
  source /cvmfs/pilot.eessi-hpc.org/latest/init/bash
  ```
* Check the available modules:
  ```
  module avail openfoam gromacs tensorflow
  ```
* Load a module and go!
  ```
  module load GROMACS/2020.1-foss-2020a-Python-3.8.2
  gmx ...
  ```
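The same approach should also work in batch jobs; a minimal sketch (the job name, resource requests, and `gmx --version` call are just placeholder examples, not something tested here) would be to source the EESSI init script inside the job script:

```
#!/bin/bash
#SBATCH --job-name=gmx-test   # hypothetical job name
#SBATCH --ntasks=1            # adjust resources as needed

# set up the EESSI environment inside the job
source /cvmfs/pilot.eessi-hpc.org/latest/init/bash

# load a module provided by EESSI and run it
module load GROMACS/2020.1-foss-2020a-Python-3.8.2
gmx --version
```

and submit it with `sbatch`.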
If CernVM-FS were available on the nodes, then software repositories like EESSI could be mounted easily, which would make a CitC cluster even more like a real cluster, since it would basically come with (properly optimized) scientific software like GROMACS, R, TensorFlow, OpenFOAM, etc. out of the box... 😲
I think this can mostly be done via the `compute_image_extra.sh` script that is used to provision nodes, so it may be more of a documentation issue than anything else (or perhaps a separate script could be provided that can be called from `compute_image_extra.sh` to opt in to it). I'll try to get this working myself, and post more info in this issue, so you can make an informed decision about the best way forward...
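As a rough sketch of the opt-in idea (the script name `setup_eessi.sh` is just an assumption, not an existing part of CitC), something like this could be shipped alongside the node image scripts and then called from `compute_image_extra.sh`:

```
#!/bin/bash
# setup_eessi.sh - hypothetical opt-in helper, called from compute_image_extra.sh
set -e

# install CernVM-FS and the EESSI configuration package
sudo dnf install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm
sudo dnf install -y cvmfs
sudo dnf install -y https://github.com/EESSI/filesystem-layer/releases/download/v0.2.3/cvmfs-config-eessi-0.2.3-1.noarch.rpm

# minimal client configuration: no proxy, ~10GB cache quota (value is in MB)
sudo bash -c "cat > /etc/cvmfs/default.local << 'EOF'
CVMFS_HTTP_PROXY=DIRECT
CVMFS_QUOTA_LIMIT=10000
EOF"

sudo cvmfs_config setup
```

`compute_image_extra.sh` would then only need to run this script when the user opts in.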