(CERN AFS support here..)
We occasionally have CMS users asking for more AFS space for running HiggsDNA. Could you perhaps document a "best practice" for your users at CERN that:

- does not, as far as possible, rely on conda or pip installations in their AFS home directory, since we have seen that parallel access from batch jobs will kill the AFS server and get the user's account blocked; please instead promote the use of a suitable CVMFS area, and/or request that the shared tools be installed there;
- uses EOS directly for storing any input and output data (parquet files, ROOT, etc.); a minimal sketch of what this could look like follows below.
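For illustration, here is a minimal sketch of the second point. It assumes the Python stack comes from a shared CVMFS area (for example an LCG view) rather than a conda/pip install under $HOME, and it writes parquet output straight to an EOS path. The paths and the awkward-based output are illustrative assumptions, not the actual HiggsDNA API:

```python
# Minimal sketch (illustrative, not the HiggsDNA API): write analysis output
# directly to EOS instead of the AFS home directory.
# Assumes the Python stack is provided by a shared CVMFS area (e.g. an LCG view),
# not by a conda/pip install under $HOME.
import os
import awkward as ak

# Illustrative EOS destination; replace with your own /eos area.
out_dir = "/eos/user/y/yourname/higgsdna_output"
os.makedirs(out_dir, exist_ok=True)

# Placeholder events standing in for real HiggsDNA output.
events = ak.Array({"pt": [25.3, 40.1, 31.7], "eta": [0.1, -1.2, 0.8]})

# Parquet files land on EOS, so nothing large accumulates under the AFS home directory.
ak.to_parquet(events, os.path.join(out_dir, "events.parquet"))
```

The same idea applies to inputs: read them directly from EOS rather than staging files into AFS.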
If this is not possible, could you please internally escalate to CMS computing? Thanks in advance.

Just to say that while the tutorial still promotes conda (i.e. filling up and overloading a shared-filesystem home directory), the HiggsDNA docs do nowadays indeed have alternatives: