mt82 opened 1 month ago
Hi,
What is included in "SummerShutdown"? Which summer? And what kind of data? I'm wondering if this would be useful for calibration studies.
--Mike
From the dimuon analyses' point of view, it is okay for those two to be deleted.
Thanks, Jamie
Hi Mike,
That folder has CAFs and calibration ntuples from the 2023 shutdown; see below for a disk usage breakdown. The calibration ntuples take the most space at 57 TB, while the CAFs combined are about 20 TB. If any of them are no longer used, it would be good to reclaim the space for the imminent production for the first oscillation analysis.
Giuseppe
4.8T /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/caf_blind
644G /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/caf_prescaled
5.3T /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/caf_unblind
57T /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/calibtuples
5.0T /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/flatcaf_blind
761G /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/flatcaf_prescaled
5.5T /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/flatcaf_unblind
498G /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_78_06/offbeambnbminbias/online_purity_histos
13G /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/icaruscode_v09_83_01/offbeamminbiascalib/online_purity_histos
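For reference, a breakdown like the one above can be regenerated with du over the two release subtrees (a sketch; du over /pnfs can be slow):

du -sh /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown/reconstructed/*/*/*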
Hi all,
Looking back at activities during the 2023 shutdown, I think most of those data were taken for trigger studies (adders etc.), so the trigger group conveners may have to be put in the loop; I will try to talk to them later today. In any case, from the calibration WG point of view, if we want to be extra cautious, we could keep the calib tuples and get rid of the CAFs and online purity histos, saving at least ~22 TB (the caf and flatcaf folders above sum to about 22.0 TB, plus ~0.5 TB of online purity histos).
F
Hi
I just reached out to Riccardo Triozzi for the trigger group, and he confirms there's no objection from their side either. He has his own slimmed ntuples saved for those data, and in any case they were processed with out-of-date code.
F
Dear all,
The online purity histos usually occupy a small amount of space, so if possible I would avoid deleting them. I also think these files are not saved on tape (can we check?). If there is nothing else that can be deleted, we can find a way to save them in a different place.
Best, Christian
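Whether a given file already has a tape copy can be checked in SAM: locate-file lists every registered location, and an enstore: entry would indicate tape. A sketch, assuming the datasets below live in the icarus SAM instance:

# take any file from the purity-histos dataset and ask SAM where it lives
f=$(samweb -e icarus list-definition-files keepup_SummerShutdown_v09_78_06_offbeambnbminbias_online_purity_histos | head -1)
samweb -e icarus locate-file "$f"   # an enstore:... line would mean a tape copy exists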
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_blind, 484534, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:484075:484075
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_prescaled, 484138, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:483701:483701
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_unblind, 484312, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:483897:483897
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_calibtuples, 484790, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:484350:484350
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_blind, 484086, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:483665:483665
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_prescaled, 483979, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:483587:483587
icarus, keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_unblind, 484030, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:483618:483618
icarus, keepup_SummerShutdown_v09_78_06_offbeambnbminbias_online_purity_histos, 453588, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:453393:453393
icarus, keepup_SummerShutdown_v09_83_01_offbeamminbiascalib_online_purity_histos, 12782, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:12782:12782
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_blind, 100529, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100528:100528
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_prescaled, 100507, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100506:100506
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_unblind, 100520, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100520:100520
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_calibtuples, 100545, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100545:100545
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_blind, 100498, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100498:100498
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_prescaled, 100493, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100491:100491
icarus, trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_unblind, 100498, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:100497:100497
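For the record, the per-dataset file counts can be cross-checked directly against SAM with count-definition-files (the same command used later in this thread); a sketch for one of the definitions above:

samweb -e icarus count-definition-files keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_blind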
Move the following datasets to tape:
icarus, keepup_SummerShutdown_v09_78_06_offbeambnbminbias_online_purity_histos, 453588, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:453393:453393
icarus, keepup_SummerShutdown_v09_83_01_offbeamminbiascalib_online_purity_histos, 12782, /pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown:12782:12782
Let me try to make this clear to myself. We proposed to delete the files in this folder:
/pnfs/sbn/data/sbn_fd/poms_production/data/SummerShutdown
In samweb, this folder corresponds to these datasets:
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_blind
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_prescaled
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_unblind
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_calibtuples
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_blind
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_prescaled
keepup_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_unblind
keepup_SummerShutdown_v09_78_06_offbeambnbminbias_online_purity_histos
keepup_SummerShutdown_v09_83_01_offbeamminbiascalib_online_purity_histos
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_blind
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_prescaled
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_caf_unblind
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_calibtuples
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_blind
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_prescaled
trigger_production_SummerShutdown_v09_78_06_offbeambnbminbias_flatcaf_unblind
People said it may be worth copying these files to tape (at least the purity monitor files). So the question was, and is: what is the best way to move a dataset to tape?
In addition we proposed to delete this folder:
/pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics
This folder, if I am not wrong, doesn't have any samweb dataset associated with it. So, given that it may also be worth moving to tape, the question was, and is: what is the best way to copy a folder to tape?
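For the first case, where the files are tracked in SAM, one crude approach would be to loop over each definition, copy the files into a tape-backed area with ifdh, and register the new location; a sketch with a placeholder destination (the FTS route discussed below automates exactly this):

# copy one SAM dataset to a tape-backed archive area and record the new location
DEF=keepup_SummerShutdown_v09_78_06_offbeambnbminbias_online_purity_histos
DEST=/pnfs/icarus/archive/sbn/sbn_fd/poms_production/data/SummerShutdown   # placeholder path
for f in $(samweb -e icarus list-definition-files "$DEF"); do
  url=$(samweb -e icarus get-file-access-url "$f" | head -1)   # first available access URL
  ifdh cp "$url" "$DEST/$f"
  samweb -e icarus add-file-location "$f" "enstore:$DEST"
done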
From @vitodb: I did some tests with FTS; playing a bit with the configuration, we can use it for the first case. It allows choosing the destination path based on file metadata, which makes it quite flexible.
Checking one file in /pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics
$ ls /pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics/mc/reconstructed/icaruscode_v09_72_00_03p01/stage1/00/0a/
detsim_2d_icarus_detsim_stage0_stage1_68747083_0-7c6a8a5f-0e3d-4664-b5f6-d7e4d60f15f3.root
I see this is in the SBN SAM instance, where there are quite a few files in the same dataset:
[vito@icarusgpvm04 ~]$ samweb -e sbn get-metadata --json detsim_2d_icarus_detsim_stage0_stage1_68747083_0-7c6a8a5f-0e3d-4664-b5f6-d7e4d60f15f3.root | jq -r '."Dataset.Tag"'
icaruspro_production_v09_72_00_03p01_2023A_ICARUS_BNB_plus_cosmics_stage1
[vito@icarusgpvm04 ~]$ samweb -e sbn count-definition-files icaruspro_production_v09_72_00_03p01_2023A_ICARUS_BNB_plus_cosmics_stage1
7337
With FTS we should be able to copy files to the tape area and let FTS remove the files from the origin after a configurable time. Because the files are in two different SAM instances (SBN and ICARUS), we need two FTS instances, one for each SAM instance. We already got nodes on which to set up FTS for the two instances; we just need to set them up and start them.
To move a folder to tape, is it simply:
cp -r /pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics /pnfs/icarus/archive/sbn/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics
?
The cp would copy the files using the /pnfs mount point, which is not that great; it could also take a long time to run interactively, and the SAM locations for those files would need to be updated. I'll try to set up FTS on icarusprodgpvm01; we got this node specifically to run FTS but didn't need it until now. Reference instructions are here: https://cdcvs.fnal.gov/redmine/projects/filetransferservice/wiki
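For completeness, if files were copied by hand as in the cp above, the SAM locations could presumably be fixed afterwards with samweb's location commands; a sketch for the stage1 file above, assuming remove-file-location behaves as its name suggests and using the dcache:/enstore: location prefixes seen elsewhere in SAM:

F=detsim_2d_icarus_detsim_stage0_stage1_68747083_0-7c6a8a5f-0e3d-4664-b5f6-d7e4d60f15f3.root
samweb -e sbn locate-file "$F"   # check the currently registered locations first
# register the tape copy, then retire the disk location once the copy is verified
samweb -e sbn add-file-location "$F" enstore:/pnfs/icarus/archive/sbn/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics/mc/reconstructed/icaruscode_v09_72_00_03p01/stage1/00/0a
samweb -e sbn remove-file-location "$F" dcache:/pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics/mc/reconstructed/icaruscode_v09_72_00_03p01/stage1/00/0a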
I set up the FTS container on icarusprodgpvm01. I adapted the procedure from the DAQ one; the container setup/handling scripts are at ~icaruspro/FTS/FTS_config. Starting the container amounts to copying the required config files from ~icaruspro/FTS/FTS_config to ~icaruspro/FTS/icarusprodgpvm01. The container is running:
[17:56:47 ~/FTS/FTS_config]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
86aa1b243ce0 imageregistry.fnal.gov/sam-zerg/fermifts:latest 10 minutes ago Up 10 minutes 0.0.0.0:8787->8787/tcp fts_icarusprodgpvm0
For the test I configured FTS to copy files from /exp/icarus/data/users/icaruspro/test/data/fts/dropbox/test/ to dcache:/pnfs/icarus/scratch/icaruspro/fts/test_dest.
The FTS monitoring page is at http://icarusprodgpvm01.fnal.gov:8787/fts/status (requires VPN or the FNAL network). FTS logs are configured to be stored in /exp/icarus/data/users/icaruspro/FTS/logs/. During the setup I figured out that /pnfs is not mounted on the node, so at the moment we cannot scan a dCache folder; I'll open a ticket tomorrow. If needed we can set up a similar FTS instance for SBN on sbnprodgpvm01, running as icaruspro.
Hi Seth, thank you for looking at this. We can add the following mount point: /pnfs/sbn
The others mounted on GPVMs are used only to access flux files and other accessory files; /pnfs/sbnd is there in case users need to access some of their files from the other experiment. For FTS we would not need any of these.
Thanks, Vito
Ok thanks, the puppet work is done. Forwarding ticket to PNFS admins to set up the exports:
pnfs-stken:/icarus pnfs-stken:/sbn
To be made available to icarusprodgpvm01.
I'm on the watch list, so once that's done I'll run the mount commands.
The mount points /pnfs/icarus and /pnfs/sbn are now available on icarusprodgpvm01. We can do some basic tests of FTS to check that all is in good shape, so the ticket can be closed.
1st test
# populate the test tree with 30 copies of one reconstructed file
mkdir -p /pnfs/icarus/scratch/users/icaruspro/fts_test/test_01/{a,b,c,d,e}/{1,2,3,4,5,6}
for l in a b c d e; do
  for n in 1 2 3 4 5 6; do
    cp /pnfs/sbn/data/sbn_fd/poms_production/2023A/ICARUS_BNB_plus_cosmics/mc/reconstructed/icaruscode_v09_72_00_03p01/stage1/00/0a/detsim_2d_icarus_detsim_stage0_stage1_68747083_0-7c6a8a5f-0e3d-4664-b5f6-d7e4d60f15f3.root /pnfs/icarus/scratch/users/icaruspro/fts_test/test_01/${l}/${n}/test_${l}${n}.root
  done
done
fts.conf
[main]
experiment=icarus
log-file = /opt/fts/fts_logs/fts_${hostname}
filetypes = test
samweb-url = https://samicarus.fnal.gov:8483/sam/icarus/api
sam-web-registry-base-url = https://samicarus.fnal.gov:8483/sam_web_registry
x509-client-certificate = /opt/fts/fts_proxy/icaruspro.Production.proxy
x509-client-key = /opt/fts/fts_proxy/icaruspro.Production.proxy
local-db = /opt/fts/fts_db/${hostname}.db
enable-web-interface = True
enable-state-graph = True
web-interface-port = 8787
#allowed-web-ip = 131.225.*
transfer-limits = enstore:1
max-transfer-limit = 1
transfer-retries = 3
transfer-retry-interval = 300
scanner-queue-limit = 40000
scanner-max-limit = 250000
graphite-stats-server = fifemondata.fnal.gov:2004
service_name = fts_${hostname}
[filetype test]
scan-dirs = /storage
scan-interval = 10
scan-delay = 10
scan-file-patterns = test*.root
scan-exclude-file-patterns = *.json RootDAQOut-*.root TFileService-*.root
extract-metadata = True
metadata-extractor = json-file-wait
transfer-to = dcache:/pnfs/icarus/persistent/users/icaruspro/fts_test
# 0.01 day is roughly 15 minutes: source files are removed shortly after a successful transfer
erase-after-days = .01
run_fts_container.sh
#!/bin/bash
#----------------------------------------
# Start FTS from a podman container image
#----------------------------------------
#
# Adapted from ICARUS FTS setup on evb05
#
# Disclaimer:
# FTS is old, and cannot be run on EL9 without significant complications.
# A prebuilt FTS container image is provided to work with Podman.
# To install on if-globusdtn machine, run as user `icaruspro`:
# podman pull imageregistry.fnal.gov/sam-zerg/fermifts:latest
#
# Note:
# A -v mount line is needed for each local or pnfs directory that FTS needs.
# These can be deduced from the configuration files:
# fts.conf
# sam_cp.cfg
#
host=$(hostname -s)
echo "Starting FTS podman container image on ${host}"
# setting the volume paths inside the container to be the same
# matching between actual location and relative paths inside
# syntax: -v /HOST-DIR:/CONTAINER-DIR
# these are real locations on the current host
# using the same as the legacy FTS setup
fts_x509_proxy_dir=/opt/icaruspro/
fts_log_dir=/exp/icarus/data/users/icaruspro/FTS/logs/${host}/fts_logs
fts_db_dir=/var/tmp
fts_samcp_log_dir=/var/tmp
fts_config_dir=~icaruspro/FTS/${host}
fts_dropbox_dir=/pnfs/icarus/scratch/users/icaruspro/fts_test
# copy config files into host-specific config directory
# this is not strictly necessary, but keeps things tidy?
mkdir -p ${fts_config_dir}
cp ${PWD}/fts.conf ${PWD}/sam_cp.cfg ${fts_config_dir}/
# additional things the run command does:
# - set hostname inside the container as ${host}
# - set $USER inside the container as current user
# - set container name to fts_${host}
# - expose port 8787 for localhost:8787 status page
podman run \
-v ${fts_log_dir}:/opt/fts/fts_logs \
-v ${fts_db_dir}:/opt/fts/fts_db \
-v ${fts_config_dir}:/opt/fts/fts_config \
-v ${fts_dropbox_dir}:/storage \
-v ${fts_samcp_log_dir}:/var/tmp \
-v ${fts_x509_proxy_dir}:/opt/fts/fts_proxy \
-p 8787:8787 \
-d \
--network slirp4netns:port_handler=slirp4netns \
--hostname ${host} \
--env USERNAME=${USER} \
--name fts_${host} \
--replace \
fermifts
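To start and sanity-check the container, something like the following should work (a sketch; container name and port per the script, status URL per the note above):

./run_fts_container.sh                             # starts fts_$(hostname -s) detached
podman ps --filter name=fts_                       # the container should show as Up
curl -s http://localhost:8787/fts/status | head    # status page should respond locally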
fts_icarusprodgpvm01.2024-11-14.log
grep test_a1.root /exp/icarus/data/users/icaruspro/FTS/logs/icarusprodgpvm01/fts_logs/fts_icarusprodgpvm01.2024-11-14.log
2024-11-14 09:01:24+0000 [-] Found new file /storage/test_01/a/1/test_a1.root
2024-11-14 09:01:24+0000 [-] New file state for test_a1.root at /storage/test_01/a/1
2024-11-14 09:01:24+0000 [-] Added test_a1.root to metadata queue; queue length is now 1
2024-11-14 09:01:39+0000 [HTTP11ClientProtocol (TLSMemoryBIOProtocol),client] No existing metadata for test_a1.root
2024-11-14 09:01:39+0000 [HTTP11ClientProtocol (TLSMemoryBIOProtocol),client] Extracting metadata from /storage/test_01/a/1/test_a1.root
2024-11-14 09:01:39+0000 [-] Trying to extract metadata for 'test_a1.root'
2024-11-14 09:01:39+0000 [-] Trying mdfilename /storage/test_01/a/1/./test_a1.root.json
2024-11-14 09:01:39+0000 [-] Trying mdfilename /storage/test_01/a/1/./test_a1.root.metadata
2024-11-14 09:01:39+0000 [-] Metadata file not found for 'test_a1.root'; will try again in 10 seconds
2024-11-14 09:01:49+0000 [-] Trying to extract metadata for 'test_a1.root'
2024-11-14 09:01:49+0000 [-] Trying mdfilename /storage/test_01/a/1/./test_a1.root.json
2024-11-14 09:01:49+0000 [-] Trying mdfilename /storage/test_01/a/1/./test_a1.root.metadata
2024-11-14 09:01:49+0000 [-] Metadata file not found for 'test_a1.root'; will try again in 10 seconds
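The json-file-wait extractor keeps polling for a metadata sidecar next to the data file (the .json/.metadata names tried in the log above), which these hand-copied test files did not have. A minimal sketch of such a sidecar, written on the host side of the dropbox mount; the field names follow common SAM metadata conventions, but exactly which fields the icarus instance requires is an assumption here:

cat > /pnfs/icarus/scratch/users/icaruspro/fts_test/test_01/a/1/test_a1.root.json <<'EOF'
{
  "file_name": "test_a1.root",
  "file_type": "data",
  "file_format": "artroot",
  "data_tier": "test",
  "group": "icarus"
}
EOF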
Now I get:
2024-11-15 11:09:05+0000 [ProcessRequester,0,] Transfer from /storage/test_01/a/2/test_a2.root to dcache:/pnfs/icarus/persistent/users/icaruspro/fts_test failed
2024-11-15 11:09:05+0000 [ProcessRequester,0,] Failed to create destination directory: dcache:/pnfs/icarus/persistent/users/icaruspro/fts_test
Reason:[Failure instance: Traceback (failure with no frames): <class 'fts.util.CommandError'>: Running command ifdh mkdir_p dcache:/pnfs/icarus/persistent/users/icaruspro/fts_test failed with exit code 255
The same error is obtained running the command in a shell:
SL7> [icarusgpvm01 05:55:00 AM] ~ > ifdh mkdir_p dcache:/pnfs/icarus/persistent/users/icaruspro/test; echo $?
Working version
[06:55:33 ~/FTS/FTS_config]$ cat sam_cp.cfg
[sam_cp]
logfile=/var/tmp/sam_cp_${USERNAME}.log
debug=1
[dcache_scratch_dst]
dstre: dcache:/pnfs/icarus/scratch/
dstrepl: root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/scratch/
[dcache_persistent_dst]
dstre: dcache:/pnfs/icarus/persistent/
dstrepl: root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/persistent/
[dcache_src]
srcre: dcache:/pnfs/icarus/
srcrepl: root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/
[06:55:40 ~/FTS/FTS_config]$ cat fts.conf
[main]
experiment=icarus
log-file = /opt/fts/fts_logs/fts_${hostname}
filetypes = test
samweb-url = https://samicarus.fnal.gov:8483/sam/icarus/api
sam-web-registry-base-url = https://samicarus.fnal.gov:8483/sam_web_registry
x509-client-certificate = /opt/fts/fts_proxy/icaruspro.Production.proxy
x509-client-key = /opt/fts/fts_proxy/icaruspro.Production.proxy
local-db = /opt/fts/fts_db/${hostname}.db
enable-web-interface = True
enable-state-graph = True
web-interface-port = 8787
#allowed-web-ip = 131.225.*
transfer-limits = enstore:1
max-transfer-limit = 1
transfer-retries = 3
transfer-retry-interval = 300
scanner-queue-limit = 40000
scanner-max-limit = 250000
graphite-stats-server = fifemondata.fnal.gov:2004
service_name = fts_${hostname}
[filetype test]
scan-dirs = /storage
scan-interval = 10
scan-delay = 10
scan-file-patterns = test*.root
scan-exclude-file-patterns = *.json RootDAQOut-*.root TFileService-*.root
extract-metadata = True
metadata-extractor = json-file-wait
transfer-to = dcache:/pnfs/icarus/persistent/users/icaruspro/fts_test
erase-after-days = .01
[06:55:47 ~/FTS/FTS_config]$ cat run_fts_container.sh
#----------------------------------------
# Start FTS from a podman container image
#----------------------------------------
#
# Adapted from ICARUS FTS setup on evb05
#
# Disclaimer:
# FTS is old, and can not be run on EL9 without significant complications.
# A prebuilt FTS container image is provided to work with Podman.
# To install on if-globusdtn machine, run as user `icaruspro`:
# podman pull imageregistry.fnal.gov/sam-zerg/fermifts:latest
#
# Note:
# A -v mount line is needed for each local or pnfs directory that FTS needs.
# These can be deduced from the configuration files:
# fts.conf
# sam_cp.cfg
#
host=$(hostname -s)
echo "Starting FTS podman container image on ${host}"
# setting the volume paths inside the container to be the same
# matching between actual location and relative paths inside
# syntax: -v /HOST-DIR:/CONTAINER-DIR
# these are real locations on the current host
# using the same as the legacy FTS setup
fts_x509_proxy_dir=/opt/icaruspro/
fts_log_dir=/exp/icarus/data/users/icaruspro/FTS/logs/${host}/fts_logs
fts_db_dir=/var/tmp
fts_samcp_log_dir=/var/tmp
fts_config_dir=~icaruspro/FTS/${host}
fts_dropbox_dir=/pnfs/icarus/scratch/users/icaruspro/fts_test
# copy config files into host-specific config directory
# this is not stricly necessary, but keeps things tidy?
mkdir -p ${fts_config_dir}
cp ${PWD}/fts.conf ${PWD}/sam_cp.cfg ${fts_config_dir}/
# additional things the run command does:
# - set hostname inside the container as ${host}
# - set $USER inside the container as current user
# - set container name to fts_${host}
# - expose port 8787 for localhost:8787 status page
podman run \
-v ${fts_log_dir}:/opt/fts/fts_logs \
-v ${fts_db_dir}:/opt/fts/fts_db \
-v ${fts_config_dir}:/opt/fts/fts_config \
-v ${fts_dropbox_dir}:/storage \
-v ${fts_samcp_log_dir}:/var/tmp \
-v ${fts_x509_proxy_dir}:/opt/fts/fts_proxy \
-p 8787:8787 \
-d \
--network slirp4netns:port_handler=slirp4netns \
--hostname ${host} \
--env USERNAME=${USER} \
--name fts_${host} \
--replace \
fermifts
However, all files are copied into one single folder. Is that OK for us?
The WG agreed on the destination folder definition:
transfer-to = enstore:/pnfs/icarus/archive/sbn/${sbn_dm.detector}/${file_type}/${data_tier}/${data_stream}/${icarus_project.version}/${icarus_project.name}/${icarus_project.stage}/${run_number[8/2]}
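For concreteness, with metadata values mirroring the SummerShutdown paths above (detector sbn_fd, tier reconstructed, stream offbeambnbminbias, version v09_78_06) plus hypothetical project name/stage values, and assuming ${run_number[8/2]} zero-pads the run number to 8 digits split into 2-digit subdirectories, a run-9867 file would land under something like:

enstore:/pnfs/icarus/archive/sbn/sbn_fd/data/reconstructed/offbeambnbminbias/v09_78_06/SummerShutdown/caf/00/00/98/67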
@vitodb
Maybe we have to add a section in sam_cp.cfg, something like:
[enstore_dst]
dstre: enstore:/pnfs/icarus/archive/
dstrepl: root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/archive/
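If this stanza works like the existing ones, the dstre/dstrepl pair amounts to a prefix match-and-replace on the FTS destination string; roughly equivalent to the following (illustration only, not how sam_cp is actually invoked):

echo "enstore:/pnfs/icarus/archive/sbn/sbn_fd/test.root" \
  | sed 's|^enstore:/pnfs/icarus/archive/|root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/archive/|'
# -> root://fndcadoor.fnal.gov:1094/pnfs/fnal.gov/usr/icarus/archive/sbn/sbn_fd/test.root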
Dear all, the disk is again in a critical situation. As a first step, we propose to delete the following datasets/paths:
Please let us know, as soon as possible, if you have any concerns.
Deletion will start in 6h.
Cheers, Matteo