Closed mike2vandy closed 1 year ago
hello,
sra-tools also has cache directories it uses when downloading data; it could be those filling up. Check `vdb-config -i`:
https://github.com/ncbi/sra-tools/wiki/03.-Quick-Toolkit-Configuration
https://github.com/ncbi/sra-tools/wiki/05.-Toolkit-Configuration
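For scripted or non-interactive setups, the caching behavior can also be inspected and changed from the command line. This is a sketch only; the exact configuration node name is an assumption, so verify it against the wiki pages above:

```shell
# Open the interactive configuration screen and uncheck
# "enable local file-caching" under the CACHE tab.
vdb-config -i

# Assumed non-interactive equivalent (config node name unverified):
# vdb-config --set /repository/user/cache-disabled=true

# Dump the current configuration to see which cache/root paths are in effect:
vdb-config
```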
Disabling local file-caching seemed to help (I think). Thank you.
Hello,
I'm getting this error running parallel-fastq-dump on an HPC.
```
2022-09-28T15:55:16 fastq-dump.2.11.0 err: storage exhausted while writing file within file system module - system bad file descriptor error fd='4'
```
This line appears repeatedly in the slurm output.
I've read a few other threads here about the same problem. I changed `--tmpdir` and `--outdir` to a scratch drive, and yes, the temp files are being written there. Both sra-toolkit and parallel-fastq-dump were installed with conda (today).
Could another folder be filling up, and is that why I'm getting the complaint? Any thoughts?
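One way to check whether a folder other than the scratch drive is filling up is to measure the size of the likely candidates. This is a small sketch; the candidate paths (`~/ncbi` as the default sra-tools cache location, `$TMPDIR`) are assumptions to adjust for your system:

```python
import os

def dir_size_bytes(path):
    """Total size in bytes of regular files under path (symlinks skipped)."""
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Candidate directories sra-tools may write to; paths are examples only.
candidates = [
    os.path.expanduser("~/ncbi"),      # assumed default sra-tools cache dir
    os.environ.get("TMPDIR", "/tmp"),  # temp space used during dumping
]

for d in candidates:
    if os.path.isdir(d):
        print(f"{d}: {dir_size_bytes(d) / 1e9:.2f} GB")
```

Running this before and during a dump shows which directory is actually growing, which narrows down where the "storage exhausted" error originates.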