rvalieris / parallel-fastq-dump

parallel fastq-dump wrapper
MIT License

fastq-dump.2.11.0 err: storage exhausted while writing... #48

Closed · mike2vandy closed this 1 year ago

mike2vandy commented 1 year ago

Hello,

I'm getting this error running parallel-fastq-dump on an HPC.

2022-09-28T15:55:16 fastq-dump.2.11.0 err: storage exhausted while writing file within file system module - system bad file descriptor error fd='4'

The same line appears repeatedly in the SLURM output.

I've read a few other threads here about the same problem. I changed my --tmpdir and --outdir to a scratch drive, and the temp files are indeed being written there. Both sra-toolkit and parallel-fastq-dump were installed with conda (today).
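For reference, this is roughly how I'm invoking it (the accession, thread count, and paths below are placeholders, not my actual ones):

```sh
# roughly my invocation; accession, thread count and paths are placeholders
parallel-fastq-dump \
  --sra-id SRR0000000 \
  --threads 8 \
  --outdir /scratch/$USER/fastq \
  --tmpdir /scratch/$USER/tmp \
  --split-files --gzip
```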

Could another folder be filling up, and could that be why I'm getting this complaint? Any thoughts?

rvalieris commented 1 year ago

hello,

sra-tools also has cache directories it uses when downloading data, it could be those filling up. Check vdb-config -i: https://github.com/ncbi/sra-tools/wiki/03.-Quick-Toolkit-Configuration https://github.com/ncbi/sra-tools/wiki/05.-Toolkit-Configuration
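if you want to check without the interactive tool, something like this should work (if I remember the flags right; vdb-config -i is the authoritative view, and ~/ncbi is only the default cache location on many installs):

```sh
# dump the current configuration and look for cache-related settings
vdb-config -o n | grep -i cache

# then check free space on the volume holding the cache,
# e.g. the common default location:
df -h ~/ncbi
```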

mike2vandy commented 1 year ago

Disabling local file-caching seemed to help (I think). Thank you.
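In case it helps anyone who lands here later, I believe this is the setting I changed, via the non-interactive form (the key name comes from poking around the vdb-config -i screens, so please verify it on your version):

```sh
# disable the local file cache so downloads don't pile up in the cache dir
vdb-config -s /repository/user/cache-disabled=true
```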