mfansler opened this issue 5 months ago (Open)
I also tried running on a local Docker (mambaorg/micromamba:1.5.6) rather than HPC, with -e 2 and 16 GB total on the container. This was also killed.
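For reference, the container setup was along these lines (the mount path and accession are placeholders, not the exact commands):

```bash
# Cap the container at 16 GB; data directory bind-mounted at /data.
docker run --rm -it --memory=16g -v "$PWD":/data mambaorg/micromamba:1.5.6 bash
# ...then, inside the container:
micromamba install -y -n base -c conda-forge -c bioconda sra-tools=3.0.10
fasterq-dump -e 2 -t /data/tmp -O /data /data/SRRXXXXXXX.sra
```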
I ran into a similar problem. My command: sratoolkit.3.0.10-centos_linux64/bin/fasterq-dump.3.0.10 --split-3 ./ERR4027871.sra --include-technical -O ~/TOS/output -v -p
My admin told me it was definitely an OOM issue. My HPC node keeps going down and the problem keeps recurring. I also suspect it is the newer version, but I am not sure how to find proper evidence to prove this.
For completeness, I did eventually get it to complete with the 4 core and 8GB/core configuration. I expect this will be dependent on the size of the data.
@OOAAHH I was able to run your example without any issue. The SRA file is 14GB, and unpacked it leads to a 26GB FASTQ file. Are you sure you are not running out of disk quota?
Some things I see: your example does not provide a scratch space to store the temporary files, so they will be written to a temporary folder in the current directory. Also, unless ~/TOS/output is symlinked elsewhere, it is under user home (~/), which on typical HPC clusters is capped around 100GB. Lastly, have you configured VDB so that the NCBI cache is not under user home (the default)? Under worst-case assumptions, this single operation could occupy up to 75GB of disk at maximum occupancy.
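For example (the scratch path is a placeholder for your cluster's layout), something along these lines keeps both the temporary files and the VDB cache off user home:

```bash
# Point the NCBI/VDB public cache at scratch instead of ~/ncbi
# (path is a placeholder for your cluster's scratch area).
vdb-config --set /repository/user/main/public/root=/scratch/$USER/ncbi

# Re-run the dump with an explicit temp dir on scratch as well.
fasterq-dump --split-3 --include-technical -t /scratch/$USER/tmp \
    -O ~/TOS/output ./ERR4027871.sra
```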
It should further be noted that this particular data was uploaded as an aligned BAM. Dumping out a FASTQ file from a BAM-derived SRA file is mostly useless for scRNA-seq, because any cell barcodes and UMIs will only be in the tags and will not get properly dumped out. I don't know what you plan to do with the data, but for processing as scRNA-seq you are likely better off downloading the BAM (and .bai) directly from the ENA (see ERR4027871).
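If downloading from ENA, something like the following should work (the FTP directory layout and exact file name should be confirmed on the ERR4027871 run page; the name below is a placeholder):

```bash
# Fetch the originally submitted BAM from the ENA FTP mirror; replace
# <submitted>.bam with the name listed under the run's submitted files.
wget ftp://ftp.sra.ebi.ac.uk/vol1/run/ERR402/ERR4027871/<submitted>.bam
```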
First of all, thank you for your prompt and detailed response. Your insights have been incredibly helpful and have highlighted several oversights in my approach.
Disk quota and cache settings: You're absolutely right; I hadn't fully considered the disk quota or the cache settings. I was so focused on monitoring memory usage that I overlooked disk capacity. Based on your advice, I will start specifying a scratch space for temporary files in my commands and configuration to manage disk space more efficiently.
Data for scRNA-seq projects: You've also made an excellent point about the use of data with UMIs and barcodes for my large-scale single-cell atlas project. It appears I ran into issues with some of the .bai files, which complicates the process. Following your suggestion, I will explore downloading the necessary indexed data directly from BioStudies: E-MTAB-8221.
Glad to help. Fortunately, the .bai files shouldn't be essential; one can reindex with samtools index to generate new ones.
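For example, assuming a coordinate-sorted file my.bam (the name is a placeholder):

```bash
# Rebuild the index; this writes my.bam.bai next to the BAM.
samtools index my.bam
```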
I hope this message finds you well. I wanted to take a moment to update you on the significant progress I've made, thanks in large part to your invaluable advice and guidance.
Following your suggestions, I revisited my BAM files and utilized samtools to reindex them and examine the metadata more closely. This process was incredibly enlightening; not only was I able to generate new .bai files successfully, but I also uncovered crucial information embedded within the BAM files. The metadata and initial read segments revealed essential details such as cell barcodes, UMIs, and sample identifiers - precisely the data I needed for my single-cell RNA sequencing analysis.
Discovering this information was particularly critical for me, given the challenging network environment I am operating in, which makes downloading genomic data quite difficult. Being able to extract and utilize data already within my possession has saved me a tremendous amount of time!
My commands:
samtools view -H my.bam
samtools view my.bam | head
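As a follow-up check that the barcodes and UMIs really live in the tags, one option is to grep the first few records. CB/UB are the 10x Genomics tag names, which is an assumption about how this BAM was produced:

```bash
# Print cell-barcode (CB) and UMI (UB) tags from the first alignments.
# Other pipelines may use different tag names (e.g., BC/RX).
samtools view my.bam | head -n 5 | grep -oE '(CB|UB):Z:[^[:space:]]+'
```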
I have installed sra-tools v3.0.10, distributed from Bioconda for the linux-64 platform. Running fasterq-dump occupies far more RAM than the flags would imply (default 100 MB/core) or than I have ever encountered before using identical commands. In previous versions, I always used 8 cores + 1 GB/core, with -t pointing to a local scratch disk and VDB configured with plenty of room for the ncbi/sra cache. Using this setup for any SRRs from PRJNA544617 ends with LSF killing my jobs for exceeding memory. I have retried with other core/memory configurations, all eventually killed for overallocating memory. I am currently running again with 4 cores + 8 GB/core (32 GB total).
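For reference, a hypothetical sketch of the invocation pattern described above (the scratch path and accession are placeholders, not the exact command):

```bash
# 8 threads, temp files on local scratch, output to the working directory.
fasterq-dump -e 8 -t /local/scratch/tmp -O . SRRXXXXXXX
```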
This makes me suspect there is something off in this version, possibly:
- writing temporary files to /tmp/ instead of the designated -t path
- not respecting the --mem argument (or not reading the default)
Please let me know if I can provide any additional information.
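A quick way to check the first suspicion (paths here are placeholders) is to watch both candidate temp locations from a second shell while a dump is running:

```bash
# Whichever location grows is where the temporary files actually land.
watch -n 10 'du -sh /tmp /local/scratch/tmp 2>/dev/null'
```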