francisfa closed this issue 4 years ago
@francisfa Could you tell me how this problem was resolved in the end? I've just encountered the same situation and suspect the problem lies in my data. Thanks very much!
@ikohideonbush Sorry for the delay. You should check the completeness of your data by using `md5sum`. Hope this helps.
I realized it indeed was the fastq files' problem. Thanks!
@francisfa @ikohideonbush I've just encountered the same situation. As a novice, I have no idea where the problem lies. I tried to run md5sum, but it failed. Could you tell me how to solve this problem? Many thanks.
Hi friend, I am not quite sure what your question is. Do you mean that you opened a shell, entered the data directory, ran `md5sum -c <the-md5-file-you-were-given.txt>`, and finally got an error message?
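Concretely, something like this, run from the directory containing the FASTQ files (the checksum filename below is a placeholder; use whichever file your data provider shipped):

```sh
cd /path/to/fastq_dir   # placeholder: wherever the downloads live
md5sum -c md5sum.txt    # prints "OK" per file, "FAILED" if corrupt/incomplete
```

Any file reported as FAILED should be re-downloaded.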
@ikohideonbush Thanks for your reply; I hadn't run the right command, but I finally fixed the issue. I guess my problem was the system: my desktop Linux machine has 32 GB RAM and 4 cores / 8 logical processors. I ran the same code on our university cluster and it worked. Thanks again!
I encountered a similar error, with a failure at the `chunk_reads` stage. I found this failure documented on the Cell Ranger knowledge base: https://kb.10xgenomics.com/hc/en-us/articles/360004348051-Why-did-cellranger-count-fail-in-the-CHUNK-READS-stage- but it did not solve my issue. In my case the `_stdout` log is error-free, but `_stderr` contains:
[stderr]
thread 'main' panicked at 'index out of bounds: the len is 10 but the index is 10', libcore/slice/mod.rs:2046:10
note: Run with `RUST_BACKTRACE=1` for a backtrace.
Can anyone help? Thanks!
Can you run it again after setting `export RUST_BACKTRACE=1` and post the stack trace?
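For example (a sketch only; rerun whatever `cellranger count` invocation failed for you, the arguments below are placeholders):

```sh
export RUST_BACKTRACE=1
cellranger count --id=sample1 \
    --transcriptome=/path/to/refdata \
    --fastqs=/path/to/fastqs
```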
Also, feel free to directly contact support at support@10xgenomics.com for help debugging or fixing any issues.
@evolvedmicrobe thanks, here's the full backtrace from another run:
[stderr]
thread 'main' panicked at 'index out of bounds: the len is 7 but the index is 7', libcore/slice/mod.rs:2046:10
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at libstd/sys_common/backtrace.rs:71
at libstd/sys_common/backtrace.rs:59
2: std::panicking::default_hook::{{closure}}
at libstd/panicking.rs:211
3: std::panicking::default_hook
at libstd/panicking.rs:227
4: std::panicking::rust_panic_with_hook
at libstd/panicking.rs:477
5: std::panicking::continue_panic_fmt
at libstd/panicking.rs:391
6: rust_begin_unwind
at libstd/panicking.rs:326
7: core::panicking::panic_fmt
at libcore/panicking.rs:77
8: core::panicking::panic_bounds_check
at libcore/panicking.rs:59
9: chunk_reads::main
at libcore/slice/mod.rs:2046
at libcore/slice/mod.rs:1914
at liballoc/vec.rs:1725
at chunk_reads/src/main.rs:201
at chunk_reads/src/main.rs:123
at chunk_reads/src/main.rs:106
10: std::rt::lang_start::{{closure}}
at libstd/rt.rs:74
11: std::panicking::try::do_call
at libstd/rt.rs:59
at libstd/panicking.rs:310
12: __rust_maybe_catch_panic
at libpanic_unwind/lib.rs:103
13: std::rt::lang_start_internal
at libstd/panicking.rs:289
at libstd/panic.rs:392
at libstd/rt.rs:58
14: main
15: __libc_start_main
16: <unknown>
at ../sysdeps/x86_64/elf/start.S:103
Update: in my case the error occurred because of adapter trimming I had applied to the R2 reads. Running without the trimming (which 10x does not recommend anyway) produces no error.
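For anyone who hits the same panic after trimming, my guess (an assumption on my part, not confirmed by 10x) is that trimming dropped or truncated some records, so R1 and R2 no longer contain the same number of reads. A quick sanity check, with placeholder filenames:

```sh
# Count FASTQ records (4 lines per record) in each mate file;
# chunk_reads expects R1 and R2 to stay in lockstep.
for f in sample_S1_L001_R1_001.fastq.gz sample_S1_L001_R2_001.fastq.gz; do
  printf '%s\t' "$f"
  zcat "$f" | awk 'END { print NR / 4 }'
done
```

If the two counts differ, regenerating the FASTQs from the untrimmed originals is the safer fix.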
Thanks @chris-rands, glad this issue was solved. This code will be updated in future versions and will likely have a different error message.
I am having a similar issue and I am still not able to resolve it.
In case someone still encounters this problem: one cause can be a corrupted or incomplete download of the FASTQ files. I solved it by re-downloading the files.
Hello, I am having a similar problem. `_stdout` returns:
[stdout]
chunk_reads vVERSION
chunks: [("R1", "/test/Batch1_S1_L006_R1_001.fastq.gz", false), ("R2", "/test/Batch1_S1_L006_R2_001.fastq.gz", false)]
opening: "/test/Batch1_S1_L006_R1_001.fastq.gz"
opening: "/test/Batch1_S1_L006_R2_001.fastq.gz"
got recs: 28810048
got recs: 48304343
error: corrupt gzip stream does not have a matching checksum
caused by: corrupt gzip stream does not have a matching checksum
------------
If you believe this is a bug in chunk_reads, please report a bug to support@10xgenomics.com.
running chunk reads: [['chunk_reads', '--reads-per-fastq', '5000000', '/test/SC_RNA_COUNTER_CS/SC_RNA_COUNTER/_BASIC_SC_RNA_COUNTER/CHUNK_READS/fork0/chnk0-u90ff8a153e/files/', 'fastq_chunk', '--martian-args', 'chunk_args.json', '--compress', 'lz4']]
However, when I ran `md5sum -c` on these two files, it returned OK for both. Is there any other possible reason, or another check I should do? Does anyone know how I can solve this problem?
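The only other check I could think of (assuming I understand `gzip -t` correctly) was to test the gzip streams end-to-end, since `md5sum` only proves my local copies match the checksum file, not that the files were intact before the checksums were made:

```sh
# Prints an error and exits non-zero if a stream is truncated or corrupt.
gzip -t /test/Batch1_S1_L006_R1_001.fastq.gz
gzip -t /test/Batch1_S1_L006_R2_001.fastq.gz
```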
Thanks in advance.
Hi, there are some errors about a command that returned non-zero exit status 1. I think this is a problem in Python, but I do not know how to fix it, and I do not think I can change the Python script. Any advice is welcome. Thank you!