10XGenomics / cellranger

10x Genomics Single Cell Analysis
https://www.10xgenomics.com/support/software/cell-ranger

returned non-zero exit status 1 #17

Closed: francisfa closed this issue 4 years ago

francisfa commented 6 years ago

Hi, I am getting an error that ends with "returned non-zero exit status 1". I think this is a problem in Python, but I do not know how to fix it, and I do not think I should be changing the Python script myself. Any advice is welcome. Thank you!

2018-10-11 21:06:25 [runtime] (update)          ID.normal2.SC_RNA_COUNTER_CS.SC_RNA_COUNTER._BASIC_SC_RNA_COUNTER.CHUNK_READS.fork0 chunks_running
2018-10-11 21:12:26 [runtime] (update)          ID.normal2.SC_RNA_COUNTER_CS.SC_RNA_COUNTER._BASIC_SC_RNA_COUNTER.CHUNK_READS.fork0 chunks_running
2018-10-11 21:15:08 [runtime] (failed)          ID.normal2.SC_RNA_COUNTER_CS.SC_RNA_COUNTER._BASIC_SC_RNA_COUNTER.CHUNK_READS

[error] Pipestance failed. Error log at:
normal2/SC_RNA_COUNTER_CS/SC_RNA_COUNTER/_BASIC_SC_RNA_COUNTER/CHUNK_READS/fork0/chnk0-u55bdbf4968/_errors

Log message:
Traceback (most recent call last):
  File "/test/software/cellranger-2.1.1/martian-cs/2.3.2/adapters/python/martian_shell.py", line 529, in _main
    stage.main()
  File "/test/software/cellranger-2.1.1/martian-cs/2.3.2/adapters/python/martian_shell.py", line 495, in main
    self._run(lambda: self._module.main(args, outs))
  File "/test/software/cellranger-2.1.1/martian-cs/2.3.2/adapters/python/martian_shell.py", line 464, in _run
    cmd()
  File "/test/software/cellranger-2.1.1/martian-cs/2.3.2/adapters/python/martian_shell.py", line 495, in <lambda>
    self._run(lambda: self._module.main(args, outs))
  File "/test/software/cellranger-2.1.1/cellranger-cs/2.1.1/mro/stages/common/chunk_reads/__init__.py", line 53, in main
    tk_subproc.check_call(chunk_reads_args)
  File "/test/software/cellranger-2.1.1/cellranger-cs/2.1.1/tenkit/lib/python/tenkit/log_subprocess.py", line 37, in check_call
    return subprocess.check_call(*args, **kwargs)
  File "/test/software/cellranger-2.1.1/miniconda-cr-cs/4.3.21-miniconda-cr-cs-c9/lib/python2.7/subprocess.py", line 186, in check_call
    raise CalledProcessError(retcode, cmd)
CalledProcessError: Command '['chunk_reads', '--reads-per-fastq', '5000000', '/test/XX/RNAseq/Batch2_normal/normal2/SC_RNA_COUNTER_CS/SC_RNA_COUNTER/_BASIC_SC_RNA_COUNTER/CHUNK_READS/fork0/chnk0-u55bdbf4968/files/', 'fastq_chunk', '--martian-args', 'chunk_args.json', '--compress', 'lz4']' returned non-zero exit status 1
ikohideonbush commented 5 years ago

@francisfa Could you tell me how this problem was resolved in the end? I've just encountered the same situation and suspect the problem lies in my data. Thanks very much!

francisfa commented 5 years ago

@ikohideonbush Sorry for the delay. You should check the integrity of your data using md5sum. Hope this helps.

ikohideonbush commented 5 years ago

> @ikohideonbush Sorry for the delay. You should check the integrity of your data using md5sum. Hope this helps.

I realized it was indeed a problem with the fastq files. Thanks!

Yale73 commented 5 years ago

@francisfa @ikohideonbush I've just encountered the same situation. I am a novice and have no idea where the problem lies. I tried to run md5sum, but it failed. Could you tell me how to solve this problem? Many thanks.

ikohideonbush commented 5 years ago

> @francisfa @ikohideonbush I've just encountered the same situation. I am a novice and have no idea where the problem lies. I tried to run md5sum, but it failed. Could you tell me how to solve this problem? Many thanks.

Hi friend, I am not quite sure what your question is. Do you mean that you opened a shell, entered the data directory, ran “md5sum -c <the-md5-file-you-were-given.txt>”, and got an error message?
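
For reference, the same check can be scripted. Below is a minimal Python sketch of roughly what `md5sum -c` does against the checksum file delivered with the data; the file name `md5sum.txt` and the GNU-style "<digest>  <filename>" line format are assumptions, so adjust them to whatever your sequencing provider supplied.

```python
# Minimal sketch of an md5sum -c style check; names/format are assumptions.
import hashlib
import os

def md5_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large .fastq.gz files never sit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_md5_file(md5_file):
    """Compare each '<digest>  <filename>' line against the file on disk."""
    base = os.path.dirname(os.path.abspath(md5_file))
    with open(md5_file) as fh:
        for line in fh:
            if not line.strip():
                continue
            expected, name = line.split(None, 1)
            actual = md5_of(os.path.join(base, name.strip()))
            print(name.strip(), "OK" if actual == expected else "FAILED")

check_md5_file("md5sum.txt")  # hypothetical checksum file shipped with the FASTQs
```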

Yale73 commented 5 years ago

@ikohideonbush Thanks for your reply. I wasn't running the right command, but I finally fixed the issue. I guess my problem was the system: my desktop Linux machine has 32 GB RAM and 4 cores × 8 logical processors. I ran the same command on our university cluster and it worked. Thanks again!

chris-rands commented 5 years ago

I encountered a similar error, with a failure at the chunk_reads stage. This failure is documented on the Cell Ranger website: https://kb.10xgenomics.com/hc/en-us/articles/360004348051-Why-did-cellranger-count-fail-in-the-CHUNK-READS-stage-

However, that article did not solve my issue. In my case the _stdout log is error-free, but _stderr contains:

[stderr]
thread 'main' panicked at 'index out of bounds: the len is 10 but the index is 10', libcore/slice/mod.rs:2046:10
[stderr]
thread 'main' panicked at 'index out of bounds: the len is 10 but the index is 10', libcore/slice/mod.rs:2046:10
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Can anyone help? Thanks!

evolvedmicrobe commented 5 years ago

Can you run it again after setting export RUST_BACKTRACE=1 and post the stack trace?

Also, feel free to directly contact support at support@10xgenomics.com for help debugging or fixing any issues.

chris-rands commented 5 years ago

@evolvedmicrobe thanks, here's the full traceback for another run:

[stderr]
thread 'main' panicked at 'index out of bounds: the len is 7 but the index is 7', libcore/slice/mod.rs:2046:10
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::print
             at libstd/sys_common/backtrace.rs:71
             at libstd/sys_common/backtrace.rs:59
   2: std::panicking::default_hook::{{closure}}
             at libstd/panicking.rs:211
   3: std::panicking::default_hook
             at libstd/panicking.rs:227
   4: std::panicking::rust_panic_with_hook
             at libstd/panicking.rs:477
   5: std::panicking::continue_panic_fmt
             at libstd/panicking.rs:391
   6: rust_begin_unwind
             at libstd/panicking.rs:326
   7: core::panicking::panic_fmt
             at libcore/panicking.rs:77
   8: core::panicking::panic_bounds_check
             at libcore/panicking.rs:59
   9: chunk_reads::main
             at libcore/slice/mod.rs:2046
             at libcore/slice/mod.rs:1914
             at liballoc/vec.rs:1725
             at chunk_reads/src/main.rs:201
             at chunk_reads/src/main.rs:123
             at chunk_reads/src/main.rs:106
  10: std::rt::lang_start::{{closure}}
             at libstd/rt.rs:74
  11: std::panicking::try::do_call
             at libstd/rt.rs:59
             at libstd/panicking.rs:310
  12: __rust_maybe_catch_panic
             at libpanic_unwind/lib.rs:103
  13: std::rt::lang_start_internal
             at libstd/panicking.rs:289
             at libstd/panic.rs:392
             at libstd/rt.rs:58
  14: main
  15: __libc_start_main
  16: <unknown>
             at ../sysdeps/x86_64/elf/start.S:103

chris-rands commented 5 years ago

Update: in my case the error occurred because of adapter trimming I had applied to the R2 reads. Skipping the trimming (which 10X does not recommend anyway) makes the error go away.
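
If you have pre-processed your FASTQs and want a quick sanity check before re-running, one plausible side effect of external trimming is that R1 and R2 end up with different record counts or out-of-sync pairs, which the pipeline may not tolerate. The following is only an illustrative Python sketch with placeholder file names, not part of Cell Ranger:

```python
# Illustrative diagnostic only: count records in a pair of gzipped FASTQs
# and flag a mismatch. File names are placeholders.
import gzip

def count_fastq_records(path):
    """A FASTQ record spans 4 lines; count lines in the gzipped file and divide."""
    n_lines = 0
    with gzip.open(path, "rt") as fh:
        for _ in fh:
            n_lines += 1
    return n_lines // 4

r1 = count_fastq_records("sample_S1_L001_R1_001.fastq.gz")  # placeholder path
r2 = count_fastq_records("sample_S1_L001_R2_001.fastq.gz")  # placeholder path
print("R1 records:", r1)
print("R2 records:", r2)
if r1 != r2:
    print("R1/R2 record counts differ -- the pairing is broken")
```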

evolvedmicrobe commented 4 years ago

Thanks @chris-rands, glad this issue was solved. This code will be updated in future versions and will likely have a different error message.

bazok100 commented 4 years ago

I am having a similar issue and I am still not able to resolve it.

danieletavernari commented 4 years ago

In case someone still encounters this problem, one possible cause is a corrupted or incomplete download of the FASTQ files. I solved it by re-downloading the files.

ThuyTien1 commented 1 year ago

Hello, I am having a similar problem. stdout returns:

[stdout]
chunk_reads vVERSION
chunks: [("R1", "/test/Batch1_S1_L006_R1_001.fastq.gz", false), ("R2", "/test/Batch1_S1_L006_R2_001.fastq.gz", false)]
opening: "/test/Batch1_S1_L006_R1_001.fastq.gz"
opening: "/test/Batch1_S1_L006_R2_001.fastq.gz"
got recs: 28810048
got recs: 48304343
error: corrupt gzip stream does not have a matching checksum
caused by: corrupt gzip stream does not have a matching checksum
------------
If you believe this is a bug in chunk_reads, please report a bug to support@10xgenomics.com.

running chunk reads: [['chunk_reads', '--reads-per-fastq', '5000000', '/test/SC_RNA_COUNTER_CS/SC_RNA_COUNTER/_BASIC_SC_RNA_COUNTER/CHUNK_READS/fork0/chnk0-u90ff8a153e/files/', 'fastq_chunk', '--martian-args', 'chunk_args.json', '--compress', 'lz4']]

However, when I ran md5sum -c on these two files, it returned OK for both. Is there any other possible reason, or another check that I should run? Does anyone know how I can solve this problem?

Thanks in advance.
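
One thing worth keeping in mind here: md5sum -c only confirms that the local file matches the digest you were given, so if that digest was computed from an already corrupt or truncated upload it will still report OK. The gzip-internal CRC that produces the "corrupt gzip stream does not have a matching checksum" error is a separate check, roughly what `gzip -t` performs. Below is a minimal Python sketch of that check; the file names are copied from the log above and should be replaced with your actual paths.

```python
# Sketch: read each gzip stream end to end so Python verifies the gzip CRC,
# independently of any md5 comparison. File names are placeholders.
import gzip
import zlib

def gzip_is_intact(path):
    try:
        with gzip.open(path, "rb") as fh:
            while fh.read(1 << 20):  # decompress the whole stream in 1 MiB chunks
                pass
        return True
    except (OSError, EOFError, zlib.error) as exc:
        print(path, "failed gzip check:", exc)
        return False

for fq in ("Batch1_S1_L006_R1_001.fastq.gz", "Batch1_S1_L006_R2_001.fastq.gz"):
    print(fq, "OK" if gzip_is_intact(fq) else "CORRUPT")
```

If a file fails here even though md5sum passes, the checksum most likely describes a file that was already damaged at the source, and re-downloading or re-transferring with a fresh checksum is the usual fix, as noted earlier in the thread.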