Closed ptrebert closed 1 year ago
I can't reproduce this, but it likely has something to do with the Bazel py_binary wrapper.
Can you shell into the container and run `classify_taxonomy --help`?
Can you also try with an extra parameter, `--bind '/somewhere/safe/and/writable:/var/tmp'`?
Are there values for the env vars TMP or TMPDIR?
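For reference, a quick way to check both points from the job script; this is a sketch, the image name and fallback path are examples and may differ on your system:

```shell
# On the execution host: report TMP/TMPDIR and the free space at the
# effective temp location (falls back to /var/tmp if neither is set).
tmpdir="${TMPDIR:-${TMP:-/var/tmp}}"
echo "TMP=${TMP:-<unset>} TMPDIR=${TMPDIR:-<unset>}"
df -h "$tmpdir"

# The same check inside the container (image name is an example):
# singularity exec fcs-gx.0.2.1.sif sh -c 'printenv TMP TMPDIR; df -h "${TMPDIR:-/var/tmp}"'
```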
Thanks for looking into this.
Yes, `classify_taxonomy [--help]` works (as already indicated above). I can mount a writable location to /var/tmp in the container and also create files without any problem. The default value for TMP is set by the cluster scheduler (PBS), as is evident from the log file I attached:
/var/tmp/pbs.5801516.hpc-batch/ [...]
That's the job-local TMP location on the execution host. Is it possible that there was not enough space in TMP, (mis-)leading to the error? Can you give me a rough number for how much space is needed in TMP to run the FCS pipeline?
Around 200 MB of space is needed. But if that were the problem, it wouldn't really execute at all. I'm now very confused, unless your local execution is different from your cluster's execution.
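Given that figure, a pre-flight check in the job script can rule out a space shortage up front. A minimal sketch, assuming the ~200 MB estimate above; variable names and the /var/tmp fallback are illustrative:

```shell
# Warn before launching FCS-GX if the job's temp dir has less than ~200 MB free.
need_kb=204800                                  # ~200 MB, in 1 KB blocks
tmpdir="${TMPDIR:-${TMP:-/var/tmp}}"
avail_kb=$(df -Pk "$tmpdir" | awk 'NR==2 {print $4}')   # POSIX df: column 4 = available
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "WARNING: only ${avail_kb} KB free in ${tmpdir} (need ~200 MB)" >&2
fi
```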
Hm, I tested another example with half of the genome as input, and the run finished successfully. I am going to use the complete genome again, maybe the above was a weird machine error ...
Glad it worked for you. Hope to get your feedback once you try out the newly released v0.4.0.
Describe the bug
v0.3.0 `run_fcsgx.py` fails with

To Reproduce
See attached stderr output of cluster job for full command

Software versions (please complete the following information):

Log Files
Can make a debug run if absolutely necessary
fcstest.stderr.log

Additional context
(Container image fcs-gx.0.2.1.sif) `classify_taxonomy` is indeed present at the expected location and executable. So, unless you have seen this before, I would naively assume that this is not the true cause of the failure. I am going to run a test with a smaller genome and the test database to see what happens.
+Peter