Closed: naumenko-sa closed this issue 5 years ago
Sergey,
Thanks for the diagnosis and the fix. I'm confused as to why that step would fail when run in a pipe but work cleanly when done separately. Do you have an idea of what exactly is failing on your system? The only thing I'm concerned about in your change is the impact of a larger unzip on runtime and disk usage: that line was trying to avoid unpacking much of the file and just grab the single header line we need. If the workaround doesn't create anything large on disk and still runs quickly, it works for me as well. Thanks again.
Thanks Brad!
It is a puzzle why it is not working; my system is pretty standard. Perhaps something with the pipe buffer (which was increased in the latest bash) overflowing while head -n1 needs just a bit of information, i.e. a pipe synchronization issue? The fix does not unpack the huge file: as you can see, I've changed chr1 to chrM, which is quite small. Interestingly, the larger pipe further down the script, which processes all the huge files, works fine (but it processes every line of those files, not just head -n1).
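One likely explanation (a sketch, not a verified diagnosis of the actual recipe): under `set -e -o pipefail`, `head -n 1` exits as soon as it has the first line. If the producer (e.g. `zcat` on a large chr1 file) is still writing, it receives SIGPIPE, the pipeline's exit status becomes nonzero (typically 141), and a `set -e` script aborts with no message. A small file like chrM finishes writing before `head` closes the pipe, so it never fails. The effect can be reproduced without any bcbio files; the commands below are illustrative stand-ins:

```shell
# bcbio/cloudbiolinux install scripts commonly run with pipefail enabled.
set -o pipefail

# Large stream: `head` exits after the first line while the producer is
# still writing; the producer is killed by SIGPIPE and the pipeline's
# exit status is nonzero (typically 128 + 13 = 141), which would abort
# a script running under `set -e`.
seq 1 1000000 | head -n 1 > header.txt
echo "large input exit status: $?"

# Small stream (the chrM case): the producer finishes before `head`
# closes the pipe, so no SIGPIPE is delivered and the status is 0.
printf 'only line\n' | head -n 1 > header.txt
echo "small input exit status: $?"
```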
S.
Found a typo that I introduced in the dbNSFP recipe. Sorry about that. Please merge: https://github.com/chapmanb/cloudbiolinux/pull/297
Hello, cloudbiolinux community!
When installing bcbio_nextgen 1.1.5 from scratch, the --datatarget dbnsfp installation failed for me.
I was able to track the problem down to this line in the recipe:

header.txt is created, but then the script fails without any message; test1 is not printed.

The modified line works well.
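For reference, here is a sketch of two pipefail-safe ways to grab just a header line from a large gzip file. The file name is a stand-in for illustration, not the actual recipe path, and this assumes the failure is the SIGPIPE/pipefail interaction rather than something system-specific:

```shell
set -e -o pipefail

# Stand-in for the real dbNSFP chunk (illustrative name and contents).
printf 'header\nrow1\nrow2\n' | gzip -c > example.gz

# Option 1: use a reader that consumes the whole stream. `sed -n '1p'`
# prints only line 1 but keeps reading to EOF, so the decompressor never
# receives SIGPIPE. Trade-off: the whole file is decompressed, which is
# the runtime cost the original `head -n 1` was trying to avoid.
gzip -cd example.gz | sed -n '1p' > header.txt

# Option 2: keep `head -n 1` but explicitly tolerate the SIGPIPE exit
# status (141), so `set -e` does not abort the script on large inputs.
gzip -cd example.gz | head -n 1 > header.txt || [ $? -eq 141 ]

rm -f example.gz header.txt
```

Switching the recipe to a small chromosome (chrM) sidesteps the same race, since the decompressor finishes before `head` exits.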
I've changed the recipe for grch37 and hg38 accordingly here https://github.com/chapmanb/cloudbiolinux/pull/295
Sergey