Open xshah opened 9 years ago
Has anyone else seen this issue? Or maybe I am doing something incorrectly? @yarden
Yes, I have also experienced this issue.
Stephen N. Floor (Sent from handheld)
On Apr 15, 2015, at 1:33 PM, Hardik Shah notifications@github.com wrote:
Has anyone else seen this issue? Or maybe I am doing something incorrectly? @yarden
I circumvented it by copying the index again (and again), but at this point we will be creating hundreds of thousands of files... sigh. Did you have a better solution?
Only run one job at a time with as many processors as you can. :(
Stephen N. Floor (Sent from handheld)
On Apr 15, 2015, at 1:41 PM, Hardik Shah notifications@github.com wrote:
I circumvented it by copying the index again (and again), but at this point we will be creating hundreds of thousands of files... sigh. Did you have a better solution?
Stephen, you can actually copy just the shelve file multiple times and run more samples concurrently if that helps.
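A rough sketch of that workaround, in case it saves someone the full-index copies (untested; the index directory name, the shelve filename, and the assumption that MISO follows symlinks inside the index are all guesses on my part, not from the MISO docs):

```python
import os
import shutil

INDEX_DIR = "indexed_gff"                   # the shared MISO index directory (assumed name)
SHELVE_NAME = "genes_to_filenames.shelve"   # hypothetical shelve filename -- check your index
N_JOBS = 4                                  # one private index per concurrent MISO job

for job in range(N_JOBS):
    job_dir = "%s_job%d" % (INDEX_DIR, job)
    if os.path.exists(job_dir):
        continue
    os.makedirs(job_dir)
    for entry in os.listdir(INDEX_DIR):
        src = os.path.abspath(os.path.join(INDEX_DIR, entry))
        dst = os.path.join(job_dir, entry)
        if entry == SHELVE_NAME:
            # Real copy: each job gets its own shelve, so the dbm file
            # lock is never contended between jobs.
            shutil.copy(src, dst)
        else:
            # Everything else is only ever read, so symlinks avoid
            # duplicating the hundreds of thousands of per-gene files.
            os.symlink(src, dst)
```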
Hi all,
Thanks for reporting this. Perhaps I'm misunderstanding - the docs for shelve say multiple reads are fine ("Multiple simultaneous read accesses are safe"), from here: https://docs.python.org/2/library/shelve.html
and I have not had an issue with it. Are you thinking of a case of multiple processors or multiple jobs? I'd love to fix this. I suppose the index could be converted to an SQLite database, but I always thought shelve was multi-read safe.
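One possible explanation (an assumption on my part, not verified against the MISO source): shelve.open() defaults to flag='c', which requests read/write access, and locking dbm backends such as gdbm take an exclusive lock for writers. The docs' "Multiple simultaneous read accesses are safe" only holds when every process opens the file read-only. A minimal sketch of the difference (the shelve filename is hypothetical):

```python
import shelve

# Default open: flag='c' asks the dbm backend for read/write access.
# On locking backends (e.g. gdbm) this takes an exclusive lock, so a
# second process opening the same file can fail with a "can't open
# file" style error.
# db = shelve.open("genes_to_filenames.shelve")

# Read-only open: this is the mode the "multiple simultaneous read
# accesses are safe" note in the shelve docs refers to.
db = shelve.open("genes_to_filenames.shelve", flag="r")
try:
    for gene_id in db.keys():
        filename = db[gene_id]   # e.g. map gene ID -> pickled gene file
finally:
    db.close()
```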
I ran into this when running simultaneous MISO jobs on different BAM files but the same index. Multithreading was fine.
To clarify, these are different jobs -- i.e., distinct jobs on a cluster, right? What was the error that you got?
Sorry, I meant multiple jobs on the same host. Unfortunately I can't reproduce the error right now, as my only Linux box is down, but it was something like "cannot open file" -- I realize that isn't super helpful, and sorry about that. It happened any time a MISO job was running and accessing an index while a second MISO job was started, so it should be easy to reproduce.
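A minimal script that may reproduce the symptom independently of MISO (the filename is arbitrary, and whether the second open actually fails depends on which dbm backend shelve picks up -- gdbm locks; some backends do not):

```python
import multiprocessing
import shelve
import time

PATH = "toy_index.shelve"  # arbitrary test file, not a real MISO index

def worker(n):
    try:
        # Default flag='c' requests write access, mimicking two MISO
        # jobs hitting the same index at once.
        db = shelve.open(PATH)
        time.sleep(2)   # hold the handle so the two opens overlap
        db.close()
        print("worker %d: opened fine" % n)
    except Exception as e:
        print("worker %d failed: %r" % (n, e))

if __name__ == "__main__":
    db = shelve.open(PATH)          # create the shelve up front
    db["gene1"] = "chr1/gene1.pickle"
    db.close()
    procs = [multiprocessing.Process(target=worker, args=(i,))
             for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```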