davidaray opened this issue 1 week ago
Hi,
Thank you for reporting this issue.
Indeed, the pipeline generates a Snakefile that it subsequently calls. If the pipeline is run in parallel from the same directory, conflicts can occur. We can fix this by appending a suffix based on the current time in nanoseconds to the Snakefile name, so that concurrent runs never write to the same file. Writing a file takes far longer than a nanosecond, so two runs will not pick the same suffix and the fix is effective in practice. Once the issue is resolved, we will get back to you.
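For illustration, here is a minimal sketch of that idea in Python (the function and file names are hypothetical, not TrEMOLO's actual code):

import shutil
import time
from pathlib import Path

def make_unique_snakefile(template: Path, workdir: Path) -> Path:
    # Copy the Snakefile template to a name suffixed with the current
    # time in nanoseconds, so parallel runs never share the same file.
    unique = workdir / f"run_{time.time_ns()}.snk"
    shutil.copy(template, unique)
    return unique

Each run would then invoke Snakemake on its own copy, e.g. snakemake -s run_<timestamp>.snk.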
Best regards, M-D
Hello,
You will find the fix on the fix_issue_23 branch (apologies for having placed it on an older branch). To retrieve this fix, you can either clone that branch directly:
git clone https://github.com/DrosophilaGenomeEvolution/TrEMOLO.git -b fix_issue_23
Or from your existing repository:
git pull
git checkout fix_issue_23
Best regards, M-D
Thank you. I'll try this.
Another question: in the .out files from my run, I see two conflicting messages.
A log file is pasted on the right in the image below.
On line 266, the log says no error was detected. Yet on the final line, 313, we see "AN ERROR OCCURRED !!".
How might I track down this error?
Hi,
It is important to note that, for error handling, there are three separate Snakemake instances: one for the INSIDER part, one for the OUTSIDER part, and a third that triggers the two previous workflows.
The line "DONE NO ERROR DETECTED ✔" indicates that the INSIDER or OUTSIDER part (depending on your choices) has completed successfully. On the other hand, the message "AN ERROR OCCURRED !!" signals that the main Snakemake process (run.snk
) detected an error after the completion of both the INSIDER and OUTSIDER workflows.
I suspect this error might be caused by an attempt to send a signal to the loading process (which is not critical for the pipeline) due to concurrent process issues. If this is the case, it will have no impact on the results.
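If that is the cause, the mitigation would look something like the sketch below (a hypothetical illustration, not TrEMOLO's actual code): signalling a process that has already exited raises an error that can safely be ignored.

import os
import signal

def stop_loading_indicator(pid: int) -> None:
    # Hypothetical helper: stop a purely cosmetic "loading" process.
    # If it has already exited, swallow the error so the main workflow
    # does not report a spurious failure.
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        pass  # the process was already gone; nothing to do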
We will make changes to ensure that the messages are clearer. Thank you for reporting this issue.
Best, M-D
Thanks for what appears to be a useful tool. My question concerns running the script in parallel.
I have several dozen (n=47) sets of data to analyze and am interested in running several of them simultaneously.
I tried that, but the pipeline seems to get in its own way by modifying a Snakefile in the software directory.
I haven't taken the time to replicate the error because I'm currently running my 47 analyses sequentially and I don't want to stop them.
I can replicate the error if requested.
I found this out because I would set all of the analyses running at once, but only a few would finish, and those few had been started several hours apart.
Reading the log files suggested that a .snk file (I think) was seen as 'in use', so the job that was already running was killing the job that tried to start.
Is this something that's resolvable?
Thanks.