ParkvilleData opened this issue 4 years ago (Open)
Hey @ParkvilleData, just make sure you're formatting your code with three backticks, then starting the code on the next line. The single backticks are for in-line entries, eg: `code` -> code.
Can you post the actual error you're seeing, or can you confirm that none of your tasks are returning a valid value for "docker" in the runtime section?
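For context on "the runtime section": each WDL task can carry a `runtime` block, and a `docker` key there is what routes the task to the backend's Docker submission path. A generic sketch for illustration only (the task name and image here are made up, not from this workflow):

```wdl
task example {
  command {
    echo "hello"
  }
  runtime {
    # if this key is absent or undefined, the backend falls back to its
    # plain submit block rather than submit-docker
    docker: "ubuntu:18.04"
  }
}
```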
Thanks @illusional !
Here is the complete error, and I can confirm there is no valid value for docker in the runtime section.
It also completely hangs when I run this and I have to kill the process.
The Local provider is placed right after the Slurm provider in the provider block.
[2020-09-17 21:41:42,92] [info] MaterializeWorkflowDescriptorActor [866769d0]: Call-to-Backend assignments: hostremoval_subworkflow.interleave_task -> Local, geneprediction_subworkflow.prodigal_task -> Local, qc_subworkflow.flash_task -> Local, assembly_subworkflow.blast_task -> Local, metaGenPipe.merge_task -> Local, geneprediction_subworkflow.diamond_task -> Local, assembly_subworkflow.metaspades_task -> Local, assembly_subworkflow.megahit_task -> Local, hostremoval_subworkflow.hostremoval_task -> Local, geneprediction_subworkflow.collation_task -> Local, assembly_subworkflow.idba_task -> Local, qc_subworkflow.trimmomatic_task -> Local, metaGenPipe.taxonclass_task -> Local, qc_subworkflow.fastqc_task -> Local, metaGenPipe.multiqc_task -> Local
[2020-09-17 21:41:42,97] [error] Error parsing generated wdl:
task submit {
String job_id
String job_name
String cwd
String out
String err
String script
String job_shell
String head_directory = "/data/MGP"
String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
command {
singularity run -B ${head_directory}:${head_directory} ${singularity_image} /bin/bash ${script}
}
}
task submit_docker {
String job_id
String job_name
String cwd
String out
String err
String script
String job_shell
String docker_cwd
String docker_cid
String docker_script
String docker_out
String docker_err
String head_directory = "/data/MGP"
String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
command {
# make sure there is no preexisting Docker CID file
rm -f ${docker_cid}
# run as in the original configuration without --rm flag (will remove later)
docker run \
--cidfile ${docker_cid} \
-i \
${"--user " + docker_user} \
--entrypoint ${job_shell} \
-v ${cwd}:${docker_cwd}:delegated \
${docker} ${docker_script}
# get the return code (working even if the container was detached)
rc=$(docker wait `cat ${docker_cid}`)
# remove the container after waiting
docker rm `cat ${docker_cid}`
# return exit code
exit $rc
}
}
task kill_docker {
String job_id
String docker_cid
String job_shell
command {
docker kill `cat ${docker_cid}`
}
}
java.lang.RuntimeException: Error parsing generated wdl:
at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:155)
at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:153)
at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:146)
at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:146)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:612)
at akka.actor.ActorCell.invoke(ActorCell.scala:581)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: wdl.draft2.parser.WdlParser$SyntaxError: ERROR: Variable docker does not reference any declaration in the task (line 50, col 7):
${docker} ${docker_script}
^
Task defined here (line 20, col 6):
task submit_docker {
^
at wdl.draft2.model.WdlNamespace$.$anonfun$apply$55(WdlNamespace.scala:444)
at scala.collection.TraversableLike$WithFilter.$anonfun$map$2(TraversableLike.scala:827)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:826)
at wdl.draft2.model.WdlNamespace$.$anonfun$apply$52(WdlNamespace.scala:442)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at wdl.draft2.model.WdlNamespace$.$anonfun$apply$51(WdlNamespace.scala:441)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
at wdl.draft2.model.WdlNamespace$.apply(WdlNamespace.scala:440)
at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:174)
at scala.util.Try$.apply(Try.scala:213)
at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:169)
at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:161)
at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
... 27 common frames omitted
[2020-09-17 21:41:46,29] [info] Not triggering log of token queue status. Effective log interval = None
Ah interesting! I'm pretty sure the fix is to add `String? docker` to your runtime attributes. But this feels like it should be there by default, and not including it shouldn't break the tasks.
In the runtime attributes in the config file, or in each of the tasks?
I added it to the config and it did the same thing.
Does a Local provider use Docker by default?
Thanks for your help!
Sorry @ParkvilleData, I mean the `runtime-attributes` in your cromwell.conf (eg: Cromwell containers tute | my slurm example conf), so it would now look something like:
```
runtime-attributes = """
String? docker
String head_directory = "/data/MGP"
String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
"""
```
Yep you're right, the Local template by default uses Docker, and for some reason overriding the `runtime-attributes` in your config breaks the `submit-docker` task (even if it doesn't get used) - though I'm struggling to find references in the docs, and I don't agree it should break.
Yep, that's exactly what I have
```
runtime-attributes = """
String? docker
String head_directory = "/data/MGP"
String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
"""
```
I couldn't find anything in the docs either
Hi,
I was able to get past it by adding the following to my runtime attributes.
Thanks for your help!
```
runtime-attributes = """
String? docker
String? docker_user
String head_directory = "/data/MGP"
String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
"""
```
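For anyone hitting the same thing, the surrounding Local stanza would then look roughly like this - a sketch only: the `runtime-attributes` and `submit` bodies come from the snippets in this thread, and the `backend`/`providers` nesting is Cromwell's standard ConfigBackend layout:

```hocon
backend {
  providers {
    Local {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        # declaring docker/docker_user as optional keeps the generated
        # submit-docker task parseable even when no task asks for Docker
        runtime-attributes = """
        String? docker
        String? docker_user
        String head_directory = "/data/MGP"
        String singularity_image = "/data/MGP/sing/metaGenPipe.simg"
        """
        submit = """
        singularity run -B ${head_directory}:${head_directory} ${singularity_image} /bin/bash ${script}
        """
      }
    }
  }
}
```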
Hi,
I have built a WDL workflow which works well with SLURM, but now I am trying to get it to run on a standalone server.
I have Slurm as my provider and have created one for Local:
```
Local {
    actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
    config {
```
Oddly, when running the workflow I get a submit-docker error, i.e. as per below. I have no idea why it's looking for Docker, as I'm not knowingly using it - I'm not using Docker in my runtime parameters.
I have been able to get standalone working on another workflow by passing a singularity container to each task command output, but I was wondering if there was a more elegant solution I could use, such as just changing to a pre-made provider. I have searched Google and through here but not found anything. I did find one issue here, but they seemed to want to use Docker, whereas I don't.
Thanks for the help!
```wdl
task submit {
    String job_id
    String job_name
    String cwd
    String out
    String err
    String script
    String job_shell

    command {
        singularity run -B ${head_directory}:${head_directory} ${singularity_image} /bin/bash ${script}
    }
}

task submit_docker {
    String job_id
    String job_name
    String cwd
    String out
    String err
    String script
    String job_shell
    String docker_cwd
    String docker_cid
    String docker_script
    String docker_out
    String docker_err

    command {
        # make sure there is no preexisting Docker CID file
        rm -f ${docker_cid}
        # run as in the original configuration without --rm flag (will remove later)
        docker run \
          --cidfile ${docker_cid} \
          -i \
          ${"--user " + docker_user} \
          --entrypoint ${job_shell} \
          -v ${cwd}:${docker_cwd}:delegated \
          ${docker} ${docker_script}
        # get the return code (working even if the container was detached)
        rc=$(docker wait `cat ${docker_cid}`)
        # remove the container after waiting
        docker rm `cat ${docker_cid}`
        # return exit code
        exit $rc
    }
}
```
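As an aside, the `cat ${docker_cid}` calls above are shell command substitution in backticks, which is why GitHub's markdown swallowed them in the un-fenced paste. The CID-file pattern can be sketched without Docker itself - here a temp file stands in for the `--cidfile` output, and the echoed ID stands in for a container ID:

```shell
#!/bin/sh
# Stand-in for `docker run --cidfile`: write an ID into a file.
cid_file="$(mktemp)"
echo "abc123" > "$cid_file"

# Stand-in for `docker wait $(cat cidfile)`: read the ID back out.
cid="$(cat "$cid_file")"

# Clean up, as submit_docker does with `docker rm`.
rm -f "$cid_file"
echo "$cid"
```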