ENCODE-DCC / atac-seq-pipeline

ENCODE ATAC-seq pipeline

CONDA Pipeline Error #112

Closed (lucapinello closed this issue 5 years ago)

lucapinello commented 5 years ago

Describe the bug

I am trying to use the conda pipeline. After installation I can start to run the provided example until I get this error:
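For context, INPUT and PIPELINE_METADATA in the command below are shell variables pointing at the pipeline input JSON and a metadata output path. Illustrative values (paths hypothetical) would be something like:

$ INPUT=examples/local/ENCSR356KRQ_subsampled.json    # input definition JSON for the subsampled example
$ PIPELINE_METADATA=metadata.json                     # file where Cromwell writes run metadata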

(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i ${INPUT} -m ${PIPELINE_METADATA}
[2019-04-10 17:29:49,64] [info] Running with database db.url = jdbc:hsqldb:mem:68ed60c6-0301-4cad-a9fe-e12e3e428d32;shutdown=false;hsqldb.tx=mvcc
[2019-04-10 17:30:00,15] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-10 17:30:00,16] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-10 17:30:00,24] [info] Running with database db.url = jdbc:hsqldb:mem:13d6b79d-43e9-45ff-9e25-4708c62fe60f;shutdown=false;hsqldb.tx=mvcc
[2019-04-10 17:30:00,50] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2019-04-10 17:30:00,52] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-10 17:30:00,53] [info] Using noop to send events.
[2019-04-10 17:30:00,74] [info] Slf4jLogger started
[2019-04-10 17:30:00,89] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-174f39c",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-10 17:30:00,92] [info] Metadata summary refreshing every 2 seconds.
[2019-04-10 17:30:00,96] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-10 17:30:00,96] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-10 17:30:00,97] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-10 17:30:01,35] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-10 17:30:01,36] [info] JES batch polling interval is 33333 milliseconds
[2019-04-10 17:30:01,36] [info] JES batch polling interval is 33333 milliseconds
[2019-04-10 17:30:01,36] [info] JES batch polling interval is 33333 milliseconds
[2019-04-10 17:30:01,36] [info] PAPIQueryManager Running with 3 workers
[2019-04-10 17:30:01,37] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-10 17:30:01,37] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-10 17:30:01,41] [info] Unspecified type (Unspecified version) workflow 5ba44d42-9ba2-427b-9fe6-d0e51ad0594d submitted
[2019-04-10 17:30:01,45] [info] SingleWorkflowRunnerActor: Workflow submitted 5ba44d42-9ba2-427b-9fe6-d0e51ad0594d
[2019-04-10 17:30:01,45] [info] 1 new workflows fetched
[2019-04-10 17:30:01,45] [info] WorkflowManagerActor Starting workflow 5ba44d42-9ba2-427b-9fe6-d0e51ad0594d
[2019-04-10 17:30:01,45] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-10 17:30:01,45] [info] WorkflowManagerActor Successfully started WorkflowActor-5ba44d42-9ba2-427b-9fe6-d0e51ad0594d
[2019-04-10 17:30:01,45] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-10 17:30:01,46] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-10 17:30:01,51] [info] MaterializeWorkflowDescriptorActor [5ba44d42]: Parsing workflow as WDL draft-2
[2019-04-10 17:33:54,51] [info] MaterializeWorkflowDescriptorActor [5ba44d42]: Call-to-Backend assignments: atac.macs2_ppr1 -> local, atac.count_signal_track_pooled -> local, atac.macs2_ppr2 -> local, atac.macs2 -> local, atac.idr_pr -> local, atac.reproducibility_idr -> local, atac.overlap_pr -> local, atac.macs2_pooled -> local, atac.idr_ppr -> local, atac.overlap -> local, atac.reproducibility_overlap -> local, atac.macs2_signal_track_pooled -> local, atac.filter -> local, atac.pool_ta_pr1 -> local, atac.pool_ta_pr2 -> local, atac.idr -> local, atac.bam2ta -> local, atac.xcor -> local, atac.macs2_pr1 -> local, atac.pool_ta -> local, atac.read_genome_tsv -> local, atac.qc_report -> local, atac.spr -> local, atac.macs2_pr2 -> local, atac.bowtie2 -> local, atac.macs2_signal_track -> local, atac.overlap_ppr -> local, atac.count_signal_track -> local, atac.ataqc -> local, atac.trim_adapter -> local
[2019-04-10 17:33:54,56] [error] Error parsing generated wdl:

java.lang.RuntimeException: Error parsing generated wdl:

    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
    at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:154)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:152)
    at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:145)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
    at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:145)
    at akka.actor.Actor.aroundReceive(Actor.scala:517)
    at akka.actor.Actor.aroundReceive$(Actor.scala:515)
    at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
    at akka.actor.ActorCell.invoke(ActorCell.scala:557)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: java.lang.NullPointerException: null
    at wdl.draft2.model.WdlNamespace$.apply(WdlNamespace.scala:196)
    at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:160)
    at scala.util.Try$.apply(Try.scala:209)
    at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:160)
    at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:156)
    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
    ... 27 common frames omitted

OS/Platform and dependencies

Attach error logs (for Cromwell users only): move to the working directory where you ran the pipeline. You should be able to find a directory named cromwell-executions/ which includes all outputs and logs for debugging.
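For reference, if that directory exists, a quick way to list task-level logs would be something like:

$ find cromwell-executions/ -name 'stderr' -o -name 'stdout'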

I don't have this folder.

leepc12 commented 5 years ago

Did you download the test dataset (and genome data) before running it?

nicolerg commented 5 years ago

@leepc12 this looks like a duplicate of #111. I am still getting the same error on SCG.

leepc12 commented 5 years ago

@nicolerg : Did you try with bash examples/scg/ENCSR356KRQ_subsampled_scg_singularity.sh or sbatch examples/scg/ENCSR356KRQ_subsampled_scg_singularity.sh?

-bash-4.2$ bash examples/scg/ENCSR356KRQ_subsampled_scg_singularity.sh
Picked up _JAVA_OPTIONS: -Xms16M -Xmx2G -XX:ParallelGCThreads=1
[2019-04-11 09:21:33,46] [info] Running with database db.url = jdbc:hsqldb:mem:ff14d08e-8258-4880-9ea3-70f6cc14daa1;shutdown=false;hsqldb.tx=mvcc
[2019-04-11 09:21:44,08] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-11 09:21:44,10] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-11 09:21:44,22] [info] Running with database db.url = jdbc:hsqldb:mem:97d4bfee-c426-4fb3-ba60-b2a8e2d2dce3;shutdown=false;hsqldb.tx=mvcc
[2019-04-11 09:21:44,60] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2019-04-11 09:21:44,61] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-11 09:21:44,62] [info] Using noop to send events.
[2019-04-11 09:21:44,94] [info] Slf4jLogger started
[2019-04-11 09:21:45,25] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-1ef4f54",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-11 09:21:45,30] [info] Metadata summary refreshing every 2 seconds.
[2019-04-11 09:21:45,36] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-11 09:21:45,36] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-11 09:21:45,36] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-11 09:21:46,98] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-11 09:21:47,00] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 09:21:47,00] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 09:21:47,00] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 09:21:47,00] [info] PAPIQueryManager Running with 3 workers
[2019-04-11 09:21:47,01] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-11 09:21:47,02] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-11 09:21:47,07] [info] Unspecified type (Unspecified version) workflow 13ee6456-dc07-47b2-84c0-52df2448b906 submitted
[2019-04-11 09:21:47,12] [info] SingleWorkflowRunnerActor: Workflow submitted 13ee6456-dc07-47b2-84c0-52df2448b906
[2019-04-11 09:21:47,13] [info] 1 new workflows fetched
[2019-04-11 09:21:47,13] [info] WorkflowManagerActor Starting workflow 13ee6456-dc07-47b2-84c0-52df2448b906
[2019-04-11 09:21:47,13] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-11 09:21:47,14] [info] WorkflowManagerActor Successfully started WorkflowActor-13ee6456-dc07-47b2-84c0-52df2448b906
[2019-04-11 09:21:47,14] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-11 09:21:47,15] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-11 09:21:47,22] [info] MaterializeWorkflowDescriptorActor [13ee6456]: Parsing workflow as WDL draft-2
[2019-04-11 09:23:46,87] [info] MaterializeWorkflowDescriptorActor [13ee6456]: Call-to-Backend assignments: atac.pool_ta -> singularity, atac.macs2_pr2 -> singularity, atac.macs2_ppr1 -> singularity, atac.idr_pr -> singularity, atac.idr_ppr -> singularity, atac.reproducibility_idr -> singularity, atac.overlap_ppr -> singularity, atac.spr -> singularity, atac.bowtie2 -> singularity, atac.count_signal_track_pooled -> singularity, atac.bam2ta -> singularity, atac.overlap -> singularity, atac.trim_adapter -> singularity, atac.xcor -> singularity, atac.pool_ta_pr1 -> singularity, atac.macs2_ppr2 -> singularity, atac.macs2_pooled -> singularity, atac.filter -> singularity, atac.macs2_signal_track_pooled -> singularity, atac.macs2_pr1 -> singularity, atac.count_signal_track -> singularity, atac.pool_ta_pr2 -> singularity, atac.read_genome_tsv -> singularity, atac.ataqc -> singularity, atac.overlap_pr -> singularity, atac.idr -> singularity, atac.qc_report -> singularity, atac.macs2 -> singularity, atac.macs2_signal_track -> singularity, atac.reproducibility_overlap -> singularity
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:46,99] [warn] singularity [13ee6456]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,00] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:47,01] [warn] singularity [13ee6456]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-11 09:23:49,40] [info] WorkflowExecutionActor-13ee6456-dc07-47b2-84c0-52df2448b906 [13ee6456]: Condition met: 'defined(genome_tsv)'. Running conditional section
[2019-04-11 09:23:51,48] [info] WorkflowExecutionActor-13ee6456-dc07-47b2-84c0-52df2448b906 [13ee6456]: Starting atac.read_genome_tsv
[2019-04-11 09:23:52,15] [warn] BackgroundConfigAsyncJobExecutionActor [13ee6456atac.read_genome_tsv:NA:1]: Unrecognized runtime attribute keys: disks, cpu, time, memory
[2019-04-11 09:23:52,46] [warn] Localization via hard link has failed: /home/leepc12/code/atac-seq-pipeline/cromwell-executions/atac/13ee6456-dc07-47b2-84c0-52df2448b906/call-read_genome_tsv/inputs/-1915540837/hg38_scg.tsv -> /reference/ENCODE/pipeline_genome_data/hg38_scg.tsv: Invalid cross-device link
[2019-04-11 09:23:53,17] [info] BackgroundConfigAsyncJobExecutionActor [13ee6456atac.read_genome_tsv:NA:1]: # create empty files for all entries
touch ref_fa bowtie2_idx_tar chrsz gensz blacklist
touch tss tss_enrich # for backward compatibility
touch dnase prom enh reg2map reg2map_bed roadmap_meta

python <<CODE
import os
with open("/home/leepc12/code/atac-seq-pipeline/cromwell-executions/atac/13ee6456-dc07-47b2-84c0-52df2448b906/call-read_genome_tsv/inputs/-1915540837/hg38_scg.tsv",'r') as fp:
        for line in fp:
                arr = line.strip('\n').split('\t')
                if arr:
                        key, val = arr
                        with open(key,'w') as fp2:
                                fp2.write(val)
CODE
[2019-04-11 09:23:53,29] [info] BackgroundConfigAsyncJobExecutionActor [13ee6456atac.read_genome_tsv:NA:1]: executing: SINGULARITY_BINDPATH=$(echo /home/leepc12/code/atac-seq-pipeline/cromwell-executions/atac/13ee6456-dc07-47b2-84c0-52df2448b906/call-read_genome_tsv | sed 's/cromwell-executions/\n/g' | head -n1)cromwell-executions,/reference/ENCODE,/scratch,/srv/gsfs0,$SINGULARITY_BINDPATH singularity exec --cleanenv --home /home/leepc12/code/atac-seq-pipeline/cromwell-executions/atac/13ee6456-dc07-47b2-84c0-52df2448b906/call-read_genome_tsv  /reference/ENCODE/pipeline_singularity_images/atac-seq-pipeline-v1.1.7.1.simg /bin/bash /home/leepc12/code/atac-seq-pipeline/cromwell-executions/atac/13ee6456-dc07-47b2-84c0-52df2448b906/call-read_genome_tsv/execution/script
[2019-04-11 09:23:55,41] [info] BackgroundConfigAsyncJobExecutionActor [13ee6456atac.read_genome_tsv:NA:1]: job id: 8813
[2019-04-11 09:23:55,41] [info] BackgroundConfigAsyncJobExecutionActor [13ee6456atac.read_genome_tsv:NA:1]: Status change from - to WaitingForReturnCodeFile

@lucapinello: What was your full command line to run it?

nicolerg commented 5 years ago

I am using conda, not singularity. I used the command sbatch ${srcdir}/examples/scg/ENCSR356KRQ_subsampled_scg_conda.sh. Does SCG only work with singularity now?

leepc12 commented 5 years ago

@nicolerg Please cd to ${srcdir} and try again. That example .sh must be run from the pipeline git directory. No, both Singularity and Conda work.
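That is, roughly (with srcdir being the root of your atac-seq-pipeline clone):

$ cd ${srcdir}    # the pipeline git directory
$ sbatch examples/scg/ENCSR356KRQ_subsampled_scg_conda.sh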

nicolerg commented 5 years ago

I edited the version of that script in my fork so that I can call it from any directory. Running from ${srcdir}, I still get the same error. It looks like the original error reported in this issue.

[2019-04-11 10:10:59,07] [info] Running with database db.url = jdbc:hsqldb:mem:964b940b-06ab-4203-aad1-123d43725297;shutdown=false;hsqldb.tx=mvcc
[2019-04-11 10:11:11,02] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-11 10:11:11,03] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-11 10:11:11,30] [info] Running with database db.url = jdbc:hsqldb:mem:3d1139ac-5560-4cdc-924d-7a4dc32ca2dc;shutdown=false;hsqldb.tx=mvcc
[2019-04-11 10:11:11,85] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2019-04-11 10:11:11,92] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-11 10:11:11,93] [info] Using noop to send events.
[2019-04-11 10:11:12,53] [info] Slf4jLogger started
[2019-04-11 10:11:12,86] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-10b87a8",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-11 10:11:13,02] [info] Metadata summary refreshing every 2 seconds.
[2019-04-11 10:11:13,10] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-11 10:11:13,14] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-11 10:11:13,16] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-11 10:11:14,50] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-11 10:11:14,59] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-11 10:11:14,60] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-11 10:11:14,63] [info] PAPIQueryManager Running with 3 workers
[2019-04-11 10:11:14,63] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 10:11:14,67] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 10:11:14,68] [info] JES batch polling interval is 33333 milliseconds
[2019-04-11 10:11:14,72] [info] Unspecified type (Unspecified version) workflow 0da2a136-d55b-40db-ba96-4ea36639fee8 submitted
[2019-04-11 10:11:14,86] [info] SingleWorkflowRunnerActor: Workflow submitted 0da2a136-d55b-40db-ba96-4ea36639fee8
[2019-04-11 10:11:14,87] [info] 1 new workflows fetched
[2019-04-11 10:11:14,87] [info] WorkflowManagerActor Starting workflow 0da2a136-d55b-40db-ba96-4ea36639fee8
[2019-04-11 10:11:14,87] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-11 10:11:14,87] [info] WorkflowManagerActor Successfully started WorkflowActor-0da2a136-d55b-40db-ba96-4ea36639fee8
[2019-04-11 10:11:14,87] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-11 10:11:14,88] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-11 10:11:15,01] [info] MaterializeWorkflowDescriptorActor [0da2a136]: Parsing workflow as WDL draft-2
[2019-04-11 10:14:04,36] [info] MaterializeWorkflowDescriptorActor [0da2a136]: Call-to-Backend assignments: atac.ataqc -> local, atac.macs2_ppr1 -> local, atac.macs2_signal_track -> local, atac.idr_ppr -> local, atac.spr -> local, atac.macs2_pr2 -> local, atac.pool_ta_pr2 -> local, atac.count_signal_track -> local, atac.macs2_pr1 -> local, atac.overlap_ppr -> local, atac.bowtie2 -> local, atac.overlap -> local, atac.idr -> local, atac.count_signal_track_pooled -> local, atac.pool_ta -> local, atac.xcor -> local, atac.trim_adapter -> local, atac.reproducibility_idr -> local, atac.macs2_signal_track_pooled -> local, atac.qc_report -> local, atac.macs2_ppr2 -> local, atac.read_genome_tsv -> local, atac.filter -> local, atac.bam2ta -> local, atac.overlap_pr -> local, atac.idr_pr -> local, atac.macs2 -> local, atac.reproducibility_overlap -> local, atac.pool_ta_pr1 -> local, atac.macs2_pooled -> local
[2019-04-11 10:14:04,49] [error] Error parsing generated wdl:

java.lang.RuntimeException: Error parsing generated wdl:

    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
    at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:154)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:152)
    at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:145)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
    at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:145)
    at akka.actor.Actor.aroundReceive(Actor.scala:517)
    at akka.actor.Actor.aroundReceive$(Actor.scala:515)
    at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
    at akka.actor.ActorCell.invoke(ActorCell.scala:557)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.NullPointerException: null
    at wdl.draft2.model.WdlNamespace$.apply(WdlNamespace.scala:196)
    at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:160)
    at scala.util.Try$.apply(Try.scala:209)
    at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:160)
    at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:156)
    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
    ... 27 common frames omitted

nicolerg commented 5 years ago

As a side note, when running this on SCG, the job doesn't fail; it just hangs until I kill it.

leepc12 commented 5 years ago

@nicolerg @lucapinello Please turn on debug mode by adding -DLOG_LEVEL=DEBUG to the command line between java and -jar and upload logs here.

java -DLOG_LEVEL=DEBUG -jar ...
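For example, a full debug invocation would look like this (same arguments as the original run; teeing the output to a file is optional):

$ java -DLOG_LEVEL=DEBUG -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i ${INPUT} -m ${PIPELINE_METADATA} 2>&1 | tee debug.log
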
nicolerg commented 5 years ago

stdout: slurm-8629304.out.txt
from cromwell-workflow-logs: workflow.f8aa179d-421f-4310-9b0c-6fd4dae4fa90.log

This is where it hangs.

nicolerg commented 5 years ago

I jumped the gun on uploading slurm-8629304.out.txt. The workflow log is in the same state, but slurm-8629304.out.txt has changed. I will upload again in a bit. It's hard to tell when it's "done" because I've had the example hang for ~8 hours without creating a cromwell-executions directory.

nicolerg commented 5 years ago

Actually, it looks like it's stuck in some kind of loop. It keeps outputting the following block, with different timestamps and different Execution of prepared statement took *µs durations. slurm-8629304.out.txt is now 8.8M, mostly this block repeated continuously.
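A rough way to confirm the loop is to count occurrences of one of the repeated lines in the growing log:

$ grep -c 'Execution of prepared statement took' slurm-8629304.out.txt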

[2019-04-11 11:19:13,59] [debug] #1: StartTransaction
[2019-04-11 11:19:13,59] [debug] #2: StreamingInvokerAction$HeadOptionAction [select "MAXIMUM_ID" from "SUMMARY_STATUS_ENTRY" where ("SUMMARY_TABLE_NAME" = ?) and ("SUMMARIZED_TABLE_NAME" = ?)]
[2019-04-11 11:19:13,59] [debug] Preparing statement: select "MAXIMUM_ID" from "SUMMARY_STATUS_ENTRY" where ("SUMMARY_TABLE_NAME" = ?) and ("SUMMARIZED_TABLE_NAME" = ?)
[2019-04-11 11:19:13,59] [debug] /---------------------------+----------------\
[2019-04-11 11:19:13,59] [debug] | 1                         | 2              |
[2019-04-11 11:19:13,59] [debug] | String                    | String         |
[2019-04-11 11:19:13,59] [debug] |---------------------------+----------------|
[2019-04-11 11:19:13,59] [debug] | WORKFLOW_METADATA_SUMM... | METADATA_ENTRY |
[2019-04-11 11:19:13,59] [debug] \---------------------------+----------------/
[2019-04-11 11:19:13,59] [debug] Execution of prepared statement took 640µs
[2019-04-11 11:19:13,59] [debug] /------------\
[2019-04-11 11:19:13,59] [debug] | 1          |
[2019-04-11 11:19:13,59] [debug] | MAXIMUM_ID |
[2019-04-11 11:19:13,59] [debug] |------------|
[2019-04-11 11:19:13,59] [debug] | 224        |
[2019-04-11 11:19:13,59] [debug] \------------/
[2019-04-11 11:19:13,59] [debug] #3: success (Some(224),224)
[2019-04-11 11:19:13,59] [debug] #4: result [select "WORKFLOW_EXECUTION_UUID", "CALL_FQN", "JOB_SCATTER_INDEX", "JOB_RETRY_ATTEMPT", "METADATA_KEY", "METADATA_VALUE", "METADATA_VALUE_TYPE", "METADATA_TIMESTAMP", "METADATA_JOURNAL_ID" from "METADATA_ENTRY" where ("METADATA_JOURNAL_ID" >= ?) and ((((((("METADATA_KEY" = ?) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" like ?)) or ("METADATA_KEY" = ?)) and ((("CALL_FQN" is null) and ("JOB_SCATTER_INDEX" is null)) and ("JOB_RETRY_ATTEMPT" is null)))]
[2019-04-11 11:19:13,59] [debug] Preparing statement: select "WORKFLOW_EXECUTION_UUID", "CALL_FQN", "JOB_SCATTER_INDEX", "JOB_RETRY_ATTEMPT", "METADATA_KEY", "METADATA_VALUE", "METADATA_VALUE_TYPE", "METADATA_TIMESTAMP", "METADATA_JOURNAL_ID" from "METADATA_ENTRY" where ("METADATA_JOURNAL_ID" >= ?) and ((((((("METADATA_KEY" = ?) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" like ?)) or ("METADATA_KEY" = ?)) and ((("CALL_FQN" is null) and ("JOB_SCATTER_INDEX" is null)) and ("JOB_RETRY_ATTEMPT" is null)))
[2019-04-11 11:19:13,59] [debug] /------+--------+--------+--------------+--------+---------+------------\
[2019-04-11 11:19:13,59] [debug] | 1    | 2      | 3      | 4            | 5      | 6       | 7          |
[2019-04-11 11:19:13,59] [debug] | Long | String | String | String       | String | String  | String     |
[2019-04-11 11:19:13,59] [debug] |------+--------+--------+--------------+--------+---------+------------|
[2019-04-11 11:19:13,59] [debug] | 225  | start  | end    | workflowName | status | labels% | submission |
[2019-04-11 11:19:13,59] [debug] \------+--------+--------+--------------+--------+---------+------------/
[2019-04-11 11:19:13,59] [debug] Execution of prepared statement took 42µs
[2019-04-11 11:19:13,59] [debug] /----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------\
[2019-04-11 11:19:13,59] [debug] | 1                    | 2        | 3                 | 4                 | 5            | 6              | 7                   | 8                  | 9                   |
[2019-04-11 11:19:13,59] [debug] | WORKFLOW_EXECUTIO... | CALL_FQN | JOB_SCATTER_INDEX | JOB_RETRY_ATTEMPT | METADATA_KEY | METADATA_VALUE | METADATA_VALUE_TYPE | METADATA_TIMESTAMP | METADATA_JOURNAL_ID |
[2019-04-11 11:19:13,59] [debug] |----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------|
[2019-04-11 11:19:13,59] [debug] \----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------/
[2019-04-11 11:19:13,59] [debug] #5: success (Vector(),Map(),Vector())
[2019-04-11 11:19:13,59] [debug] #6: success List()
[2019-04-11 11:19:13,59] [debug] #7: success List()
[2019-04-11 11:19:13,59] [debug] #8: success (List(),224)
[2019-04-11 11:19:13,59] [debug] #9: update [update "SUMMARY_STATUS_ENTRY" set "MAXIMUM_ID" = ? where ("SUMMARY_STATUS_ENTRY"."SUMMARY_TABLE_NAME" = ?) and ("SUMMARY_STATUS_ENTRY"."SUMMARIZED_TABLE_NAME" = ?)]
[2019-04-11 11:19:13,59] [debug] Preparing statement: update "SUMMARY_STATUS_ENTRY" set "MAXIMUM_ID" = ? where ("SUMMARY_STATUS_ENTRY"."SUMMARY_TABLE_NAME" = ?) and ("SUMMARY_STATUS_ENTRY"."SUMMARIZED_TABLE_NAME" = ?)
[2019-04-11 11:19:13,60] [debug] /------+---------------------------+----------------\
[2019-04-11 11:19:13,60] [debug] | 1    | 2                         | 3              |
[2019-04-11 11:19:13,60] [debug] | Long | String                    | String         |
[2019-04-11 11:19:13,60] [debug] |------+---------------------------+----------------|
[2019-04-11 11:19:13,60] [debug] | 224  | WORKFLOW_METADATA_SUMM... | METADATA_ENTRY |
[2019-04-11 11:19:13,60] [debug] \------+---------------------------+----------------/
[2019-04-11 11:19:13,60] [debug] Execution of prepared update took 154µs
[2019-04-11 11:19:13,60] [debug] #10: success ()
[2019-04-11 11:19:13,60] [debug] #11: success ()
[2019-04-11 11:19:13,60] [debug] #12: success 224
[2019-04-11 11:19:13,60] [debug] #13: Commit

lucapinello commented 5 years ago

Yes, I double-checked, and both the genome and example files are present:

(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ ls
atac.wdl  LICENSE  backends  metadata.json  conda  README.md  cromwell-34.jar  src
cromwell-workflow-logs  test  docker_image  test_genome_database  docs
test_genome_database_hg38_atac.tar  ENCSR356KRQ_fastq_subsampled.tar  test_sample
examples  utils  genome  workflow_opts

I am now testing it again with the debug option enabled, as you asked.

leepc12 commented 5 years ago

Also, please try with cromwell-38.jar.
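If you don't have that jar yet, Cromwell releases can be downloaded from the Broad's GitHub releases page, e.g. (standard release URL pattern):

$ wget https://github.com/broadinstitute/cromwell/releases/download/38/cromwell-38.jar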

nicolerg commented 5 years ago

If I run in debug mode with cromwell-38.jar, the same thing happens with a slightly different repeated text block:

[2019-04-11 17:37:32,45] [debug] #1: SynchronousDatabaseAction.Pin
[2019-04-11 17:37:32,45] [debug] #2: SetTransactionIsolation
[2019-04-11 17:37:32,45] [debug] #3: StartTransaction
[2019-04-11 17:37:32,45] [debug] #4: StreamingInvokerAction$HeadOptionAction [select "MAXIMUM_ID" from "SUMMARY_STATUS_ENTRY" where ("SUMMARY_TABLE_NAME" = ?) and ("SUMMARIZED_TABLE_NAME" = ?)]
[2019-04-11 17:37:32,45] [debug] Preparing statement: select "MAXIMUM_ID" from "SUMMARY_STATUS_ENTRY" where ("SUMMARY_TABLE_NAME" = ?) and ("SUMMARIZED_TABLE_NAME" = ?)
[2019-04-11 17:37:32,45] [debug] /---------------------------+----------------\
[2019-04-11 17:37:32,45] [debug] | 1                         | 2              |
[2019-04-11 17:37:32,45] [debug] | String                    | String         |
[2019-04-11 17:37:32,45] [debug] |---------------------------+----------------|
[2019-04-11 17:37:32,45] [debug] | WORKFLOW_METADATA_SUMM... | METADATA_ENTRY |
[2019-04-11 17:37:32,45] [debug] \---------------------------+----------------/
[2019-04-11 17:37:32,45] [debug] Execution of prepared statement took 85µs
[2019-04-11 17:37:32,45] [debug] /------------\
[2019-04-11 17:37:32,45] [debug] | 1          |
[2019-04-11 17:37:32,45] [debug] | MAXIMUM_ID |
[2019-04-11 17:37:32,45] [debug] |------------|
[2019-04-11 17:37:32,45] [debug] | 226        |
[2019-04-11 17:37:32,45] [debug] \------------/
[2019-04-11 17:37:32,45] [debug] #5: success (Some(226),226)
[2019-04-11 17:37:32,45] [debug] #6: result [select "WORKFLOW_EXECUTION_UUID", "CALL_FQN", "JOB_SCATTER_INDEX", "JOB_RETRY_ATTEMPT", "METADATA_KEY", "METADATA_VALUE", "METADATA_VALUE_TYPE", "METADATA_TIMESTAMP", "METADATA_JOURNAL_ID" from "METADATA_ENTRY" where ("METADATA_JOURNAL_ID" >= ?) and ((((((("METADATA_KEY" = ?) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" like ?)) or ("METADATA_KEY" = ?)) and ((("CALL_FQN" is null) and ("JOB_SCATTER_INDEX" is null)) and ("JOB_RETRY_ATTEMPT" is null)))]
[2019-04-11 17:37:32,45] [debug] Preparing statement: select "WORKFLOW_EXECUTION_UUID", "CALL_FQN", "JOB_SCATTER_INDEX", "JOB_RETRY_ATTEMPT", "METADATA_KEY", "METADATA_VALUE", "METADATA_VALUE_TYPE", "METADATA_TIMESTAMP", "METADATA_JOURNAL_ID" from "METADATA_ENTRY" where ("METADATA_JOURNAL_ID" >= ?) and ((((((("METADATA_KEY" = ?) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" = ?)) or ("METADATA_KEY" like ?)) or ("METADATA_KEY" = ?)) and ((("CALL_FQN" is null) and ("JOB_SCATTER_INDEX" is null)) and ("JOB_RETRY_ATTEMPT" is null)))
[2019-04-11 17:37:32,45] [debug] /------+--------+--------+--------------+--------+---------+------------\
[2019-04-11 17:37:32,45] [debug] | 1    | 2      | 3      | 4            | 5      | 6       | 7          |
[2019-04-11 17:37:32,45] [debug] | Long | String | String | String       | String | String  | String     |
[2019-04-11 17:37:32,45] [debug] |------+--------+--------+--------------+--------+---------+------------|
[2019-04-11 17:37:32,45] [debug] | 227  | start  | end    | workflowName | status | labels% | submission |
[2019-04-11 17:37:32,45] [debug] \------+--------+--------+--------------+--------+---------+------------/
[2019-04-11 17:37:32,45] [debug] Execution of prepared statement took 47µs
[2019-04-11 17:37:32,46] [debug] /----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------\
[2019-04-11 17:37:32,46] [debug] | 1                    | 2        | 3                 | 4                 | 5            | 6              | 7                   | 8                  | 9                   |
[2019-04-11 17:37:32,46] [debug] | WORKFLOW_EXECUTIO... | CALL_FQN | JOB_SCATTER_INDEX | JOB_RETRY_ATTEMPT | METADATA_KEY | METADATA_VALUE | METADATA_VALUE_TYPE | METADATA_TIMESTAMP | METADATA_JOURNAL_ID |
[2019-04-11 17:37:32,46] [debug] |----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------|
[2019-04-11 17:37:32,46] [debug] \----------------------+----------+-------------------+-------------------+--------------+----------------+---------------------+--------------------+---------------------/
[2019-04-11 17:37:32,46] [debug] #7: success (Vector(),Map(),Vector())
[2019-04-11 17:37:32,46] [debug] #8: success List()
[2019-04-11 17:37:32,46] [debug] #9: success List()
[2019-04-11 17:37:32,46] [debug] #10: success (List(),226)
[2019-04-11 17:37:32,46] [debug] #11: update [update "SUMMARY_STATUS_ENTRY" set "MAXIMUM_ID" = ? where ("SUMMARY_STATUS_ENTRY"."SUMMARY_TABLE_NAME" = ?) and ("SUMMARY_STATUS_ENTRY"."SUMMARIZED_TABLE_NAME" = ?)]
[2019-04-11 17:37:32,46] [debug] Preparing statement: update "SUMMARY_STATUS_ENTRY" set "MAXIMUM_ID" = ? where ("SUMMARY_STATUS_ENTRY"."SUMMARY_TABLE_NAME" = ?) and ("SUMMARY_STATUS_ENTRY"."SUMMARIZED_TABLE_NAME" = ?)
[2019-04-11 17:37:32,46] [debug] /------+---------------------------+----------------\
[2019-04-11 17:37:32,46] [debug] | 1    | 2                         | 3              |
[2019-04-11 17:37:32,46] [debug] | Long | String                    | String         |
[2019-04-11 17:37:32,46] [debug] |------+---------------------------+----------------|
[2019-04-11 17:37:32,46] [debug] | 226  | WORKFLOW_METADATA_SUMM... | METADATA_ENTRY |
[2019-04-11 17:37:32,46] [debug] \------+---------------------------+----------------/
[2019-04-11 17:37:32,46] [debug] Execution of prepared update took 208µs
[2019-04-11 17:37:32,46] [debug] #12: success ()
[2019-04-11 17:37:32,46] [debug] #13: success ()
[2019-04-11 17:37:32,46] [debug] #14: success 226
[2019-04-11 17:37:32,46] [debug] #15: Commit
[2019-04-11 17:37:32,46] [debug] #16: SetTransactionIsolation
[2019-04-11 17:37:32,46] [debug] #17: SynchronousDatabaseAction.Unpin

lucapinello commented 5 years ago

Just tried with version 38; now I get this error:

Parsing workflow as WDL draft-2
[2019-04-12 08:33:29,34] [info] MaterializeWorkflowDescriptorActor [04c6402b]: Call-to-Backend assignments: atac.macs2_pooled -> local, atac.pool_ta_pr1 -> local, atac.xcor -> local, atac.count_signal_track -> local, atac.idr_pr -> local, atac.read_genome_tsv -> local, atac.spr -> local, atac.overlap_pr -> local, atac.idr -> local, atac.trim_adapter -> local, atac.macs2_ppr2 -> local, atac.reproducibility_idr -> local, atac.filter -> local, atac.bowtie2 -> local, atac.idr_ppr -> local, atac.macs2_ppr1 -> local, atac.macs2_pr1 -> local, atac.pool_ta -> local, atac.macs2_signal_track_pooled -> local, atac.macs2_pr2 -> local, atac.ataqc -> local, atac.macs2_signal_track -> local, atac.macs2 -> local, atac.qc_report -> local, atac.count_signal_track_pooled -> local, atac.reproducibility_overlap -> local, atac.overlap_ppr -> local, atac.overlap -> local, atac.pool_ta_pr2 -> local, atac.bam2ta -> local
[2019-04-12 08:33:29,39] [error] Error parsing generated wdl:

java.lang.RuntimeException: Error parsing generated wdl:

    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
    at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:155)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:153)
    at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:146)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
    at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:146)
    at akka.actor.Actor.aroundReceive(Actor.scala:517)
    at akka.actor.Actor.aroundReceive$(Actor.scala:515)
    at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
    at akka.actor.ActorCell.invoke(ActorCell.scala:557)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: java.lang.IllegalArgumentException: Could not build AST from workflow source. Source is empty or contains only comments and whitespace.
    at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:166)
    at scala.util.Try$.apply(Try.scala:209)
    at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:159)
    at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:156)
    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
    ... 27 common frames omitted
[2019-04-12 08:33:31,40] [info] Not triggering log of token queue status. Effective log interval = None

nicolerg commented 5 years ago

@leepc12 I uninstalled and reinstalled all of the dependencies and tried re-running the SCG conda example with cromwell-38, and I'm still getting the same error.

leepc12 commented 5 years ago

What is your _JAVA_OPTIONS?

-bash-4.2$ java -version
Picked up _JAVA_OPTIONS: -Xms16M -Xmx2G -XX:ParallelGCThreads=1
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

Please increase the Java heap and try without the inputs and options JSONs.

$ export _JAVA_OPTIONS="-Xms16M -Xmx2G -XX:ParallelGCThreads=1"
$ java -jar ~/cromwell-34.jar run atac.wdl

You should end up with the following error message. I tried this on SCG.

-bash-4.2$ java -jar ~/cromwell-34.jar run atac.wdl

Picked up _JAVA_OPTIONS: -Xms16M -Xmx2G -XX:ParallelGCThreads=1
[2019-04-12 14:43:13,84] [info] Running with database db.url = jdbc:hsqldb:mem:284fd78b-9481-46e2-af3a-dd86e0c3e310;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 14:43:25,34] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-12 14:43:25,35] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-12 14:43:25,46] [info] Running with database db.url = jdbc:hsqldb:mem:deffa4ed-1849-43e3-81b7-e3284813c519;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 14:43:25,99] [info] Slf4jLogger started
[2019-04-12 14:43:26,41] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-c030d02",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-12 14:43:26,48] [info] Metadata summary refreshing every 2 seconds.
[2019-04-12 14:43:26,53] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 14:43:26,53] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 14:43:26,53] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-12 14:43:27,92] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-12 14:43:27,93] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-12 14:43:27,94] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-12 14:43:28,01] [info] Unspecified type (Unspecified version) workflow 37ede804-7bad-4888-80c3-6643ea2bd7e5 submitted
[2019-04-12 14:43:28,11] [info] SingleWorkflowRunnerActor: Workflow submitted 37ede804-7bad-4888-80c3-6643ea2bd7e5
[2019-04-12 14:43:28,12] [info] 1 new workflows fetched
[2019-04-12 14:43:28,12] [info] WorkflowManagerActor Starting workflow 37ede804-7bad-4888-80c3-6643ea2bd7e5
[2019-04-12 14:43:28,12] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-12 14:43:28,17] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-12 14:43:28,18] [info] Using noop to send events.
[2019-04-12 14:43:28,20] [info] WorkflowManagerActor Successfully started WorkflowActor-37ede804-7bad-4888-80c3-6643ea2bd7e5
[2019-04-12 14:43:28,20] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-12 14:43:28,20] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-12 14:43:28,20] [info] MaterializeWorkflowDescriptorActor [37ede804]: Parsing workflow as WDL draft-2
[2019-04-12 14:46:00,76] [info] MaterializeWorkflowDescriptorActor [37ede804]: Call-to-Backend assignments: atac.qc_report -> Local, atac.filter -> Local, atac.ataqc -> Local, atac.overlap_pr -> Local, atac.macs2_pr1 -> Local, atac.pool_ta -> Local, atac.reproducibility_idr -> Local, atac.idr_pr -> Local, atac.xcor -> Local, atac.macs2_signal_track_pooled -> Local, atac.idr -> Local, atac.macs2_pooled -> Local, atac.spr -> Local, atac.pool_ta_pr2 -> Local, atac.macs2_ppr1 -> Local, atac.pool_ta_pr1 -> Local, atac.bam2ta -> Local, atac.macs2_signal_track -> Local, atac.macs2_ppr2 -> Local, atac.count_signal_track_pooled -> Local, atac.macs2 -> Local, atac.overlap_ppr -> Local, atac.reproducibility_overlap -> Local, atac.macs2_pr2 -> Local, atac.idr_ppr -> Local, atac.overlap -> Local, atac.trim_adapter -> Local, atac.count_signal_track -> Local, atac.read_genome_tsv -> Local, atac.bowtie2 -> Local
[2019-04-12 14:46:00,87] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,88] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:00,89] [warn] Local [37ede804]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 14:46:03,21] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'defined(genome_tsv)'. Bypassing conditional section
[2019-04-12 14:46:15,55] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition met: 'has_input_of_reproducibility_overlap && !has_output_of_reproducibility_overlap && !align_only && !true_rep_only'. Running conditional section
[2019-04-12 14:46:15,55] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_all_inputs_of_pool_ta_pr1 && !has_output_of_pool_ta_pr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:15,55] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_all_inputs_of_pool_ta_pr1 && !has_output_of_pool_ta_pr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:15,55] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_all_inputs_of_pool_ta && !has_output_of_pool_ta && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:15,55] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_reproducibility_idr && !has_output_of_reproducibility_idr && !align_only && !true_rep_only && enable_idr'. Bypassing conditional section
[2019-04-12 14:46:18,62] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_macs2_pooled && !has_output_of_macs2_pooled && !align_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:18,63] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_count_signal_track_pooled && !has_output_of_count_signal_track_pooled && enable_count_signal_track && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:18,63] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_macs2_signal_track_pooled && !has_output_of_macs2_signal_track_pooled && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:18,63] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_macs2_ppr1 && !has_output_of_macs2_ppr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:18,63] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_macs2_ppr2 && !has_output_of_macs2_ppr2 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:21,69] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_overlap_ppr && !has_output_of_overlap_ppr && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:21,69] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Condition NOT met: 'has_input_of_idr_ppr && !has_output_of_idr_ppr && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 14:46:24,80] [info] WorkflowExecutionActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 [37ede804]: Starting atac.reproducibility_overlap
[2019-04-12 14:46:25,89] [error] WorkflowManagerActor Workflow 37ede804-7bad-4888-80c3-6643ea2bd7e5 failed (during ExecutingWorkflowState): cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1$$anon$1: Call input and runtime attributes evaluation failed for reproducibility_overlap:
Failed to lookup input value for required input chrsz
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:65)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:61)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34)
        at akka.actor.FSM.processEvent(FSM.scala:670)
        at akka.actor.FSM.processEvent$(FSM.scala:667)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.processEvent(JobPreparationActor.scala:39)
        at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:664)
        at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:658)
        at akka.actor.Actor.aroundReceive(Actor.scala:517)
        at akka.actor.Actor.aroundReceive$(Actor.scala:515)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.aroundReceive(JobPreparationActor.scala:39)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
        at akka.actor.ActorCell.invoke(ActorCell.scala:557)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2019-04-12 14:46:25,90] [info] WorkflowManagerActor WorkflowActor-37ede804-7bad-4888-80c3-6643ea2bd7e5 is in a terminal state: WorkflowFailedState
[2019-04-12 14:46:52,40] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
[2019-04-12 14:46:56,55] [info] Workflow polling stopped
[2019-04-12 14:46:56,57] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2019-04-12 14:46:56,58] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2019-04-12 14:46:56,58] [info] Aborting all running workflows.
[2019-04-12 14:46:56,58] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2019-04-12 14:46:56,58] [info] WorkflowStoreActor stopped
[2019-04-12 14:46:56,59] [info] JobExecutionTokenDispenser stopped
[2019-04-12 14:46:56,59] [info] WorkflowLogCopyRouter stopped
[2019-04-12 14:46:56,59] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2019-04-12 14:46:56,59] [info] WorkflowManagerActor All workflows finished
[2019-04-12 14:46:56,59] [info] WorkflowManagerActor stopped
[2019-04-12 14:46:56,59] [info] Connection pools shut down
[2019-04-12 14:46:56,59] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2019-04-12 14:46:56,59] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2019-04-12 14:46:56,59] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2019-04-12 14:46:56,59] [info] SubWorkflowStoreActor stopped
[2019-04-12 14:46:56,59] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2019-04-12 14:46:56,59] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2019-04-12 14:46:56,59] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2019-04-12 14:46:56,59] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2019-04-12 14:46:56,60] [info] JobStoreActor stopped
[2019-04-12 14:46:56,60] [info] CallCacheWriteActor stopped
[2019-04-12 14:46:56,60] [info] DockerHashActor stopped
[2019-04-12 14:46:56,60] [info] WriteMetadataActor Shutting down: 0 queued messages to process
[2019-04-12 14:46:56,60] [info] KvWriteActor Shutting down: 0 queued messages to process
[2019-04-12 14:46:56,60] [info] IoProxy stopped
[2019-04-12 14:46:56,60] [info] ServiceRegistryActor stopped
[2019-04-12 14:46:56,62] [info] Database closed
[2019-04-12 14:46:56,62] [info] Stream materializer shut down
Workflow 37ede804-7bad-4888-80c3-6643ea2bd7e5 transitioned to state Failed
[2019-04-12 14:46:56,67] [info] Automatic shutdown of the async connection
[2019-04-12 14:46:56,67] [info] Gracefully shutdown sentry threads.
[2019-04-12 14:46:56,67] [info] Shutdown finished.
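
This chrsz failure is the expected outcome here: with no inputs JSON, genome_tsv is undefined (note the Condition NOT met: 'defined(genome_tsv)' line above), so the genome TSV that supplies chrsz (the chromosome sizes file) is never read. Getting this far means the WDL itself parsed fine. For a real run, the inputs JSON must point atac.genome_tsv at a genome TSV; a minimal, illustrative sketch only (the path is a placeholder, and key names may differ by pipeline version; the JSONs under examples/ are the authoritative templates):

{
    "atac.pipeline_type" : "atac",
    "atac.genome_tsv" : "/path/to/genome/hg38.tsv"
}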
lucapinello commented 5 years ago

Thanks for helping!

This is what I see:

(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ java -version
openjdk version "11.0.1" 2018-10-16 LTS
OpenJDK Runtime Environment Zulu11.2+3 (build 11.0.1+13-LTS)
OpenJDK 64-Bit Server VM Zulu11.2+3 (build 11.0.1+13-LTS, mixed mode)
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ which java
/data/pinello/SHARED_SOFTWARE/anaconda3/envs/encode-atac-seq-pipeline/bin/java
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ export _JAVA_OPTIONS="-Xms16M -Xmx2G -XX:ParallelGCThreads=1"
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ java -jar ~/cromwell-34.jar run atac.wdl
Error: Invalid or corrupt jarfile /PHShome/lp698/cromwell-34.jar
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ cp cromwell-34.jar ~
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ java -jar ~/cromwell-34.jar run atac.wdl
Picked up _JAVA_OPTIONS: -Xms16M -Xmx2G -XX:ParallelGCThreads=1
[2019-04-12 18:11:28,08] [info] Running with database db.url = jdbc:hsqldb:mem:a72f6d0b-1227-4e64-805d-a4300ed3457d;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 18:11:39,08] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-12 18:11:39,09] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-12 18:11:39,18] [info] Running with database db.url = jdbc:hsqldb:mem:7e020073-c0fb-451b-b709-e4661e63e759;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 18:11:39,54] [info] Slf4jLogger started
[2019-04-12 18:11:39,81] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-bb0a3a7",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-12 18:11:39,85] [info] Metadata summary refreshing every 2 seconds.
[2019-04-12 18:11:39,87] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 18:11:39,87] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 18:11:39,87] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-12 18:11:40,66] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-12 18:11:40,67] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-12 18:11:40,67] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-12 18:11:40,71] [info] Unspecified type (Unspecified version) workflow 246e6d78-4d4a-4c47-b761-aa720ed485ad submitted
[2019-04-12 18:11:40,76] [info] SingleWorkflowRunnerActor: Workflow submitted 246e6d78-4d4a-4c47-b761-aa720ed485ad
[2019-04-12 18:11:40,76] [info] 1 new workflows fetched
[2019-04-12 18:11:40,76] [info] WorkflowManagerActor Starting workflow 246e6d78-4d4a-4c47-b761-aa720ed485ad
[2019-04-12 18:11:40,77] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-12 18:11:40,79] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-12 18:11:40,80] [info] Using noop to send events.
[2019-04-12 18:11:40,82] [info] WorkflowManagerActor Successfully started WorkflowActor-246e6d78-4d4a-4c47-b761-aa720ed485ad
[2019-04-12 18:11:40,82] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-12 18:11:40,82] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-12 18:11:40,82] [info] MaterializeWorkflowDescriptorActor [246e6d78]: Parsing workflow as WDL draft-2
[2019-04-12 18:13:20,78] [info] MaterializeWorkflowDescriptorActor [246e6d78]: Call-to-Backend assignments: atac.overlap_ppr -> Local, atac.count_signal_track -> Local, atac.macs2_pooled -> Local, atac.macs2 -> Local, atac.idr_ppr -> Local, atac.macs2_signal_track -> Local, atac.macs2_pr1 -> Local, atac.macs2_ppr1 -> Local, atac.read_genome_tsv -> Local, atac.count_signal_track_pooled -> Local, atac.qc_report -> Local, atac.macs2_signal_track_pooled -> Local, atac.spr -> Local, atac.pool_ta -> Local, atac.macs2_pr2 -> Local, atac.idr_pr -> Local, atac.overlap_pr -> Local, atac.overlap -> Local, atac.idr -> Local, atac.reproducibility_overlap -> Local, atac.filter -> Local, atac.bowtie2 -> Local, atac.macs2_ppr2 -> Local, atac.bam2ta -> Local, atac.trim_adapter -> Local, atac.reproducibility_idr -> Local, atac.xcor -> Local, atac.ataqc -> Local, atac.pool_ta_pr2 -> Local, atac.pool_ta_pr1 -> Local
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,86] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [preemptible, disks, cpu, time, memory] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,87] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:20,88] [warn] Local [246e6d78]: Key/s [cpu, memory, time, disks] is/are not supported by backend. Unsupported attributes will not be part of job executions.
[2019-04-12 18:13:23,05] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'defined(genome_tsv)'. Bypassing conditional section
[2019-04-12 18:13:35,30] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_reproducibility_idr && !has_output_of_reproducibility_idr && !align_only && !true_rep_only && enable_idr'. Bypassing conditional section
[2019-04-12 18:13:35,30] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition met: 'has_input_of_reproducibility_overlap && !has_output_of_reproducibility_overlap && !align_only && !true_rep_only'. Running conditional section
[2019-04-12 18:13:35,30] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_all_inputs_of_pool_ta_pr1 && !has_output_of_pool_ta_pr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:35,30] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_all_inputs_of_pool_ta_pr1 && !has_output_of_pool_ta_pr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:35,30] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_all_inputs_of_pool_ta && !has_output_of_pool_ta && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:38,36] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_macs2_signal_track_pooled && !has_output_of_macs2_signal_track_pooled && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:38,36] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_count_signal_track_pooled && !has_output_of_count_signal_track_pooled && enable_count_signal_track && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:38,36] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_macs2_ppr2 && !has_output_of_macs2_ppr2 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:38,36] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_macs2_ppr1 && !has_output_of_macs2_ppr1 && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:38,36] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_macs2_pooled && !has_output_of_macs2_pooled && !align_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:41,42] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_overlap_ppr && !has_output_of_overlap_ppr && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:41,42] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Condition NOT met: 'has_input_of_idr_ppr && !has_output_of_idr_ppr && !align_only && !true_rep_only && num_rep > 1'. Bypassing conditional section
[2019-04-12 18:13:44,51] [info] WorkflowExecutionActor-246e6d78-4d4a-4c47-b761-aa720ed485ad [246e6d78]: Starting atac.reproducibility_overlap
[2019-04-12 18:13:45,55] [error] WorkflowManagerActor Workflow 246e6d78-4d4a-4c47-b761-aa720ed485ad failed (during ExecutingWorkflowState): cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1$$anon$1: Call input and runtime attributes evaluation failed for reproducibility_overlap:
Failed to lookup input value for required input chrsz
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:65)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:61)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34)
        at akka.actor.FSM.processEvent(FSM.scala:670)
        at akka.actor.FSM.processEvent$(FSM.scala:667)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.processEvent(JobPreparationActor.scala:39)
        at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:664)
        at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:658)
        at akka.actor.Actor.aroundReceive(Actor.scala:517)
        at akka.actor.Actor.aroundReceive$(Actor.scala:515)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.aroundReceive(JobPreparationActor.scala:39)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
        at akka.actor.ActorCell.invoke(ActorCell.scala:557)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
        at akka.dispatch.Mailbox.run(Mailbox.scala:225)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2019-04-12 18:13:45,56] [info] WorkflowManagerActor WorkflowActor-246e6d78-4d4a-4c47-b761-aa720ed485ad is in a terminal state: WorkflowFailedState
[2019-04-12 18:14:02,46] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
[2019-04-12 18:14:04,89] [info] Workflow polling stopped
[2019-04-12 18:14:04,90] [info] Shutting down WorkflowStoreActor - Timeout = 5 seconds
[2019-04-12 18:14:04,90] [info] Shutting down WorkflowLogCopyRouter - Timeout = 5 seconds
[2019-04-12 18:14:04,90] [info] Shutting down JobExecutionTokenDispenser - Timeout = 5 seconds
[2019-04-12 18:14:04,91] [info] Aborting all running workflows.
[2019-04-12 18:14:04,91] [info] JobExecutionTokenDispenser stopped
[2019-04-12 18:14:04,91] [info] WorkflowStoreActor stopped
[2019-04-12 18:14:04,91] [info] WorkflowLogCopyRouter stopped
[2019-04-12 18:14:04,91] [info] Shutting down WorkflowManagerActor - Timeout = 3600 seconds
[2019-04-12 18:14:04,91] [info] WorkflowManagerActor All workflows finished
[2019-04-12 18:14:04,91] [info] WorkflowManagerActor stopped
[2019-04-12 18:14:04,91] [info] Connection pools shut down
[2019-04-12 18:14:04,91] [info] Shutting down SubWorkflowStoreActor - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] Shutting down JobStoreActor - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] Shutting down CallCacheWriteActor - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] SubWorkflowStoreActor stopped
[2019-04-12 18:14:04,91] [info] Shutting down ServiceRegistryActor - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] Shutting down DockerHashActor - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] Shutting down IoProxy - Timeout = 1800 seconds
[2019-04-12 18:14:04,91] [info] CallCacheWriteActor Shutting down: 0 queued messages to process
[2019-04-12 18:14:04,91] [info] JobStoreActor stopped
[2019-04-12 18:14:04,91] [info] CallCacheWriteActor stopped
[2019-04-12 18:14:04,91] [info] WriteMetadataActor Shutting down: 0 queued messages to process
[2019-04-12 18:14:04,91] [info] DockerHashActor stopped
[2019-04-12 18:14:04,91] [info] IoProxy stopped
[2019-04-12 18:14:04,91] [info] KvWriteActor Shutting down: 0 queued messages to process
[2019-04-12 18:14:04,91] [info] ServiceRegistryActor stopped
[2019-04-12 18:14:04,93] [info] Database closed
[2019-04-12 18:14:04,93] [info] Stream materializer shut down
Workflow 246e6d78-4d4a-4c47-b761-aa720ed485ad transitioned to state Failed
[2019-04-12 18:14:04,96] [info] Automatic shutdown of the async connection
[2019-04-12 18:14:04,96] [info] Gracefully shutdown sentry threads.
[2019-04-12 18:14:04,96] [info] Shutdown finished.

leepc12 commented 5 years ago

@lucapinello: That looks good. Now try with the test sample and see if that works.

$ export _JAVA_OPTIONS="-Xms16M -Xmx2G -XX:ParallelGCThreads=1"
$ java -jar ...
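With the subsampled test sample, the full command looks like the one used earlier in this thread, for example:

$ export _JAVA_OPTIONS="-Xms16M -Xmx2G -XX:ParallelGCThreads=1"
$ INPUT=examples/local/ENCSR356KRQ_subsampled.json
$ java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i ${INPUT}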
lucapinello commented 5 years ago

Unfortunately, adding the Java options didn't help:

(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ INPUT=examples/local/ENCSR356KRQ_subsampled.json
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ PIPELINE_METADATA=metadata.json
(encode-atac-seq-pipeline) [lp698@ml003 atac-seq-pipeline]$ java -jar -Dconfig.file=backends/backend.conf cromwell-34.jar run atac.wdl -i ${INPUT} -m ${PIPELINE_METADATA}
Picked up _JAVA_OPTIONS: -Xms16M -Xmx2G -XX:ParallelGCThreads=1
[2019-04-12 18:16:04,40] [info] Running with database db.url = jdbc:hsqldb:mem:91cd439a-fec1-4bb3-a008-1cd402188fa4;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 18:16:15,31] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2019-04-12 18:16:15,32] [info] [RenameWorkflowOptionsInMetadata] 100%
[2019-04-12 18:16:15,41] [info] Running with database db.url = jdbc:hsqldb:mem:f741779c-f514-4afd-944b-5dc56fa0f879;shutdown=false;hsqldb.tx=mvcc
[2019-04-12 18:16:15,66] [warn] This actor factory is deprecated. Please use cromwell.backend.google.pipelines.v1alpha2.PipelinesApiLifecycleActorFactory for PAPI v1 or cromwell.backend.google.pipelines.v2alpha1.PipelinesApiLifecycleActorFactory for PAPI v2
[2019-04-12 18:16:15,68] [warn] Couldn't find a suitable DSN, defaulting to a Noop one.
[2019-04-12 18:16:15,69] [info] Using noop to send events.
[2019-04-12 18:16:15,89] [info] Slf4jLogger started
[2019-04-12 18:16:16,05] [info] Workflow heartbeat configuration:
{
  "cromwellId" : "cromid-7b7aa87",
  "heartbeatInterval" : "2 minutes",
  "ttl" : "10 minutes",
  "writeBatchSize" : 10000,
  "writeThreshold" : 10000
}
[2019-04-12 18:16:16,09] [info] Metadata summary refreshing every 2 seconds.
[2019-04-12 18:16:16,12] [info] WriteMetadataActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 18:16:16,12] [info] KvWriteActor configured to flush with batch size 200 and process rate 5 seconds.
[2019-04-12 18:16:16,12] [info] CallCacheWriteActor configured to flush with batch size 100 and process rate 3 seconds.
[2019-04-12 18:16:16,58] [info] JobExecutionTokenDispenser - Distribution rate: 50 per 1 seconds.
[2019-04-12 18:16:16,59] [info] JES batch polling interval is 33333 milliseconds
[2019-04-12 18:16:16,59] [info] JES batch polling interval is 33333 milliseconds
[2019-04-12 18:16:16,59] [info] JES batch polling interval is 33333 milliseconds
[2019-04-12 18:16:16,60] [info] PAPIQueryManager Running with 3 workers
[2019-04-12 18:16:16,60] [info] SingleWorkflowRunnerActor: Version 34
[2019-04-12 18:16:16,60] [info] SingleWorkflowRunnerActor: Submitting workflow
[2019-04-12 18:16:16,64] [info] Unspecified type (Unspecified version) workflow f069f916-542d-47e3-be28-ace6b0d405d8 submitted
[2019-04-12 18:16:16,67] [info] SingleWorkflowRunnerActor: Workflow submitted f069f916-542d-47e3-be28-ace6b0d405d8
[2019-04-12 18:16:16,68] [info] 1 new workflows fetched
[2019-04-12 18:16:16,68] [info] WorkflowManagerActor Starting workflow f069f916-542d-47e3-be28-ace6b0d405d8
[2019-04-12 18:16:16,68] [warn] SingleWorkflowRunnerActor: received unexpected message: Done in state RunningSwraData
[2019-04-12 18:16:16,68] [info] WorkflowManagerActor Successfully started WorkflowActor-f069f916-542d-47e3-be28-ace6b0d405d8
[2019-04-12 18:16:16,68] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2019-04-12 18:16:16,69] [info] WorkflowStoreHeartbeatWriteActor configured to flush with batch size 10000 and process rate 2 minutes.
[2019-04-12 18:16:16,74] [info] MaterializeWorkflowDescriptorActor [f069f916]: Parsing workflow as WDL draft-2
[2019-04-12 18:18:44,88] [info] MaterializeWorkflowDescriptorActor [f069f916]: Call-to-Backend assignments: atac.reproducibility_overlap -> local, atac.idr_pr -> local, atac.macs2_signal_track_pooled -> local, atac.reproducibility_idr -> local, atac.pool_ta_pr2 -> local, atac.filter -> local, atac.spr -> local, atac.macs2_ppr1 -> local, atac.read_genome_tsv -> local, atac.macs2_pr1 -> local, atac.count_signal_track -> local, atac.count_signal_track_pooled -> local, atac.bam2ta -> local, atac.macs2_ppr2 -> local, atac.macs2_pr2 -> local, atac.pool_ta -> local, atac.idr -> local, atac.xcor -> local, atac.qc_report -> local, atac.ataqc -> local, atac.pool_ta_pr1 -> local, atac.overlap_pr -> local, atac.macs2 -> local, atac.overlap -> local, atac.bowtie2 -> local, atac.trim_adapter -> local, atac.macs2_pooled -> local, atac.idr_ppr -> local, atac.macs2_signal_track -> local, atac.overlap_ppr -> local
[2019-04-12 18:18:44,93] [error] Error parsing generated wdl:

java.lang.RuntimeException: Error parsing generated wdl:

    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:55)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace$lzycompute(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.configWdlNamespace(ConfigInitializationActor.scala:39)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations$lzycompute(ConfigInitializationActor.scala:42)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.declarationValidations(ConfigInitializationActor.scala:41)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder$lzycompute(ConfigInitializationActor.scala:53)
    at cromwell.backend.impl.sfs.config.ConfigInitializationActor.runtimeAttributesBuilder(ConfigInitializationActor.scala:52)
    at cromwell.backend.standard.StandardInitializationActor.coerceDefaultRuntimeAttributes(StandardInitializationActor.scala:82)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence(BackendWorkflowInitializationActor.scala:154)
    at cromwell.backend.BackendWorkflowInitializationActor.initSequence$(BackendWorkflowInitializationActor.scala:152)
    at cromwell.backend.standard.StandardInitializationActor.initSequence(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.$anonfun$applyOrElse$1(BackendWorkflowInitializationActor.scala:145)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond(BackendLifecycleActor.scala:44)
    at cromwell.backend.BackendLifecycleActor.performActionThenRespond$(BackendLifecycleActor.scala:40)
    at cromwell.backend.standard.StandardInitializationActor.performActionThenRespond(StandardInitializationActor.scala:44)
    at cromwell.backend.BackendWorkflowInitializationActor$$anonfun$receive$1.applyOrElse(BackendWorkflowInitializationActor.scala:145)
    at akka.actor.Actor.aroundReceive(Actor.scala:517)
    at akka.actor.Actor.aroundReceive$(Actor.scala:515)
    at cromwell.backend.standard.StandardInitializationActor.aroundReceive(StandardInitializationActor.scala:44)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
    at akka.actor.ActorCell.invoke(ActorCell.scala:557)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: java.lang.NullPointerException: null
    at wdl.draft2.model.WdlNamespace$.apply(WdlNamespace.scala:196)
    at wdl.draft2.model.WdlNamespace$.$anonfun$load$1(WdlNamespace.scala:160)
    at scala.util.Try$.apply(Try.scala:209)
    at wdl.draft2.model.WdlNamespace$.load(WdlNamespace.scala:160)
    at wdl.draft2.model.WdlNamespace$.loadUsingSource(WdlNamespace.scala:156)
    at cromwell.backend.impl.sfs.config.ConfigWdlNamespace.<init>(ConfigWdlNamespace.scala:53)
    ... 27 common frames omitted

leepc12 commented 5 years ago

Please check whether the following hotfix fixes this issue. It updates backends/backend.conf to remove the custom local backend from the backend file.

# apply hotfix
$ cd atac-seq-pipeline/
$ git pull
$ git checkout hotfix_v1.1.7.1
# re-run with updated backend.conf
$ java -jar -Dconfig.file=backends/backend.conf ...
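An optional sanity check after applying the hotfix (illustrative, not part of the official instructions):

$ git log --oneline -1    # should show the hotfix_v1.1.7.1 commit
# in the next run's log, expect assignments like "atac.bowtie2 -> Local"
# (Cromwell's built-in backend) instead of the custom "-> local" backend
# whose generated WDL failed to parse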
nicolerg commented 5 years ago

That worked, thank you!

lucapinello commented 5 years ago

It worked for me as well. Thanks! Now I was able to execute all the steps except this one:

[2019-04-13 08:06:21,72] [error] WorkflowManagerActor Workflow ad2e4c64-67ad-4c8e-91ec-0ac7f40679ec failed (during ExecutingWorkflowState): Job atac.ataqc:1:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details. Check the content of stderr for potential additional information: /data/pinello/SHARED_SOFTWARE/atac-seq-pipeline/cromwell-executions/atac/ad2e4c64-67ad-4c8e-91ec-0ac7f40679ec/call-ataqc/shard-1/execution/stderr.
Traceback (most recent call last):
  File "/data/pinello/SHARED_SOFTWARE/anaconda3/envs/encode-atac-seq-pipeline/bin/encode_ataqc.py", line 12, in <module>
    from run_ataqc import *
  File "/data/pinello/SHARED_SOFTWARE/anaconda3/envs/encode-atac-seq-pipeline/bin/run_ataqc.py", line 12, in <module>
    import pysam
ModuleNotFoundError: No module named 'pysam'

Job atac.ataqc:0:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details. Check the content of stderr for potential additional information: /data/pinello/SHARED_SOFTWARE/atac-seq-pipeline/cromwell-executions/atac/ad2e4c64-67ad-4c8e-91ec-0ac7f40679ec/call-ataqc/shard-0/execution/stderr.
Traceback (most recent call last):
  File "/data/pinello/SHARED_SOFTWARE/anaconda3/envs/encode-atac-seq-pipeline/bin/encode_ataqc.py", line 12, in <module>
    from run_ataqc import *
  File "/data/pinello/SHARED_SOFTWARE/anaconda3/envs/encode-atac-seq-pipeline/bin/run_ataqc.py", line 12, in <module>
    import pysam
ModuleNotFoundError: No module named 'pysam'

@nicolerg do you see the same error?

nicolerg commented 5 years ago

@lucapinello not this time, but that particular issue is related to #105 and #107. I don’t remember exactly how I fixed this, but I think it has to do with either PATH or PYTHONPATH. Make sure the version of Conda you installed for the ENCODE pipeline is in your PATH variable (echo $PATH) and not a preinstalled version of Conda. Another potential solution is export PYTHONPATH= before activating the Conda environment.
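
For example, a quick diagnostic with the environment activated (illustrative commands):

$ which python        # should point into .../envs/encode-atac-seq-pipeline/bin
$ echo "$PYTHONPATH"  # a stale value here can shadow the env's packages
$ python -c "import pysam; print(pysam.__file__)"   # should resolve inside the conda env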

leepc12 commented 5 years ago

Please add the following to your bash startup scripts (~/.bashrc) and re-login:

unset PYTHONPATH
leepc12 commented 5 years ago

Also add this:

export PYTHONNOUSERSITE=True
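
Together, the suggested addition to ~/.bashrc is just these two lines (they keep user- and system-level site-packages from shadowing the pipeline's conda environment):

# in ~/.bashrc
unset PYTHONPATH
export PYTHONNOUSERSITE=True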
lucapinello commented 5 years ago

Thank you so much! I finally fixed the problem.

The lines you proposed were not necessary for me.

This is what I did:

I opened my .bashrc and removed all the conda-related lines.

Then I executed

conda init bash

This added a few lines that correctly manage the PYTHONPATH and PATH variables when loading the environments.

Then I ran the provided conda uninstall and install scripts.

With the new version of conda, source deactivate no longer works, so in the scripts (conda/install_dependencies.sh, conda/uninstall_dependencies.sh, conda/update_conda_env.sh) I replaced that command with:

conda deactivate
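
For reference, the same substitution can be applied to all three scripts in one step (illustrative; sed -i.bak keeps .bak backups of the originals):

$ sed -i.bak 's/source deactivate/conda deactivate/g' \
    conda/install_dependencies.sh conda/uninstall_dependencies.sh conda/update_conda_env.sh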

Now it is working!

Thanks again for the fantastic pipeline and for taking the time to help me.