Hi Everyone,
We recently updated Cromwell from version 51 to 82, changing the following line in our Dockerfile:
FROM broadinstitute/cromwell:51 --> FROM broadinstitute/cromwell:82
We then hit an issue with the parameter scriptBucketName in aws.conf, which appears to be newly introduced, so we modified aws.conf as follows:
backend {
  default = "AWSBATCH"
  providers {
    AWSBATCH {
      actor-factory = "cromwell.backend.impl.aws.AwsBatchBackendLifecycleActorFactory"
      config {
        concurrent-job-limit = 10000
        numSubmitAttempts = 6
        numCreateDefinitionAttempts = 6
        // Base bucket for workflow executions
        root = ${EXECUTION_BUCKET_ROOT_URL}
        // A reference to an auth defined in the `aws` stanza at the top. This auth is used to create
        // Jobs and manipulate auth JSONs.
        auth = "xxxxxx"
        default-runtime-attributes {
          queueArn: ${AWS_BATCH_QUEUE}
          scriptBucketName: "${SCRIPT_BUCKET_NAME}"
        }
        filesystems {
          s3 {
            // A reference to a potentially different auth for manipulating files via engine functions.
            auth = "default"
          }
        }
        # Emit a warning if jobs last longer than this amount of time. This might indicate that something got stuck in the cloud.
        slow-job-warning-time: 3 hours
      }
    }
  }
}
Q1. What is scriptBucketName? I know the documentation says it is where the scripts are stored/written by Cromwell.
For example, if our root bucket is s3://1234-bla-bla-executor/cromwell-execution, should scriptBucketName be "1234-bla-bla-executor"? I understand that we are giving the full path in root, but is that related to scriptBucketName or completely unrelated?
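For reference, our current working assumption (which we would like confirmed, since the docs don't spell it out) is that scriptBucketName is the bare bucket name, i.e. the root URL minus the "s3://" scheme and the key prefix. Using the root from our example:

```shell
# Assumption, not confirmed: scriptBucketName is just the bucket name from root.
root="s3://1234-bla-bla-executor/cromwell-execution"

no_scheme="${root#s3://}"              # drop the "s3://" scheme
script_bucket_name="${no_scheme%%/*}"  # drop everything after the first "/"

echo "$script_bucket_name"             # 1234-bla-bla-executor
```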
It looks like Cromwell is able to create the script and reconfigured-script.sh files in the specified S3 bucket, but it doesn't create or find executeSql-rc.txt, along with a number of other files that the workflow expects.
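To narrow this down we have been listing the execution root directly and filtering for return-code files (a diagnostic sketch; the bucket and prefix come from our root URL above, and the aws call assumes credentials with s3:ListBucket on the bucket):

```shell
# Diagnostic sketch for the missing executeSql-rc.txt: list what Cromwell
# actually wrote under the execution root and look for *-rc.txt files.
bucket="1234-bla-bla-executor"
prefix="cromwell-execution"
execution_root="s3://${bucket}/${prefix}"

echo "listing ${execution_root}/"
if command -v aws >/dev/null 2>&1; then
  # Requires credentials with s3:ListBucket on the bucket.
  aws s3 ls "${execution_root}/" --recursive | grep -- '-rc.txt'
fi
```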
Q2. Is there anything we need to change in the launch template for the AWS Batch backend? Currently this is our launch template:
runcmd:
  - cd /opt && wget $artifactRootUrl/aws-ebs-autoscale.tgz && tar -xzf aws-ebs-autoscale.tgz
  - sh /opt/ebs-autoscale/bin/init-ebs-autoscale.sh $scratchPath /dev/sdc 2>&1 > /var/log/init-ebs-autoscale.log
  - cd /opt && wget $artifactRootUrl/aws-ecs-additions.tgz && tar -xzf aws-ecs-additions.tgz
  - sed -i 's#quay.io/broadinstitute/cromwell-aws-proxy:latest#1234.dkr.ecr.xx-xxxxxxxx.amazonaws.com/cromwell-aws-proxy:latest#g' /opt/ecs-additions/ecs-additions-cromwell.sh
  - sed -i 's#elerch/amazon-ecs-agent:latest#1234.dkr.ecr.xxxxxxxx.amazonaws.com/cromwell-aws-ecs-agent:latest#g' /opt/ecs-additions/ecs-additions-cromwell.sh
  - sed -i 's#elerch/amazon-ecs-agent#1234.dkr.ecr.xx-xxxxxx.amazonaws.com/cromwell-aws-ecs-agent#g' /opt/ecs-additions/ecs-additions-cromwell.sh
  - sh /opt/ecs-additions/ecs-additions-cromwell.sh
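One thing we do to rule out a half-applied template is to check for leftover public image references after the sed substitutions (a sketch; check_images is our own helper, demonstrated here on a temp file rather than the real /opt/ecs-additions/ecs-additions-cromwell.sh):

```shell
# Sketch: verify the sed substitutions landed by checking for leftover
# references to the public quay.io / elerch images in a given file.
check_images() {
  if grep -qE 'quay\.io/broadinstitute/cromwell-aws-proxy|elerch/amazon-ecs-agent' "$1"; then
    echo "public images still referenced"
  else
    echo "only private images referenced"
  fi
}

# Demo on a temp file containing only our ECR image (placeholder account/region):
tmp=$(mktemp)
printf 'docker pull 1234.dkr.ecr.xx-xxxxxxxx.amazonaws.com/cromwell-aws-proxy:latest\n' > "$tmp"
check_images "$tmp"   # only private images referenced
rm -f "$tmp"
```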
Any help would be greatly appreciated!