lukasjelonek opened this issue 6 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm seeing the same problem:
```groovy
process VariableTesting {
    input:
    val foo

    exec:
    if (foo[0] == "a") log.info "Found an A"
    def bar = foo.find { it == "d" }
}

workflow {
    inputs = Channel.from(["a", "b", "c"], ["d", "e", "f"])
    inputs | VariableTesting
}
```
```
N E X T F L O W ~ version 20.10.0
Launching `./main.nf` [infallible_solvay] - revision: 365bf7c54f
Script compilation error
- file : /path/to/main.nf
- cause: Variable `foo` already defined in the process scope @ line 28, column 15.

def bar = foo.find {
          ^

1 error
```
It works if I comment out one of the two `exec` lines, but if `foo` is used in both the `if` statement and in the `find`, the error occurs.
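A minimal sketch of the workaround described in the original report, applied to this example (my own adaptation, not verified): copy the input into a fresh local variable before any closure references it.

```groovy
process VariableTesting {
    input:
    val foo

    exec:
    // hypothetical workaround: bind the input to a local variable, so the
    // closure captures `letters` rather than the process input `foo`
    def letters = foo
    if (letters[0] == "a") log.info "Found an A"
    def bar = letters.find { it == "d" }
}
```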
I have the same problem. Here's my process which works as written:
```groovy
process prepare_spark_work_dir {
    container = "${params.spark_container_repo}/${params.spark_container_name}:${params.spark_container_version}"
    label 'small'

    input:
    val(spark_work_dir)
    val(terminate_name)

    output:
    val(spark_work_dir)

    script:
    def cluster_id = UUID.randomUUID()
    def cluster_work_dir = "${spark_work_dir}/${cluster_id}"
    def terminate_file_name = get_terminate_file_name(cluster_work_dir, terminate_name)
    def write_session_id = create_write_session_id_script(cluster_work_dir)
    log.debug "Cluster work directory: ${cluster_work_dir}"
    """
    if [[ ! -d "${cluster_work_dir}" ]] ; then
        mkdir -p "${cluster_work_dir}"
    else
        rm -f ${cluster_work_dir}/* || true
    fi
    ${write_session_id}
    """
}
```
When I add a debug statement at the top of the script, it breaks:
```groovy
process prepare_spark_work_dir {
    container = "${params.spark_container_repo}/${params.spark_container_name}:${params.spark_container_version}"
    label 'small'

    input:
    val(spark_work_dir)
    val(terminate_name)

    output:
    val(spark_work_dir)

    script:
    log.debug "Cluster work directory: ${spark_work_dir}"
    def cluster_id = UUID.randomUUID()
    def cluster_work_dir = "${spark_work_dir}/${cluster_id}"
    def terminate_file_name = get_terminate_file_name(cluster_work_dir, terminate_name)
    def write_session_id = create_write_session_id_script(cluster_work_dir)
    log.debug "Cluster work directory: ${cluster_work_dir}"
    """
    if [[ ! -d "${cluster_work_dir}" ]] ; then
        mkdir -p "${cluster_work_dir}"
    else
        rm -f ${cluster_work_dir}/* || true
    fi
    ${write_session_id}
    """
}
```
```
N E X T F L O W ~ version 21.04.1
Launching `./pipelines/n5_converter.nf` [soggy_lavoisier] - revision: 2e1927bac5
Module compilation error
- file : /groups/scicompsoft/home/rokickik/dev/expansion-microscopy-pipeline/pipelines/../workflows/../external-modules/spark/lib/./processes.nf
- cause: Variable `spark_work_dir` already defined in the process scope @ line 15, column 31.

def cluster_work_dir = "${spark_work_dir}/${cluster_id}"
                       ^

1 error
```
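Moving the new `log.debug` after the `def` declarations appears to avoid the error (a minimal sketch, assuming the trigger is referencing `spark_work_dir` before the `def` lines; not verified):

```groovy
script:
// hypothetical rearrangement: keep all `def` declarations first,
// then log, so the input is not referenced before the locals exist
def cluster_id = UUID.randomUUID()
def cluster_work_dir = "${spark_work_dir}/${cluster_id}"
log.debug "Cluster work directory: ${spark_work_dir}"
```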
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Issue still present on v21.12.1-edge and v22.03.0-edge.
I just encountered this with 22.10.0. Changing the order of my `if` and `def` statements under `script:` so that the `if` statements occurred after the `def` statements fixed it for me.
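A sketch of that reordering in a stripped-down process (hypothetical names; `x` stands in for any process input):

```groovy
process reorder_example {
    input:
    val x

    script:
    // declaring the local before any `if` that references the input
    // avoids the "already defined in the process scope" error
    def out_name = "${x}.out"
    if (x == "a") log.info "Found an A"
    """
    touch ${out_name}
    """
}
```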
I have this same issue when using the `params` object: `print(params)` produces the following error:

```
- file : /mnt/ssd_disk/git/wes/pipeline_test.nf
- cause: Variable `params` already defined in the process scope @ line 21, column 44.
```
## Bug report

### Expected behavior and actual behavior
Using some Groovy code before the script string or in the `exec:` block causes a compilation error when a variable is also used inside a closure. The example below contains a workaround: introduce a new variable and assign it the value that shall be used.
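A minimal sketch of that workaround with hypothetical names (the input is rebound to a local before the closure uses it):

```groovy
process closure_example {
    input:
    val sample

    exec:
    // workaround: assign the input to a new variable and reference
    // that inside the closure instead of the process input itself
    def sample_local = sample
    def hit = ["a", "b", "c"].find { it == sample_local }
    log.info "hit: ${hit}"
}
```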
### Program output

### Steps to reproduce the problem

### Environment