Closed LStruber closed 2 years ago
Note that you can relaunch the first pipeline, even after the error occurred.
I'm looking at it. Completion seems to fail for the second pipeline. I don't know why it's different and why it fails (yet), but I can reproduce the problem.
OK, I think it's fixed: it did not actually depend on the first/second tab in the pipeline manager, but on whether you had clicked on the node using the left button before exporting its parameters ;) Let me explain:

Exporting parameters triggers completion on the `Process` instance. Once instantiated, the completion engine is cached and will be reused later. Clicking on a node, however, creates a completion engine for the `Node` instance, not the `Process` instance (the `Node` here contains a `Process` but is not one). If the completion engine has already been cached it will be reused (with the `Process` registered with it); otherwise it will be created (but with the `Node` registered with it). It should work using either the `Node` or the `Process`, but although the completion engine was supposed to handle `Nodes`, it actually expected a `Process`, and thus failed when instantiated using the `Node`. This explains the problem.
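The caching logic described above can be sketched schematically. None of the class or function names below are Capsul's actual API; they are illustrative stand-ins that only mirror the described behaviour (a `Node` wrapping a `Process`, and an engine cache that must normalize both to the same `Process`):

```python
# Illustrative sketch of the bug described above (hypothetical names,
# not Capsul's real API).

class Process:
    """A runnable process whose parameters can be completed."""
    def __init__(self, name):
        self.name = name

class Node:
    """A pipeline node: it *contains* a Process but is not one."""
    def __init__(self, process):
        self.process = process

_engine_cache = {}

class CompletionEngine:
    def __init__(self, owner):
        # The buggy version implicitly assumed `owner` was a Process and
        # failed when given a Node; the fix is to unwrap the Node to its
        # underlying Process.
        self.process = owner.process if isinstance(owner, Node) else owner

    def complete(self):
        # Works for both entry points because we normalized above.
        return "completed " + self.process.name

def get_completion_engine(owner):
    """Return a cached engine if one exists, else create and cache it."""
    key = owner.process if isinstance(owner, Node) else owner
    if key not in _engine_cache:
        _engine_cache[key] = CompletionEngine(owner)
    return _engine_cache[key]
```

With this normalization, clicking on the node first (engine created from the `Node`) and then exporting parameters (engine requested for the `Process`) yields the same cached engine, instead of an engine that only understands one of the two.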
It has been fixed now. @LStruber can you check and report from your side, please?
It's indeed working now. I close the ticket.
Minimum steps to reproduce:
It fails during initialization with the following message:
Tested on a fresh bv_maker, with populse_mia on master and mia_processes on master, both up-to-date
NOTE: If you run the brick alone (without exporting plugs), it works!
By investigating a bit, I identified the problem: in the `init_pipeline()` function of `pipeline_manage_tab.py`, the workflow returned by the `workflow_from_pipeline` function (of Capsul) should contain the output file (in the example case, `smoothed_files`). This is the case at the initialization of the first brick, but not the second time (see `self.workflow.jobs[1].param_dict['smoothed_files']` after creating the workflow in both cases). This makes me think that the issue could come from Capsul... @denisri would you have an idea why "sometimes" the output file is not present in the workflow `param_dict`?
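For clarity, here is a minimal sketch of the check I ran, using mocked objects: a `Job` with a `param_dict`, and a helper that reports which jobs are missing an expected output parameter. The `Job` class and file paths are hypothetical stand-ins; the real jobs come from Capsul's `workflow_from_pipeline()`:

```python
# Hypothetical sketch of the param_dict check described above.
# `Job` here is a mock, not Capsul's real job class.

class Job:
    def __init__(self, name, param_dict):
        self.name = name
        self.param_dict = param_dict

def missing_output(jobs, param):
    """Return the names of jobs whose param_dict lacks `param`."""
    return [j.name for j in jobs if param not in j.param_dict]

# First initialization: the output is present in every job.
ok_jobs = [Job("smooth_1", {"smoothed_files": "/tmp/s1.nii"}),
           Job("smooth_2", {"smoothed_files": "/tmp/s2.nii"})]

# Second initialization: the output has vanished from one job's
# param_dict, reproducing the symptom reported above.
bad_jobs = [Job("smooth_1", {"smoothed_files": "/tmp/s1.nii"}),
            Job("smooth_2", {})]
```

Running this check on the real `self.workflow.jobs` after each initialization is what showed the output file present the first time and absent the second.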