populse / populse_mia

Multiparametric Image Analysis

[general need] Save pipeline/history information in the database #263

Closed LStruber closed 2 years ago

LStruber commented 2 years ago

In order to keep, and be able to show, the history of the files present in the main database (see #262), we need to save pipeline information in the database (the brick tag alone is not enough to retrieve all the information of the history, cf. discussion in #236).

In order to do that, we need to create (at least) a new collection, let's say COLLECTION_HISTORY, that will contain this information. This collection will minimally need to contain:

The main database will then have, for each file entry, a history field containing the uuid of the corresponding history in COLLECTION_HISTORY.

This ticket is open to discuss the structure of COLLECTION_HISTORY and the links between the different collections of MIA. For example, if we introduce a link between COLLECTION_HISTORY and COLLECTION_CURRENT (via the uuid of the history stored in a field), do we still need to keep the link between COLLECTION_CURRENT and COLLECTION_BRICK, which consists of a field containing the uuid of the last brick that created the file? Or do we see COLLECTION_HISTORY as an intermediate collection that stores all links between the history and the corresponding bricks (a field containing all the uuids of the bricks involved in the pipeline)? In this last case, I think it may be tricky to determine which brick truly created the file (and we could fall back into resolving conflicts as discussed in #236), so we would need to keep both links in the main database (for each file, storing the uuid of the history and the uuid of the last brick in different fields).
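
The "keep both links" option can be sketched with plain Python dicts standing in for the collections (all field and collection names below are hypothetical, for illustration only; this is not the actual Mia schema):

```python
# Minimal sketch of the "keep both links" option, with dicts in place of
# database collections. Names are illustrative, not the actual Mia schema.

# Each file entry keeps BOTH links: the last brick that wrote it, and the
# history (whole pipeline run) it belongs to.
collection_current = {
    "/db/data/image.nii": {
        "brick_uuid": "brick-42",   # uuid of the last brick that created the file
        "history_uuid": "hist-7",   # uuid of the whole pipeline run
    },
}

# The history document stores the list of all bricks involved in the run.
collection_history = {
    "hist-7": {"brick_uuids": ["brick-41", "brick-42"]},
}

collection_brick = {
    "brick-41": {"name": "realign_1"},
    "brick-42": {"name": "smooth_1"},
}

# With both links, the creating brick is unambiguous (no conflict resolution
# needed), while the history link still gives access to the whole pipeline.
entry = collection_current["/db/data/image.nii"]
last_brick = collection_brick[entry["brick_uuid"]]["name"]
all_bricks = collection_history[entry["history_uuid"]]["brick_uuids"]
```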

servoz commented 2 years ago

I think it will be easier to keep both the link between COLLECTION_CURRENT and COLLECTION_BRICK (last brick) and the link between COLLECTION_HISTORY (whole pipeline) and COLLECTION_CURRENT, no?

sapetnioc commented 2 years ago

Each history document is related to the execution of a Process instance. We decided to save the process instance, which means saving the Process as well as its parameters. We also need to be able to know which nodes have actually been executed. Finally, there is a technical need to be able to quickly find the links between an input (or output) file name and a history document identifier. Due to limitations of populse_db v2, I suggest starting by storing these file names in two lists (even if this is redundant with the process parameters field). This will allow the following query to find all the uuids of the history documents that used the file /somewhere/database/image.nii:

db.filter_documents('history', fields=('uuid',), as_list=True, filter='"/somewhere/database/image.nii" IN input_files')

Therefore, here is a suggestion of evolution for history collection fields definition:

LStruber commented 2 years ago

If we have all the uuids of the brick documents contained in the process, isn't that enough? Each brick document contains:

LStruber commented 2 years ago

@servoz I feel like it indeed would be easier

sapetnioc commented 2 years ago

I do not really know the existing tables, therefore I am thinking as if the history collection was self-contained. But since we decided to allow changing the internal structure in a future release, I think that for now the priority is on usage (such as the GUI) rather than on internal storage and API. Therefore, if you already have the necessary information somewhere, why not use it.

However, the next big step will be to support history management in Capsul v3. For that step we may choose to move (or duplicate) some information from the "bricks" collection to another collection such as history, or another one if it makes sense. I do not like the idea of using the brick collection name in Capsul because there is no brick in that project. We have executable, process, pipeline; if we want to use brick, we will have to define how it differs from the other concepts. Of course this is just my point of view today, and it will have to be discussed. But for now the focus for history management is on Mia.

LStruber commented 2 years ago

Ok. So for now I will create a new history collection that contains:

I think that will be enough to display the history as we discussed, and we'll think about how to merge/change both collections for v3 afterwards.

servoz commented 2 years ago

It is true that we have chosen, in Mia, different names for quite similar concepts.

Tell me if I am wrong, but I think that in Capsul a process is an atomic element of computation (a unit operation, for example a Smooth), a class, and a node is the graphical representation of this process (a graphical box representing this process).

In Mia we chose the term brick for this graphical representation, without making any conceptual difference with what is called a process in Capsul.

In Mia, the brick is as much the graphical representation (the graphical box) as the calculation machine (the class). In fact it's simple (as always, I put myself in the shoes of the average user, for whom what's in the brick/node (the process) is of little importance; what he wants above all is to have a brick in the library that allows smoothing, and to be able to put it in a pipeline), and as always, what is simple is reductive.

I have no problem with adopting the terms nodes and processes in Mia instead of the general term brick.

We could rename the Bricks tag/collection to Nodes or Processes (here we find the duality again: for the average user, the graphical box, the node, is what matters most; for an informed user, the box is only a box, and what matters most is the process...).

I don't know what to choose between Nodes and Processes; one is a simplistic and graphical vision, the other is more precise, not graphical... so I let you choose!

sapetnioc commented 2 years ago

If browsing the graph that links file names and history documents is often used, I would recommend finding a way to quickly associate a file name with a history document uuid. In populse_db v2, there is no database join nor index on elements that are not directly stored in a field (i.e. elements in a list or dict). For instance, given a file name, finding the history uuid that created it will require iterating over all bricks, finding the ones using the file, selecting the last one (how?), and then iterating over all history documents to find the one matching on brick_list. This will be highly inefficient when the history becomes big. And this will have to be done to follow each link in the graph.
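
The cost difference can be illustrated without populse_db, by contrasting the full scan described above with a direct path-to-history mapping (all names below are hypothetical, for illustration only):

```python
# Hypothetical illustration of why an index matters: finding the history that
# produced a file by scanning all bricks vs. a direct path -> history mapping.

bricks = {f"brick-{i}": {"outputs": [f"/db/file_{i}.nii"]} for i in range(1000)}
histories = {"hist-3": {"brick_list": ["brick-500"]}}

def find_history_by_scan(path):
    """O(bricks x histories): iterate over everything, as described above."""
    for brick_uuid, brick in bricks.items():
        if path in brick["outputs"]:
            for hist_uuid, hist in histories.items():
                if brick_uuid in hist["brick_list"]:
                    return hist_uuid
    return None

# A dedicated association indexed on the path gives direct access instead.
path_index = {"/db/file_500.nii": "hist-3"}

def find_history_by_index(path):
    """O(1) dictionary lookup, the equivalent of an indexed field."""
    return path_index.get(path)
```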

My suggestion would be to create one or two collections putting together two items:

If we choose only one collection, it would be necessary to add a boolean to indicate whether the path is used as an input or an output. Otherwise, we could also create two collections, one for inputs and one for outputs.

In populse_db v2, a primary key can only be a single field but, at least for input files, neither path nor history_uuid is unique. Therefore, we will probably need to add a primary key in v2 even though we do not need it (this will not be the case in v3).
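
The single-collection layout can be sketched as plain rows (field names hypothetical): it just adds an is_output boolean, plus a synthetic primary key since neither path nor history_uuid is unique:

```python
import uuid

# Hypothetical single-collection layout: one row per (path, history) link,
# with a boolean telling whether the path is an input or an output.
# Neither 'path' nor 'history_uuid' is unique, hence a synthetic primary key.
links = [
    {"uuid": str(uuid.uuid4()), "path": "/db/in.nii",
     "history_uuid": "hist-7", "is_output": False},
    {"uuid": str(uuid.uuid4()), "path": "/db/out.nii",
     "history_uuid": "hist-7", "is_output": True},
]

# Finding the history that *created* a file = filtering on is_output.
creators = [row["history_uuid"] for row in links
            if row["path"] == "/db/out.nii" and row["is_output"]]

# The two-collection variant would simply split 'links' into an 'inputs'
# and an 'outputs' list, dropping the boolean.
```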

servoz commented 2 years ago

If we choose only one collection, it would be necessary to add a boolean to indicate whether the path is used as an input or an output. Otherwise, we could also create two collections, one for inputs and one for outputs.

I understood your explanation, but I don't know how to choose between a collection with a boolean and two collections. For efficiency and speed, what do you recommend?

LStruber commented 2 years ago

For instance, given a file name, finding the history uuid that created it will require iterating over all bricks, finding the ones using the file, selecting the last one (how?), and then iterating over all history documents to find the one matching on brick_list.

Of course we do not want to iterate over all bricks to find the history uuid. To be clear, for now each document in COLLECTION_CURRENT has a BRICK_TAG which contains the uuid of the last brick that created the file (empty for raw data).

What I'm currently doing is creating a COLLECTION_HISTORY that contains the pipeline xml string and the list of all brick uuids used in the pipeline. I'm also adding to COLLECTION_CURRENT a HISTORY_TAG that contains the uuid of the corresponding entry in COLLECTION_HISTORY. With that, for each document we can directly access the history that created it (without looping on brick tags) and then retrieve all bricks contained in the pipeline.
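
The access path just described amounts to two direct lookups, sketched here with dicts standing in for the collections (tag and field names are illustrative, not the actual schema):

```python
# Sketch of the described access path: document -> HISTORY_TAG -> history
# document -> bricks + serialized pipeline. Names are illustrative only.

COLLECTION_CURRENT = {
    "/db/derived/s_image.nii": {"HISTORY_TAG": "hist-7"},
}
COLLECTION_HISTORY = {
    "hist-7": {
        "pipeline_xml": "<pipeline capsul_xml='2.0'></pipeline>",  # placeholder
        "brick_uuids": ["brick-41", "brick-42"],
    },
}

# Two direct lookups, no iteration over bricks:
history_uuid = COLLECTION_CURRENT["/db/derived/s_image.nii"]["HISTORY_TAG"]
history = COLLECTION_HISTORY[history_uuid]
```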

servoz commented 2 years ago

What I'm currently doing is creating a COLLECTION_HISTORY that contains the pipeline xml string and the list of all brick uuids used in the pipeline. I'm also adding to COLLECTION_CURRENT a HISTORY_TAG that contains the uuid of the corresponding entry in COLLECTION_HISTORY. With that, for each document we can directly access the history that created it (without looping on brick tags) and then retrieve all bricks contained in the pipeline.

Ok, so we only need one collection (COLLECTION_HISTORY, in place of COLLECTION_BRICK) and a tag in COLLECTION_CURRENT (the HISTORY_TAG), right?

LStruber commented 2 years ago

I kept COLLECTION_BRICK just to quickly retrieve the last brick that created a file, but it is indeed questionable.

LStruber commented 2 years ago

In order to (de)serialize a pipeline into an xml string, I suppose I'll have to use the functions in capsul.pipeline.xml. However, these functions seem to (de)serialize pipelines directly into files and do not allow retrieving the string. Should I split these functions to make it possible to retrieve the string without saving it into a file? Or is there another way?

denisri commented 2 years ago

We'll modify them to work on file-like objects as well. I'll do that.

denisri commented 2 years ago

You should now be able to do:

from capsul.pipeline import pipeline_tools
import io

buffer = io.BytesIO()
pipeline_tools.save_pipeline(pipeline, buffer, format='xml')
string = buffer.getvalue()

Note that the xml format (based on xml.etree) seems to only work with byte buffers (BytesIO) while the python formats allow unicode buffers (StringIO), at least in Python 3.
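
This behaviour can be reproduced with plain xml.etree, independently of capsul: the default serialization writes bytes (so a BytesIO is needed), while encoding="unicode" writes str and therefore works with a StringIO:

```python
import io
import xml.etree.ElementTree as ET

tree = ET.ElementTree(ET.Element("pipeline", name="demo"))

# Default (byte) serialization: needs a binary buffer.
bbuf = io.BytesIO()
tree.write(bbuf)
assert isinstance(bbuf.getvalue(), bytes)

# encoding="unicode" makes ElementTree write str, so StringIO works too.
sbuf = io.StringIO()
tree.write(sbuf, encoding="unicode")
assert isinstance(sbuf.getvalue(), str)
```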

LStruber commented 2 years ago

Thank you. And how can I deserialize a pipeline from the string?

LStruber commented 2 years ago

When I try to serialize a pipeline with pipeline_tools.save_pipeline, I get the following error:

Traceback:
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1303, in initialize
    self.test_init = self.init_pipeline()
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1622, in init_pipeline
    pipeline_tools.save_pipeline(pipeline, buffer, format='xml')
  File "/casa/host/src/capsul/master/capsul/pipeline/pipeline_tools.py", line 1237, in save_pipeline
    writer(pipeline, file)
  File "/casa/host/src/capsul/master/capsul/pipeline/xml.py", line 574, in save_xml_pipeline
    gui_node = _write_nodes_positions(pipeline, root)
  File "/casa/host/src/capsul/master/capsul/pipeline/xml.py", line 526, in _write_nodes_positions
    node_pos.set('x', six.text_type(pos[0]))
TypeError: 'QPointF' object does not support indexing

Here is the code I use during initialization:

buffer = io.BytesIO()
pipeline_tools.save_pipeline(pipeline, buffer, format='xml')
pipeline_xml = buffer.getvalue()

denisri commented 2 years ago

Thank you. And how can I deserialize a pipeline from the string?

I'm also modifying get_process_instance(), I'll tell you.

denisri commented 2 years ago

When I try to serialize a pipeline with pipeline_tools.save_pipeline, I get the following error:

Traceback:
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1303, in initialize
    self.test_init = self.init_pipeline()
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1622, in init_pipeline
    pipeline_tools.save_pipeline(pipeline, buffer, format='xml')
  File "/casa/host/src/capsul/master/capsul/pipeline/pipeline_tools.py", line 1237, in save_pipeline
    writer(pipeline, file)
  File "/casa/host/src/capsul/master/capsul/pipeline/xml.py", line 574, in save_xml_pipeline
    gui_node = _write_nodes_positions(pipeline, root)
  File "/casa/host/src/capsul/master/capsul/pipeline/xml.py", line 526, in _write_nodes_positions
    node_pos.set('x', six.text_type(pos[0]))
TypeError: 'QPointF' object does not support indexing

Here is the code I use during initialization:

buffer = io.BytesIO()
pipeline_tools.save_pipeline(pipeline, buffer, format='xml')
pipeline_xml = buffer.getvalue()

Ah crap, it's an unexpected type in the XML save code. A bug. I need to look at it.

denisri commented 2 years ago

Can you try again, please?

LStruber commented 2 years ago

It's better: no more error in the conversion. However, I can't put the result into the database as a FIELD_TYPE_STRING, and FIELD_TYPE_BYTES does not exist. I'll work on fixing that:

Error during initialisation of the "NoName" pipeline ...!
Traceback:
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1303, in initialize
    self.test_init = self.init_pipeline()
  File "/home/lucas/populse_dev/populse_mia/python/populse_mia/user_interface/pipeline_manager/pipeline_manager_tab.py", line 1626, in init_pipeline
    {HISTORY_PIPELINE: pipeline_xml})
  File "/casa/host/src/populse/populse_db/master/python/populse_db/database.py", line 572, in set_values
    raise ValueError("The value {0} is invalid for the type {1}".format(value, field_row.field_type))
ValueError: The value b'<pipeline capsul_xml="2.0" name="CustomPipeline"><doc /><process module="mia_processes.bricks.preprocess.spm.spatial_preprocessing.Smooth" name="smooth_1"><set name="in_files" value="[\'/home/lucas/populse_dev/data/projects/data_test_263/data/raw_data/alej170316_test24042018-IRMFonct_+perfusion-2016-03-17083444-4-T13DSENSE-T1TFE-000425_000.nii\']" /><set name="fwhm" value="[6.0, 6.0, 6.0]" /><set name="data_type" value="0" /><set name="implicit_masking" value="False" /><set name="out_prefix" value="\'s\'" /><set name="smoothed_files" value="\'/home/lucas/populse_dev/data/projects/data_test_263/data/derived_data/salej170316_test24042018-IRMFonct_+perfusion-2016-03-17083444-4-T13DSENSE-T1TFE-000425_000.nii\'" /><set name="output_directory" value="\'/home/lucas/populse_dev/data/projects/data_test_263/data/derived_data\'" /><set name="use_mcr" value="True" /><set name="paths" value="[\'/home/lucas/apps/MATLAB/MATLAB_Runtime/v95/toolbox/spm12\']" /><set name="matlab_cmd" value="\'/home/lucas/apps/MATLAB/MATLAB_Runtime/v95/toolbox/spm12/run_spm12.sh /home/lucas/apps/MATLAB/MATLAB_Runtime/v95 script\'" /><set name="mfile" value="True" /><set name="spm_script_file" value="\'/home/lucas/populse_dev/data/projects/data_test_263/scripts/pyscript_smooth_5557ca0a-6f3f-4016-82ca-2cbe2f4251aa.m\'" /></process><link dest="smooth_1.in_files" source="in_files" /><link dest="smoothed_files" source="smooth_1.smoothed_files" /><gui><position name="smooth_1" x="&lt;built-in method x of QPointF object at 0x7f3e14e9f828&gt;" y="&lt;built-in method y of QPointF object at 0x7f3e14e9f828&gt;" /><position name="inputs" x="&lt;built-in method x of QPointF object at 0x7f3e14e9f9e8&gt;" y="&lt;built-in method y of QPointF object at 0x7f3e14e9f9e8&gt;" /><position name="outputs" x="&lt;built-in method x of QPointF object at 0x7f3e14e9f898&gt;" y="&lt;built-in method y of QPointF object at 0x7f3e14e9f898&gt;" /></gui></pipeline>' is invalid for the type string

LStruber commented 2 years ago

I made it work by decoding the bytes array into a utf-8 string:

self.project.session.set_values(
                COLLECTION_HISTORY, history_id,
                {HISTORY_PIPELINE: pipeline_xml.decode("utf-8")})

but as it is necessary to specify the encoding (which may change?), I'm not sure this is a good idea. What do you think?

sapetnioc commented 2 years ago

It would be better to directly get a str/unicode rather than bytes when converting the pipeline to XML. Doesn't save_pipeline() support using an io.StringIO instead of an io.BytesIO? If not, we could add a save_pipeline_string method that would do the encoding.

But since we plan to go to 3.0, where pipelines are serialized in JSON (and JSON is always UTF-8), it is really safe to assume that the encoding will not change before the code that saves the pipelines does.

sapetnioc commented 2 years ago

@servoz for the question about one or two collections, I do not clearly foresee the impact of the decision on performance. Therefore, I propose to choose the simplest solution: one collection. When creating this collection's fields, do not forget to add an index on path and uuid (using create_field('path', 'string', index=True)); this will speed up graph browsing.
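
Since populse_db v2 is backed by an SQLite database, the effect of such an index can be illustrated with the stdlib sqlite3 module (table and column names below are hypothetical, for illustration only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE history_links (uuid TEXT, path TEXT, history_uuid TEXT)")
# The rough equivalent of create_field('path', 'string', index=True): an
# index on the lookup column turns graph-browsing queries into index scans.
con.execute("CREATE INDEX idx_path ON history_links (path)")
con.execute("CREATE INDEX idx_uuid ON history_links (uuid)")
con.execute("INSERT INTO history_links VALUES ('u1', '/db/out.nii', 'hist-7')")

# EXPLAIN QUERY PLAN confirms the index is used for path lookups.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT history_uuid FROM history_links WHERE path = ?",
    ("/db/out.nii",)).fetchall()
assert any("idx_path" in str(row[-1]) for row in plan)
```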

LStruber commented 2 years ago

@sapetnioc I think that @denisri answered your question yesterday:

It would be better to directly get a str/unicode rather than bytes when converting the pipeline to XML. Doesn't save_pipeline() support using an io.StringIO instead of an io.BytesIO? If not, we could add a save_pipeline_string method that would do the encoding.

Note that the xml format (based on xml.etree) seems to only work with byte buffers (BytesIO) while the python formats allow unicode buffers (StringIO), at least in Python 3.

denisri commented 2 years ago

Yes, that's it: xml.etree doesn't seem to accept string/unicode buffers. But as @LStruber said, it's just a matter of decoding the bytes buffer into a unicode string. We can consider doing it inside the functions that load/save XML buffers; it's probably possible.

denisri commented 2 years ago

I have added internal conversions so that it's possible to write a pipeline as XML into a StringIO, and read it from a unicode string.

servoz commented 2 years ago

@servoz for the question about one or two collections, I do not clearly foresee the impact of the decision on performance. Therefore, I propose to choose the simplest solution: one collection. When creating this collection's fields, do not forget to add an index on path and uuid (using create_field('path', 'string', index=True)); this will speed up graph browsing.

OK !

LStruber commented 2 years ago

@denisri, when calling create_xml_pipeline(module, name, xml_file) with an xml_string, I'm not sure how to specify the module parameter (and also, what does it stand for?). Could you tell me a bit about it?

LStruber commented 2 years ago

Note that if I set module to None, create_xml_pipeline fails with the following error:

File "/home/lucas/projects/populse/populse_mia/python/populse_mia/user_interface/pop_ups.py", line 4967, in __init__
    pipeline = create_xml_pipeline(None, None, pipeline_xml)
  File "/home/lucas/projects/populse/capsul/capsul/pipeline/xml.py", line 251, in create_xml_pipeline
    x = float(gui_child.get('x'))
ValueError: could not convert string to float: '<built-in method x of QPointF object at 0x7fe2ee2d8270>'

I'm not sure if it is related or not

denisri commented 2 years ago

As far as I remember, the module is the name of the module where the pipeline class will be inserted in.

The ValueError seems to be something else (QPointF.x instead of QPointF.x() I guess) - I'll fix that.

denisri commented 2 years ago

Can you try again, please?

LStruber commented 2 years ago

It changed the error but not the line:

File "/home/lucas/projects/populse/capsul/capsul/pipeline/xml.py", line 251, in create_xml_pipeline
    x = float(gui_child.get('x')())
TypeError: 'str' object is not callable

denisri commented 2 years ago

I'll try it better but it will have to wait for an hour or so...

LStruber commented 2 years ago

No problem, tell me when !

As far as I remember, the module is the name of the module where the pipeline class will be inserted in.

What do you mean by "inserted in"? For example, if I create the pipeline from the xml_string in the pop_ups.py module, and then add it to a PipelineDeveloperView as follows: self.pipeline_view = PipelineDeveloperView(pipeline, allow_open_controller=True) - which would be the module name?

denisri commented 2 years ago

It changed the error but not the line:

File "/home/lucas/projects/populse/capsul/capsul/pipeline/xml.py", line 251, in create_xml_pipeline
    x = float(gui_child.get('x')())
TypeError: 'str' object is not callable

Oh OK, the problem was not in the reading part but in the writing part; I misunderstood the quotes - it was the string "<built-in method x of QPointF object ...>" that got converted to float, not the method... I tried to fix it again (but didn't test it). So you'll have to write the pipeline again.

denisri commented 2 years ago

No problem, tell me when !

As far as I remember, the module is the name of the module where the pipeline class will be inserted in.

What do you mean by "inserted in"? For example, if I create the pipeline from the xml_string in the pop_ups.py module, and then add it to a PipelineDeveloperView as follows: self.pipeline_view = PipelineDeveloperView(pipeline, allow_open_controller=True) - which would be the module name?

Actually, we normally don't use the function create_xml_pipeline() directly; it is called from within get_process_instance():

engine = capsul_engine()
pipeline = engine.get_process_instance(xml_string)

and None seems valid for the module param, since it is called this way from get_process_instance.

LStruber commented 2 years ago

Thanks

and None seems valid for the module param, since it is called this way from get_process_instance.

I did not find where it is used with a None value for the module param - I found None only for the name param - but OK.

LStruber commented 2 years ago

It is now working: I can deserialize the pipeline from the xml_string. However, after adding it to a PipelineDeveloperView, I cannot open the node_controller:

Traceback (most recent call last):
  File "/home/lucas/projects/populse/capsul/capsul/qt_gui/widgets/pipeline_developer_view.py", line 3128, in openProcessController
    ce = ProcessCompletionEngine.get_completion_engine(process)
  File "/home/lucas/projects/populse/capsul/capsul/attributes/completion_engine.py", line 649, in get_completion_engine
    if 'capsul.engine.module.attributes' in engine._loaded_modules:
ReferenceError: weakly-referenced object no longer exists

denisri commented 2 years ago

Ow, that's "funny"... Can I have an example of your XML pipeline ?

LStruber commented 2 years ago

<pipeline capsul_xml="2.0" name="CustomPipeline"><doc /><process module="mia_processes.bricks.preprocess.spm.spatial_preprocessing.Smooth" name="smooth_1"><set name="in_files" value="['/home/lucas/projects/mia_projects/data_cevestoc_1/data/raw_data/alej170316_test24042018-IRMFonct_+perfusion-2016-03-17083444-0-T13DSENSE-T1TFE-000425_000.nii']" /><set name="fwhm" value="[6.0, 6.0, 6.0]" /><set name="data_type" value="0" /><set name="implicit_masking" value="False" /><set name="out_prefix" value="'s'" /><set name="smoothed_files" value="'/home/lucas/projects/mia_projects/data_cevestoc_1/data/derived_data/salej170316_test24042018-IRMFonct_+perfusion-2016-03-17083444-0-T13DSENSE-T1TFE-000425_000.nii'" /><set name="output_directory" value="'/home/lucas/projects/mia_projects/data_cevestoc_1/data/derived_data'" /><set name="use_mcr" value="True" /><set name="paths" value="['/APPS/MATLAB/MATLAB_Runtime/v95/toolbox/spm12']" /><set name="matlab_cmd" value="'/APPS/MATLAB/MATLAB_Runtime/v95/toolbox/spm12/run_spm12.sh /APPS/MATLAB/MATLAB_Runtime/v95 script'" /><set name="mfile" value="True" /><set name="spm_script_file" value="'/home/lucas/projects/mia_projects/data_cevestoc_1/scripts/pyscript_smooth_cdd1e4eb-fd2b-4f23-805e-49d1d4653eba.m'" /></process><link source="in_files" dest="smooth_1.in_files" /><link source="smooth_1.smoothed_files" dest="smoothed_files" /><gui><position name="smooth_1" x="90.0" y="-119.0" /><position name="inputs" x="-167.6095716389599" y="-1.0" /><position name="outputs" x="499.4246679185573" y="-119.0" /></gui></pipeline>

denisri commented 2 years ago

When does the error occur? I can't trigger the problem. I can even load it in populse_mia (load a pipeline, then copy/paste your xml string in the pipeline line), and see the pipeline boxes, edit values, etc. I haven't tried to actually run it, but if I understand correctly, the problem occurs earlier?

servoz commented 2 years ago

Since (I think) this commit, Mia crashes if we click on the Bricks tag.

Ex.

Traceback (most recent call last):
  File "/data/Git_Projects/populse_mia/python/populse_mia/user_interface/data_browser/data_browser.py", line 2167, in show_brick_history
    self.brick_history_popup = PopUpShowHistory(
  File "/data/Git_Projects/populse_mia/python/populse_mia/user_interface/pop_ups.py", line 4967, in __init__
    pipeline = engine.get_process_instance(pipeline_xml)
  File "/data/Git_Projects/capsul/capsul/engine/__init__.py", line 341, in get_process_instance
    instance = self.study_config.get_process_instance(process_or_id,
  File "/data/Git_Projects/capsul/capsul/study_config/study_config.py", line 718, in get_process_instance
    return get_process_instance(process_or_id, study_config=self,
  File "/data/Git_Projects/capsul/capsul/study_config/process_instance.py", line 163, in get_process_instance
    return _get_process_instance(process_or_id, study_config=study_config,
  File "/data/Git_Projects/capsul/capsul/study_config/process_instance.py", line 402, in _get_process_instance
    result = create_xml_pipeline(module_name, None, process_or_id)()
  File "/data/Git_Projects/capsul/capsul/pipeline/xml.py", line 251, in create_xml_pipeline
    x = float(gui_child.get('x')())
TypeError: 'str' object is not callable

I'm working out the current unit test issues for this branch, please don't add to it :-))))))))))))) !

denisri commented 2 years ago

Yes it's already fixed (see earlier)

servoz commented 2 years ago

Oh yes !!! Indeed, the problem has been solved for capsul, but rather comes from this morning's commit in populse_mia ...

LStruber commented 2 years ago

Did you recreate your pipeline (by actually running it in the pipeline_manager_tab) or did you open a former one? The writing of the xml string was wrong but has now been fixed. It's working on my side.

servoz commented 2 years ago

Oh OK! The old databases are no longer valid (this successive obsolescence is going too fast for me!). I have just pushed a new project_8 into the resources for the unit tests, and test_brick_history() is working again. Thank you.

On the other hand, taking the example above, if we click on the smooth_1 button, it seems to me that the delay before displaying the history is much longer than before (I hope it does not depend on the size of the pipeline, because here it is only a single Smooth process!). And if we click the button again, it goes much faster, but we see this exception on stdout:

Exception ignored in: <function PipelineDeveloperView.__del__ at 0x7fe6bf05eb80>
Traceback (most recent call last):
  File "/data/Git_Projects/capsul/capsul/qt_gui/widgets/pipeline_developer_view.py", line 2699, in __del__
    self.release_pipeline(delete=True)
  File "/data/Git_Projects/capsul/capsul/qt_gui/widgets/pipeline_developer_view.py", line 2817, in release_pipeline
    self.setScene(None)
RuntimeError: wrapped C/C++ object of type PipelineDeveloperView has been deleted

LStruber commented 2 years ago

I also get this exception; I need to work on it. It resembles an error we observed a few months ago, in a ticket about renaming iterated processes (if my memory serves)... But the delay is imperceptible on my side... Let's see with bigger pipelines.

servoz commented 2 years ago

Ok, I just checked, and I observe this slowness only on the host; when I am in casa_distro I don't observe it. It may be related to this old ticket!? In a nutshell: Nipype uses etelemetry to do internal things (checking for available updates, etc.). We normally disable this in Mia's main.py (os.environ['NO_ET'] = "1"). I have always suspected a difference between casa_distro and the host here, but without any certainty, and I never took the time to look into it because it didn't seem to be a priority until now... maybe we'll have to look at this ticket one day!

LStruber commented 2 years ago

When does the error occur? I can't trigger the problem. I can even load it in populse_mia (load a pipeline, then copy/paste your xml string in the pipeline line), and see the pipeline boxes, edit values, etc. I haven't tried to actually run it, but if I understand correctly, the problem occurs earlier?

The problem does not occur when building the pipeline (there, the node controller can be accessed), but only when we show it from the history... I'll investigate.