sofa-framework / sofa

Real-time multi-physics simulation with an emphasis on medical simulation.
https://www.sofa-framework.org
GNU Lesser General Public License v2.1

[PSDE] Derive input type from parent value #629

Closed marques-bruno closed 5 years ago

marques-bruno commented 6 years ago

Hi,

I love @sescaida's PythonScriptDataEngine; I find it extremely useful when you don't want to implement a whole component just to perform very basic actions. However, I find it cumbersome to have to manually specify the type of the input/output datafields. While I have no idea (yet) how to fix this for the component's outputs, I implemented a small method that tries to derive the input type from the linked value. We could also add scalar types to this method, to be able to pass scalar parameters directly from python. Also, I think it would make more sense to directly use the output of

datafield->getValueTypeInfo()->name()

instead of an arbitrary char when specifying the datafield type explicitly in the PSDE factory.
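
To illustrate what this aims at from the scripting side, here is a hedged sketch reusing only the PSDE calls that appear later in this thread (addNewInput, addNewOutput, findData); the engine name is made up, and the possibility to simply omit datatype when the input is linked is an assumption about the resulting API, not a description of it.

import Sofa

class CountEngine(Sofa.PythonScriptDataEngine):
    def __init__(self, node, link):
        # before: the input type had to be spelled out by hand, e.g.
        #   self.addNewInput('points', datatype='vector<Vec3d>', value=link)
        # with the derivation, the type comes from the parent data behind 'link'
        self.addNewInput('points', value=link)
        # outputs still need an explicit type ('i' stands for int in this thread)
        self.addNewOutput('count', datatype='i')

    def update(self):
        self.findData('count').value = len(self.findData('points').value)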

If you have any idea how I could do something similar for output fields, I'm taking any advice :)



damienmarchal commented 6 years ago

Hello Bruno,

Glad you like the feature.

On my side I made: https://github.com/SofaDefrost/sofa/commit/b6819568e465ca6d0d5258133e0bd6c89a983f4a

Here is what I was considering for this component:

EDIT: manually specifying the type must be preserved and must override the inferred type, because I see situations in which we might want to coerce the type (e.g. using [f] instead of the inferred [d], or using [RGBColor] instead of [Vec3]).

EDIT2: I forgot to say that using datafield->getValueTypeInfo()->name() is a perfectly sound alternative to using the python array encoding.

DM.

marques-bruno commented 6 years ago

Hi Damien, thanks for the input. I see now why a different type naming was used for the factory; I didn't know it was the type "encoding" for python.

I also think that explicit data typing is a necessary feature to override an implicit type introspection. In my plugin for instance I have some "relatively complex" data structures that are reduced to basic numpy arrays when passed to python, and for which type introspection from python back to cpp wouldn't give the correct type.

Also, wouldn't it make sense to completely override the datatype::name() string with the python-style typename directly in Sofa's DataTypeInfo, not just in the python factory? It would break scenes that explicitly specify template types, but it would be more consistent.

Looking forward to seeing how things go.

damienmarchal commented 6 years ago

About changing datatype::name... I'm not sure it is a good idea to put python style into Sofa. For consistency I would much prefer to switch the factory to the sofa style (as you did).

marques-bruno commented 6 years ago

So if I understand correctly what you mean, the python type would be what one writes explicitly in "datatype=", and it would be converted to a sofa-style type before requesting a creator in the factory?

Wouldn't it be better to stick to one naming convention? It seems to me that for every convention we add, we would have to manually add typename conversion code, which would eventually enlarge the code base in the bindings.

damienmarchal commented 6 years ago

It is good to increase consistency.

For the data factory we are discussing, I see two possible consistent conventions: the python.array one or the sofa one. Both look OK to me, as one makes sense because we are working with Sofa and the other makes sense because we are in python. After some thinking, I wonder if using the Sofa one is probably a slightly better choice, as we will be able to handle Sofa-specific data types and structures like "vector" without bending the python convention, which knows nothing about RGBAColor.

The second point was about changing the function parentData->getValueTypeInfo()->name().c_str() so that it returns a python-style typename. On that aspect, I think it is a bad idea to break backward compatibility to adopt a python convention for some internal behavior in Sofa.

Finally, having multiple conversions is not surprising, as this is what a binding often does, i.e. converting to/from a third-party convention (e.g. python / javascript / lua).
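
For what it's worth, keeping both conventions would mostly cost a small lookup table on the binding side. Here is a minimal sketch with example entries only (these are not the factory's actual registrations):

# python array-style codes (as used by the 'datatype' argument in this thread)
# mapped to SOFA typenames; example entries only
PYTHON_TO_SOFA = {
    'f': 'float',
    'd': 'double',
    'i': 'int',
    's': 'string',
}

def to_sofa_typename(datatype):
    # anything not in the table is assumed to already be a SOFA typename,
    # e.g. 'Vec3d' or 'vector<Vec3d>'
    return PYTHON_TO_SOFA.get(datatype, datatype)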

damienmarchal commented 6 years ago

Smoothing the syntax of PSDE:


# @sofadataengine would do something like: scan the function's parameters,
# create input datas with the same names, create one (or more) output datas,
# and call myenginefunction from the update() function

@sofadataengine
def myenginefunction(indata1, indata2):
    return indata1 + indata2 

def createScene(rootNode):
     a = rootNode.createObject("OglModel", name="a")
     b = rootNode.createObject("OglModel", name="b")
     rootNode.addObject( myenginefunction("@a.position", "@b.position")  )

marques-bruno commented 6 years ago

I generated a list of all the unique Data typenames present in the Caduceus scene's components (each typename paired with one example of a datafield name having that type):

BaseData::getValueTypeInfo()->name() BaseData::getName()
bool listening
double zFar
fixed_array<Vec3d,2> box
float exponent
int pivot
Mat4x4d transformation
Quatd orientation
ResizableExtVector<Edge> edges
ResizableExtVector<int> vertNormIdx
ResizableExtVector<Quad> quads
ResizableExtVector<Triangle> triangles
ResizableExtVector<Vec2f> texcoords
ResizableExtVector<Vec3f> bitangents
std::set<int> group
string name
TagSet tags
unknown bbox
unsigned int shadowTextureSize
unsigned short textureUnit
Vec2f translationTex
Vec2i localRange
Vec3d direction
Vec3f scale3d
Vec3i dataResolution
Vec4f showColor
vector<double> compliance
vector<Edge> edges
vector<fixed_array<string,2>> suffixMap
vector<fixed_array<unsigned int,4>> highOrderEdgePositions
vector<fixed_array<unsigned int,5>> highOrderTrianglePositions
vector<fixed_array<unsigned int,6>> highOrderTetrahedronPositions
vector<fixed_array<unsigned int,8>> highOrderHexahedronPositions
vector<float> projectionMatrix
vector<Hexahedron> hexahedra
vector<int> indices
vector<Mat<24,24,double>> stiffnessMatrices
vector<Pentahedron> pentahedra
vector<Pyramid> pyramids
vector<Quad> quads
vector<string> pluginName
vector<Tetrahedron> tetrahedra
vector<Triangle> triangles
vector<unknown> groups
vector<Vec2d> uv
vector<Vec3d> reset_velocity
vector<vector<int>> facets
vector<vector<unsigned int>> polygons

Interesting to see that there are some "unknown" typenames in Sofa...

Concerning the idea of reducing the syntax to its minimum when creating 1-instruction engines, I think that, in terms of user experience, it would be ideal if an engine could be created by doing something like this:

node.createObject('Component1', name="1")
e = node.createEngine('PSDEMagic', name='2', myIntVector="@1.vector_out")
e.addNewOutput('myIntOutput')
e.update = lambda e: setattr(e, 'myIntOutput', e.myIntVector[0])  # setattr, since assignment is not allowed inside a lambda
node.createObject('AnotherComponent', name="3", myInt="@2.myIntOutput")

Or even better but I don't see any technical approach for it:

node.createObject('Component1', name="1")
node.createEngine('PSDEMagic', name='2', myIntVector="@1.vector_out", update=lambda magic: setattr(magic, 'myIntOutput', magic.myIntVector[0]) )
node.createObject('AnotherComponent', name="3", myInt="@2.myIntOutput")

EDIT: markdown didn't like the angle brackets.. now it shows all the types!

damienmarchal commented 6 years ago

Hi, thank you for the list, this is very informative. In its current state, we clearly cannot trust it to feed the factory.

So, to me, we have to add base types to the data factory, like:

so that we already have a valid convention for the created data that is 'rich' and usable explicitly from python.

Then, for the automatic type deduction... isn't it possible to create a "clone" of an existing data without even having to get a typename?

damienmarchal commented 6 years ago

About the last syntax you are proposing, I think it shouldn't be hard to do.

You simply have to add, in the C++ construction of the PSDEMagic, dedicated code that gets the argument named 'update', then extracts its parameters (whether it is callable, its parameter names), turns all of that into inputs and generates the output. Actually this is very similar to the use of a decorator, except that with a decorator this "magic binding" logic is implemented in python, while in your case the "binding" logic is done on the C++ side.
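
Damien proposes doing this on the C++ side; purely for illustration, the same introspection step expressed in Python (with a made-up helper name) looks roughly like this:

import inspect

def extract_update_inputs(kwargs):
    # pull the callable passed as 'update', check it really is callable,
    # and list its parameter names so they can become input datafields
    fct = kwargs.get('update')
    if fct is None or not callable(fct):
        return None
    return inspect.getargspec(fct).args   # e.g. ['magic'] in the example above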

I will continue digging in this issue.

marques-bruno commented 6 years ago

@damienmarchal I didn't understand what you mean by "creating a clone of an existing data without needing a typename", could you explain?

damienmarchal commented 6 years ago

Sure... what we want to do is to create a new data with the exact same type as the data pointed to by a link. So making a clone of an existing data (to get its type) and then initializing its parent would do the trick, without needing all the factory and data typename machinery. The problem is that, if I remember correctly from @ErwanDouaille, this was far from trivial to implement.

damienmarchal commented 6 years ago

Here is a first version of a MagicEngine in python

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from MagicEngine import *

def myupdate(a, b, c): 
    """This is so cute to have a single point documentation in our code"""
    return a + b + c

def createScene(rootNode):
        rootNode.createObject("MeshVTKLoader", name="loader", filename='mesh/liver.vtk')

        MagicEngine(rootNode, myupdate, a=1.0, b=2.0, c=3.0)

With the following in MagicEngine.py:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import Sofa
import os
import inspect

class MagicEngine(Sofa.PythonScriptDataEngine):
    def __init__(self, node, fct, **kwargs):
        self.addNewData("Description", "Properties", "", "s", inspect.getdoc(fct)) 

        argspec = inspect.getargspec(fct)
        # map each trailing argument name to its default value (if any)
        defaults = dict(zip(argspec.args[-len(argspec.defaults):], argspec.defaults)) if argspec.defaults else {}
        for arg in argspec.args:
            argval = None
            if arg in kwargs:
                argval = kwargs[arg]
            elif arg in defaults:
                argval = defaults[arg]

            if argval != None:         
                self.addNewInput(arg, datatype="f", value=argval)        
            else:
                Sofa.msg_error("Unable to create an input for parameter: "+str(arg))           

        self.lastreturnedvalue = False                                
        self.name = str(fct) 
        self.params = argspec.args
        self.fct=fct

    def update(self):
        calld = []
        for param in self.params: 
            calld.append(self.findData(param).value) 

        tmp = self.fct(*calld) 
        if not self.lastreturnedvalue:
            self.addNewOutput("output", datatype="f", value=tmp)  
            self.lastreturnedvalue = True
        self.findData("output").value = tmp

Basically, it creates a MagicEngine that is a DataEngine, and passes the function and arguments to it. In the constructor of this MagicEngine, the function is introspected to add the needed inputs. As there is no type inference, I used float, but this should be easy to add. The output is a bit tricky and I don't like it... as it requires the MagicEngine to be executed at least once before it works. I will try a different approach where we can specify manually the types of inputs & outputs.

damienmarchal commented 6 years ago

Here is the one which uses a decorator to specify the return types..

The look&feel for the client code:

from TypedMagic import *

@sofaengine( ret=("customout", "s"), inputs={"a" : "i", "b" : "i"} ) 
def typedupdate(a, b, c): 
    """This is so cute to have a single point documentation in our code"""
    print("MY UPDATE:" + str(a))     
    return "Yolo" + str(a + b + c)

def createScene(rootNode):
        rootNode.createObject("MeshVTKLoader", name="loader", filename='mesh/liver.vtk')
        TypedMagicEngine(rootNode, typedupdate, a=1.0, b=2.0, c=3.0)

EDIT: the "customout" will be the name of the output field.

With more or less this in TypedMagic.py:


import Sofa
import inspect

class TypedMagicEngine(Sofa.PythonScriptDataEngine):
    def __init__(self, node, sofaengine, **kwargs):
        fct = sofaengine.function
        retinfo = sofaengine.retinfo
        argsinfo = sofaengine.argsinfo
        self.addNewData("Description", "Properties", "", "s", inspect.getdoc(fct)) 

        argspec = inspect.getargspec(fct)
        # map each trailing argument name to its default value (if any)
        defaults = dict(zip(argspec.args[-len(argspec.defaults):], argspec.defaults)) if argspec.defaults else {}
        for arg in argspec.args:
            argval = None
            ### Get the name & default value
            if arg in kwargs:
                argval = kwargs[arg]
            elif arg in defaults:
                argval = defaults[arg]

            ### Get the type              
            datatype = "f"
            if arg in argsinfo:
                datatype = argsinfo[arg]

            if argval != None:         
                self.addNewInput(arg, datatype=datatype, value=argval)        
            else:
                Sofa.msg_error("Unable to create an input for parameter: "+str(arg))           

        if retinfo != None:
            self.addNewOutput(retinfo[0], datatype=retinfo[1])  

        self.retinfo = retinfo[0]
        self.name = fct.__name__ 
        self.params = argspec.args
        self.fct=fct

    def update(self):
        calld = []
        for param in self.params: 
            calld.append(self.findData(param).value) 

        self.findData(self.retinfo).value = self.fct(*calld) 

class sofaengine(object):
    def __init__(self, ret=("output",None), inputs=[]):          
        self.retinfo = ret
        self.argsinfo = inputs

    def __call__(self, fn):
        self.function = fn
        return self 

Of course this is just a POC (a lot of corner cases are not handled)... It is also very easy to implement TypedMagic and MagicEngine in a single piece of code instead of two very similar ones... and in python 3 we could use annotations, but as we are on 2.7, we cannot.
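
For reference, the annotation-based variant mentioned above would look roughly like this in Python 3 (illustration only, since SofaPython targets 2.7; the type codes reuse the ones from the decorator example):

# Python 3 only: annotations could carry the same type codes that the
# @sofaengine decorator receives explicitly above
def typedupdate(a: "i", b: "i", c: "i") -> ("customout", "s"):
    return "Yolo" + str(a + b + c)

# the engine wrapper would then read typedupdate.__annotations__, i.e.
# {'a': 'i', 'b': 'i', 'c': 'i', 'return': ('customout', 's')}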

damienmarchal commented 6 years ago

If you like this kind of trick, maybe you will also like the drafty 'Animation' framework that is in stlib, as it allows you to replace a lot of fat PythonScript controllers with code like this:

def myAnimation(target, factor):
    ### This function will be called at each animation step.
    ### 'target' is the object passed in the dictionary given to the animate function.
    ### 'factor' is a floating value indicating where we are between 0.0 and 1.0 in the animation.
    target.position = Transform(target.position).translate( 0.0, LinearRamp(-3.14/2, 3.14/2, factor), 0.0 )

def createScene(rootNode):
    c = rootNode.createObject("MechanicalObject", template='rigid')
    animate(myAnimation, {"target" : c}, duration=1.0, mode="pingpong")

damienmarchal commented 6 years ago

Last comment (sorry :))

In the magic versions of the data engine... it could also be possible to pass the 'self' object so that we can change multiple values as outputs of the data engine.

marques-bruno commented 6 years ago

Thanks @damienmarchal for this already very nice POC on how to make a really simple-to-use Python DataEngine!

Concerning the registration of typename creators in the factory, I was wondering if the PSDEDataFactory couldn't somehow be merged together with the PythonFactory, currently used to bind types to python?

I think it's very quickly going to become unmaintainable otherwise. With the most basic matrices, vectors, scalars, basic topological elements and their vector versions alone, I already have 50 creators to register in the PSDEFactory, all of which were already somehow registered in the PythonFactory (for a different purpose, sure, but I think it could be combined). Plus, I have my own python bindings in my plugins, and I wouldn't like having to register them in sofa's Binding_Base.cpp file. I'd already be more OK with a way to do this directly from my plugin (like it's done using the SP_ADD_CLASS_IN_FACTORY macro from SofaPython), but again, why not have this macro do all the work by itself?

damienmarchal commented 6 years ago

@marques-bruno Great list of types :)

I think it would make sense to make this data factory a real factory with its own .h/.cpp and a set of methods to add/remove elements to/from it. This PSDEFactory (actually a DataFactory) could be an attribute of PythonEnvironment.

Now, about merging the PSDE factory with the PythonFactory: this sounds like a different issue to me. As far as I can tell, the PythonFactory is in charge of generating the python binding of the 'right' type for an object (so a BaseObject* to the right SofaPython.Binding_SpecificObject), while the PSDEFactory is more in charge of allocating the right Data<> type (a string to a Data). I think a closer factory to look at is how the Data<> are converted to and from python types (e.g. when we are accessing them with getattr/findData).

I didn't investigate too much, but I also have several worries about the existing PythonFactory:

sescaida commented 5 years ago

Hi guys!

Thanks again, Bruno, for your suggestions! I'm happy to read that you find the PSDE useful. I especially like your extension to allow for automatic derivation of the types of Inputs. This was also one of my top ideas to improve the PSDE, but I didn't have any more time to pursue this development. In the next days I'll test your pull request and try to give feedback on the different topics discussed here!

sescaida commented 5 years ago

Hey guys,

so, I checked the PR and I have the following comments:

Sure... what we want to do is to create a new data with the exact same type as the data pointed to by a link. So making a clone of an existing data (to get its type) and then initializing its parent would do the trick, without needing all the factory and data typename machinery.

I also think that the existing linking mechanism could be complemented by a cloning feature that creates a copy of a BaseData object, probably just a clone() function in BaseData. This would avoid the implementation of a “heavy-weight” factory, like Damien said. To add this data to the correct object (the PSDE), it is only necessary to set the owner, I think. Finally, linking is done by using setParent(). In this way the user can program freely in C++ without worrying about whether his newly defined compound type needs to be made available in the factory.

1) The Output of the PSDE is connected to an Input of a sofa-object that has static datas, e.g. sofa-objects whose code is written in C++. An example of such an object would be the MechanicalObject (MO), which always has the same datas (position, velocity, force, …).

2) The Output of the PSDE is connected to a scripted sofa-object, like another PSDE or similar.

For case 1.:

It could be possible to define a special BaseData with a name but a vacant type. This is a kind of “forward declaration” for the Output. Then, by extending the linking mechanism, one could think of the following: at the time of parsing, a sofa-object (e.g. an MO) will establish the link between one of its Inputs and the Output of a PSDE (i.e. PSDE->MO). Since the type of the Input is clear (we have a static sofa-object whose BaseData doesn't change), one could at this time create the clone and replace the vacant-typed BaseData at the PSDE with the desired one. After that, this BaseData for the Output is valid and can be linked to in the normal way.

I don't think this is especially dangerous, because this would not be allowed after parsing(). This also handles the problem that the sofa-objects involved would otherwise need to be created before the PSDE: in this way they can be created after the PSDE. This doesn't cause confusion for the user either, because he is aware of the types of the datas.

Yet, overall, this solution is a little bit complicated because it relies on changing the behavior of linking: linking has to be extended to detect whether the Output being linked to has a valid type or is vacant. Overall I'm not 100% confident this kind of approach would work.

For case 2.:

In this case one kind of remains in the “python world”, and it is not necessary to have an exhaustive set of possible types, I think. It would be enough to have types such as string, int, float, array, matrix, etc. Here one could use the identifiers already used in python. Also, the user has to state the type he wants for the Output, since there is no way of inferring it automatically when both ends are subject to the user's choice. Therefore, this would be limited to declaring the desired type of an Output. The types of the Inputs should always be derived automagically, I think.

I hope these ideas are useful!

Cheers!

marques-bruno commented 5 years ago

Thanks @sescaida for your propositions / suggestions. I'm unfortunately not able to take a look at this at the moment; I hope I will be by mid-June.

marques-bruno commented 5 years ago

Inputs / outputs of PSDEngines are now automatically derived from their linked values, just the way @sescaida suggested :) This is very nice now, because it should work seamlessly with every SOFA type. @damienmarchal we probably still need the PSDE's factory, to allow for user-specified input datafields (such as translation="0 1 0" for instance, since we can't possibly derive the type from a non-existing parent data...). We should decide which convention to use for that! The next step would probably be to take advantage of kwargs to create input fields on-the-fly in the createObject python method, instead of having them hard-coded in the parse() method: createObject('PythonScriptDataEngine', name='PSDE', myCustomInput1="@../comp.val" ....)
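
In context, the scene-side usage could end up looking roughly like this (hedged sketch: 'loader', the field name myCustomInput1 and the exact arguments the PSDE's parse() accepts are illustrative assumptions, and the arguments pointing the PSDE at its python script are omitted for brevity):

def createScene(rootNode):
    rootNode.createObject("MeshVTKLoader", name="loader", filename="mesh/liver.vtk")
    # the kwarg below would create the input datafield on-the-fly and derive
    # its type from the linked loader data
    rootNode.createObject("PythonScriptDataEngine", name="PSDE",
                          myCustomInput1="@loader.position")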

sescaida commented 5 years ago

I don't know if this is still an adequate proposition, but I was thinking that it might also be a nice idea to change the name of the PSDE to something that is intuitively more graspable. Some suggestions could be (in order of preference):

PyScriptableComponent
PyFunctionBlock
PyComponent

I don't think that at this time we would be breaking too much code by changing the name.

damienmarchal commented 5 years ago

I would vote for PythonEngine (and rename PythonScriptController into PythonController)

sescaida commented 5 years ago

Hello guys! Is there any news regarding the updated features? @marques-bruno: can you make a small update to the PR with some examples for the functionality:

I'll discuss with Damien the forwarded inheritance mechanism to verify that we are on the right track. Then, after checking the PR with examples, we can see the next steps more clearly, I think.

marques-bruno commented 5 years ago

Hi @sescaida, thanks for following up on the subject :) I came back from holidays this week, which is why it hasn't gone much further since the STC. I've updated the examples, which show how to use both:

As I said in my previous comment, I hope to go further now that I'm back, especially on the aspect of setting input types as kwargs to the PythonScriptDataEngine component. I will also rename the PythonScriptDataEngine into whatever we decide. Damien suggested PythonEngine; I'd be more for PythonDataEngine, to keep the consistency between the bound component's name and the Python name (PythonDataEngine -> DataEngine). What do you think?

sescaida commented 5 years ago

Ok, great! I will check out the examples and give you feedback next week.

Regarding the naming: this could end up being a lengthy discussion. My reasoning is that names like "DataEngine" have their origin in how things are implemented. I think from a user perspective it is easier to understand names like "PyScriptableComponent" or even "PyComponent", because that is what they are: components, just like any other in Sofa, with Datas, Inputs and Outputs and a certain behavior, which can be implemented in python. Of course there's the disadvantage of breaking the name consistency. If we keep the "DataEngine" name somehow, I vote for PythonDataEngine; saying it is also a "Script" is redundant.

marques-bruno commented 5 years ago

I think from a user perspective it is easier to understand names like "PyScriptableComponent" or even "PyComponent", because that is what they are: components, just like any other in Sofa, with Datas, Inputs and Outputs and a certain behavior

This is not really correct, actually: the PythonScriptController is not really a component (i.e. a Sofa class inheriting BaseObject), as it possesses extra features, such as the possibility to create the scene graph from scratch, send events to other controllers, etc. It is not a pure, simple 1-to-1 binding of its C++ methods in Python. In that way, I agree with you that calling it a PyController makes more sense than calling it a PyBaseObject or PyComponent, for instance, since it does more. The current PSDE is actually as far from a standard sofa component (core::objectmodel::BaseObject) as the PythonScriptController: the PSDE is much closer in behaviour to Sofa's DataEngine components. The key difference is that a DataEngine, contrary to a BaseObject, is triggered by the modification of one of its input datafields; the update method is not present in a BaseObject or in a PythonScriptController. In that way it completely mimics the behavior of the DataEngines in Sofa.

Hence my suggestion :)

sescaida commented 5 years ago

Aha! Well, just to be clear, my suggestion would have been PythonScriptDataEngine -> PyComponent and PythonScriptController -> PyController. I understand that, in terms of implementation, and even conceptually, a DataEngine is different from a BaseObject. But this information is of no use to the user, in my opinion. When he sees the PSDE, he expects it to be updated when the Datas have changed. If I'm new to Sofa and I see a component called "PythonScriptDataEngine", I figure that it must be something very specific, related to an engine of some sort. If I see "PyComponent", I understand immediately that this is a component in my scene whose behavior I can implement in python. But granted, it doesn't take much to explain in the documentation that what is hidden behind the name "PythonScriptDataEngine" is just that.

marques-bruno commented 5 years ago

I think the problem you are raising here is a more general issue in Sofa. "Engine components" should be usable seamlessly in Sofa, without the need for a distinction. After all, as you said, they are nothing but "components" in the sense that they are put in your python scene just like the others, with a createObject, and are visible in the scene graph, just like the others. Sadly, DataEngines in Sofa need to be manipulated with care, because their behavior differs drastically from their BaseObject counterparts: they do not respect the classical execution order of the scene graph, they can be called multiple times per animation step (or not at all), etc.

If their use in Sofa wasn't so different from the rest of the Sofa components, I would also vote for a simpler name, such as PyComponent vs PyScript for the current PythonScriptController.

Newcomers to Sofa tend to mix up DataEngines with standard components, ending up with behaviors that they do not understand, as their code is not executed when they expect it to be, for instance. But I guess the subject is drifting a little bit towards the Data-update topic on gitter ;) Maybe for now it's not so important to rename anything, and we should wait and see how things evolve with the animation loop and the usage of DDGNodes in Sofa...

sescaida commented 5 years ago

Ok, I think you are touching on stuff that is a little bit beyond my current knowledge of Sofa. In the current implementation of the PSDE, update() is triggered by the onBeginAnimationStep event. I'm not sure update() is triggered by a change in some of the Datas. I'll look into it when I check your current version.

As you said, if there's some more fundamental discussions going on it makes sense to stick to the current naming convention for consistency's sake.

marques-bruno commented 5 years ago

@sescaida I just took a look at the code of the ScriptDataEngine class. You are partially right; the current implementation is the following:

// Ok, so in the end we're stuck with using the AnimationBeginEvent? (20.02.2018, sescaida)
void PythonScriptDataEngine::handleEvent(Event *event)
{
    if (AnimateBeginEvent::checkEventType(event))
    {
        setDirtyValue();
        update();
    }
}

Here, clearly, update() is called every time an AnimateBegin event is sent to the component. If it were a "normal component" (i.e. a BaseObject-based component), that would be the only moment the update() method would be called. But the ScriptDataEngine also inherits core::DataEngine. This is what I talked about during my presentation at STC#5: DataEngine inherits from both DDGNode and BaseObject. The update function is "manually called" from handleEvent, but it actually shouldn't be, because this is redundant with how DataEngine works, which is having its update() method called when and only when a component, taking as an input the output data of that engine, calls getValue() on that data, WHILE any of the engine's INPUTS are flagged dirty.

So basically, while some people call update() in their handleEvent method, this should never be done, except in some (quite) twisted cases. Btw, it took me quite some time to understand that... a few months ago, all my engines were calling update() in handleEvent, and I had a lot of problems because of that ^^

Technically, in the case of the ScriptDataEngine, if you remove the call to the update function in handleEvent entirely, and if your dependency graph is well built, you should get the behavior you want.
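
To make that pull-based behaviour concrete, here is a tiny stand-alone illustration in plain Python (explicitly not SOFA code) of the recompute-only-when-read-while-dirty pattern being described:

class LazyEngine(object):
    """Not SOFA code: a minimal stand-in for the DDGNode dirty/pull mechanism."""
    def __init__(self, fct, **inputs):
        self._fct = fct
        self._inputs = inputs
        self._dirty = True              # new engines start dirty
        self._cache = None

    def set_input(self, name, value):
        self._inputs[name] = value
        self._dirty = True              # analogue of setDirtyValue()

    def get_output(self):
        if self._dirty:                 # update runs only when someone reads
            self._cache = self._fct(**self._inputs)   # the output while dirty
            self._dirty = False
        return self._cache

engine = LazyEngine(lambda a, b: a + b, a=1.0, b=2.0)
engine.set_input('a', 5.0)
print(engine.get_output())              # prints 7.0, computed on demand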

sescaida commented 5 years ago

@marques-bruno I see how that "on-demand" processing mechanism prevents useless computation by the DataEngine. Do you think we should strive to preserve that behavior and decouple the triggering from the event? On my side, it needs more testing to see the behavior.

Returning to the issues part of this PR:

marques-bruno commented 5 years ago

Hi @sescaida @damienmarchal,

I made it possible to pass inputs to PythonScriptDataEngines directly through the call to createObject(), which makes it more similar to how components are normally created in the scene graph. The example in PSDEAverageEngine.pyscn shows the behavior.

There are a few downsides in the current implementation though:

Looking forward to your opinion :)

marques-bruno commented 5 years ago

@sescaida as a reply to your previous comment:

guparan commented 5 years ago

[ci-build][with-scene-tests]

marques-bruno commented 5 years ago

[ci-build][with-scene-tests]

marques-bruno commented 5 years ago

@sescaida I'm closing this PR: I've split it into 3 separate PRs, each dealing with a separate aspect of this one: #740 #742 #743. I can't seem to be able to add you as a reviewer / assignee on GitHub anymore though, for some reason... :/