pyblish / pyblish-base

Pyblish base library - see https://github.com/pyblish/pyblish for details.

Data flow for GUI, Maya and Logic #23

Closed mottosso closed 10 years ago

mottosso commented 10 years ago

I've got an idea of how to structure the flow of data within the Publish components.

The two proposals boil down to where responsibility should live among the components below.

At this point, we've got three components:

1. Host: Contains the information we're validating/testing,
   along with persisting the settings we specify for each instance.

2. GUI: Visualises these settings and allows the users to modify them.

3. Behaviour: This is what actually computes - e.g. validating,
   extracting, parsing selection.

Responsible UI

One way of delegating responsibility between these components is this:

[image: flow-of-information2]

  4. Maya transfers persistent settings into the GUI, where users can then modify them.
  5. Once all is clear, the information is passed from both GUI and Maya onto Logic, which performs the computations.
  6. Data is persisted within the GUI; that is, externally, as JSON/YAML etc.

The disadvantages of this approach are:

  1. Responsible UI; a crash, misbehaviour or corruption in the GUI influences the output
  2. Difficult to debug; both for users and developers, as everything is contained within the GUI
  3. Duplicated data; Maya will have to persist some settings, and the UI will need to stay up to date

Responsible Host

Alternatively, we could give Maya the full responsibility of data persistence and use the GUI for visualisation of modifications only - no persistence. (A rough sketch of what host-side persistence could look like follows the list of benefits below.)

[image: flow-of-information]

  1. Both the GUI and users are permitted to modify the configuration.
  2. All data is stored within Maya.
  3. Once configured, the GUI is no longer necessary, and Maya can communicate solely with Logic.

The benefits of this approach are:

  1. Simplified UI; less responsibility, can focus more on making it pretty.
  2. Simplified debugging; the UI would be optional, all can be done via Host
  3. Script-ability; users could potentially write their own GUIs or tools and disregard ours.
  4. Batch/distribution of publishing is free
  5. Single source of information/truth
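
As a rough illustration only, host-side persistence in Maya could be as simple as user-defined attributes on an objectSet. The set name and helper functions below are hypothetical:

# Sketch of "Responsible Host" persistence in Maya: settings live as
# user-defined attributes on an objectSet, so both the GUI and Logic read from
# the same single source of truth. The set name "publish_SEL" is hypothetical.
import maya.cmds as cmds

SET_NAME = "publish_SEL"

def write_setting(name, value):
    """Persist a string setting onto the publish objectSet."""
    if not cmds.objExists(SET_NAME):
        cmds.sets(name=SET_NAME, empty=True)
    attr = "%s.%s" % (SET_NAME, name)
    if not cmds.objExists(attr):
        cmds.addAttr(SET_NAME, longName=name, dataType="string")
    cmds.setAttr(attr, value, type="string")

def read_setting(name, default=None):
    """Read a setting back; the GUI only visualises what is stored here."""
    attr = "%s.%s" % (SET_NAME, name)
    if cmds.objExists(attr):
        return cmds.getAttr(attr)
    return default

# Usage: the GUI calls write_setting() on edits, Logic calls read_setting()
# at publish time; neither keeps its own copy of the data.
write_setting("startFrame", "1001")
print(read_setting("startFrame"))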

Let me know if anything is unclear.

Best, Marcus

achayan commented 10 years ago

Hey Marc,

I think the Responsible Host method is a much easier and better approach. But the only concern about this is the variation of hosts and apps.

BigRoy commented 10 years ago

Hey Marcus,

I've taken the liberty to write out an initial idea for the Selection architecture on the "Architecture->Selection" wiki page since it was empty anyway.

Basically I would opt for a more 'dynamic' Selection method as described there and have the actual 'selection' data be local to publish as it is dynamically created. Then the Validators can access the resulting Selection. This allows us to set up a preset of what kind of objects should be grabbed for a type of publishing. Then even if extra objects are added to the scene they will propagate nicely in the next publish.

Often settings for a publish aren't that scene-specific, but more project or pipeline specific. Thus you wouldn't want that to be described within the scene (thus per scene), but preferably saved out externally into a JSON format. What do you think?

mottosso commented 10 years ago

I think the Responsible Host method is a much easier and better approach. But the only concern about this is the variation of hosts and apps.

Yes, this is the main concern I think. This approach assumes that all the supported hosts support a minimum set of features. Off the top of my head, that might be:

  1. Storage of arbitrary data; mainly strings, as these can be parsed into anything.
  2. Ability to easily edit this data; i.e. not just via scripting

Most do - e.g. Maya, Nuke, Houdini - but some don't - Photoshop, ZBrush. I suppose it's also a matter of choosing how low to cut our "lowest common denominator" (LCD) in this regard. For example, if we were to stick with the limited support of Photoshop, we might have to jump through several hoops in other apps which support much more. On the other hand, if we support all that Maya has to offer, we might have to work with hacks in Photoshop et al. to mimic the behaviour.

To get the discussion going, I'd initially opt for looking at the LCD of Maya, Houdini, Nuke and Mari, and treat Photoshop and ZBrush et al. as necessary sacrifices that we'll probably have to hack.
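
As a rough illustration, if "storage of arbitrary strings" is the only hard requirement, everything else could be serialised on top of it. The host_set_string/host_get_string helpers below are hypothetical stand-ins for whatever each host natively offers:

# Sketch of leaning on the lowest common denominator: every host only needs to
# store and retrieve a single string; everything else is serialised into it.
import json

def save_config(host_set_string, config):
    """Serialise arbitrary settings into one string the host can keep."""
    host_set_string("publish_config", json.dumps(config))

def load_config(host_get_string):
    """Parse the stored string back into structured settings."""
    raw = host_get_string("publish_config")
    return json.loads(raw) if raw else {}

# Usage with a trivial in-memory "host" for illustration:
_store = {}
save_config(_store.__setitem__, {"startFrame": 1001, "author": "marcus"})
print(load_config(_store.get))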

mottosso commented 10 years ago

I've taken the liberty to write out an initial idea for the Selection architecture on the "Architecture->Selection" wiki page since it was empty anyway.

Great initiative, Roy, thanks for that. In fact, I'd encourage everyone to do the same; either stick with the structure I've set out, which is partially empty, or add new ones if your idea doesn't fit. The wiki isn't documentation so don't worry too much about saying "this is how it works" even though there isn't anything that works today. Have a look at Roy's post for guidance.

Basically I would opt for a more 'dynamic' Selection method as described there and have the actual 'selection' data be local to publish as it is dynamically created.

I'm with you, this sounds like the way to go. It'll please everyone, including newcomers without pipeline experience, as they would get bundled alternatives, whilst also appeasing the technically-minded, as they'd be capable of customising their own modules.

By the way, and I don't mean to split hairs, but "dynamic" sounds to me like "self-changing" as opposed to "modular" which sounds closer to what you mean. Modular in the sense that each step in the process is replaceable and upgradable. Am I understanding this correctly?

Often settings for a publish aren't that scene-specific, but more project or pipeline specific. Thus you wouldn't want that to be described within the scene (thus per scene), but preferably saved out externally into a JSON format. What do you think?

No, I'm with you on this one. The scene-specificity of things thus far is mainly for the initial discussions and to keep things real. Once tools get going, more and more of these things are bound to get abstracted.

Having said that, I still think it's worth keeping the option of configuring within a scene by hand a viable alternative. When shit hits the fan, it's always good to know that you can directly manipulate the outcome by hand. It also facilitates users in writing scripts to manipulate it themselves, or writing their own GUIs, as they know exactly where data is stored. It could also facilitate the caching of metadata, in cases where the data is guaranteed not to be read from an external source anymore, such as after a project has been archived or sent to a third-party/outsourcing company.

And finally, the data we write in each host - like startFrame and author etc - would ideally be picked up from elsewhere - such as from metadata about the asset. I would suggest the flow of information here to be the following:

  1. Upon publish, look for metadata in the local scene, such as startFrame
  2. If none is found, look elsewhere

That way, the external values could be overridden per scene.
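
As a rough sketch of that lookup order, with the two readers as hypothetical placeholders for the scene and external sources:

# Sketch of the proposed lookup order: scene metadata wins, anything missing
# falls back to an external source. Both readers are hypothetical placeholders.
def resolve(key, read_from_scene, read_from_external, default=None):
    value = read_from_scene(key)         # 1. look in the local scene first
    if value is None:
        value = read_from_external(key)  # 2. if none is found, look elsewhere
    return value if value is not None else default

# Usage: a scene-level startFrame beats the project-wide value.
scene = {"startFrame": 1005}
project = {"startFrame": 1001, "fps": 25}
print(resolve("startFrame", scene.get, project.get))  # 1005 (scene override)
print(resolve("fps", scene.get, project.get))         # 25 (external fallback)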

Thoughts?

BigRoy commented 10 years ago

By the way, and I don't mean to split hairs, but "dynamic" sounds to me like "self-changing" as opposed to "modular" which sounds closer to what you mean. Modular in the sense that each step in the process is replaceable and upgradable. Am I understanding this correctly?

Correct! What I meant with dynamic was that it would get things from the scene based on a set of rules rather than a static hand-made selection. It's definitely meant to be modular with the Selector component description I've tried to set up.

Having said that, I still think it's worth keeping the option of configuring within a scene by hand a viable alternative. When shit hits the fan, it's always good to know that you can directly manipulate the outcome by hand. It also facilitates users in writing scripts to manipulate it themselves, or writing their own GUIs, as they know exactly where data is stored.

This is also something that could easily be added as a Selector component. One could implement a 'publishWhitelist' and 'publishBlacklist' objectSet-like component that can be manually tweaked.

It could also facilitate the caching of metadata, in cases where the data is guaranteed not to be read from an external source anymore, such as after a project has been archived or sent to a third-party/outsourcing company.

I think what type of (meta)data should be written out should be just as modular, probably with Extractor components that specify different types of 'export/saving' solutions based on the Selection.


A tricky thing is that publishing (especially point caches, simulations and animations) also has a 'duration', 'start frame', 'end frame', 'frames per second', etc. Currently the data flow never describes where the final exporting solution gets this information to export the correct frames.

Maybe we need to revisit the naming of Selection and call it Context. Then we could create Context components that make up the data required for Validators and Extractors to do their thing. Not necessarily all data (as it wouldn't include the actual mesh information), but more a resulting static set of data like selection, frame range and resolution. This way the whole publishing situation becomes modular and configurable. Thoughts?

mottosso commented 10 years ago

Maybe we need to revisit the naming of Selection and call it Context.

Albeit a tad more abstract (read "scary") than Selection, I agree that it better describes its responsibilities. I think, let's keep the word at the back of our heads for now, and if it sticks, we'll transition.

A tricky thing is that publishing (especially point caches, simulations and animations) also has a 'duration', 'start frame', 'end frame', 'frames per second', etc. Currently the data flow never describes where the final exporting solution gets this information to export the correct frames.

Good point. I can see two sources of this information, please fill in if you can think of more:

  1. Ad-hoc; users look it up in their Google Spreadsheet or ask their coworkers and type it in manually
  2. External; data is fetched from something like Shotgun or from JSON/YAML relative to the project.

In the ad-hoc situation, what Publish provides should be sufficient; i.e. adding attributes to the Selection/Context as in Issue #15, or via a provided user interface. The data would be stored within the scene and persist. The issue is then when multiple scenes - such as animation and lighting - both persist this data and the data changes.

External information is trickier for us, as we'd need a method of fetching information. One method, considering that what we're building may be considered a framework, is to do something like IoC, or Inversion of Control, and to provide an empty method/function for users to inject their sources of information. Publish could then run this method when looking for external information, regardless of whether or not anyone has actually overridden it.
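
A minimal sketch of such an injection point could look like this (names are illustrative only, not an existing Publish API):

# Sketch of the Inversion of Control idea: Publish exposes an empty hook and
# always calls it; studios override it to plug in Shotgun, JSON files, etc.
def external_data(key):
    """Default hook: shipped as a no-op returning nothing."""
    return None

def lookup(key, scene_data):
    """Publish calls the hook regardless of whether anyone overrode it."""
    value = scene_data.get(key)
    return value if value is not None else external_data(key)

# A studio injects its own source by replacing the hook:
def shotgun_lookup(key):
    return {"startFrame": 1001}.get(key)  # pretend this queried Shotgun

external_data = shotgun_lookup
print(lookup("startFrame", scene_data={}))  # 1001, via the injected source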

Expressions

Just a quick thought for a possible, albeit unique, solution. We could allow for expressions in our configuration fields. That is, instead of saying:

# Example 1
startFrame = 5

We could instead say:

# Example 2
startFrame = @my_tool.get_start_frame()

Publish could then treat any line starting with a "@" character as code and dynamically fetch the information (a rough sketch of what that could look like follows the list below). This would:

  1. Keep all customisations in one place
    • whereas alternatives might involve pure-scripting or overriding of classes that wouldn't be immediately visible to users; such as in the IoC suggestion above.
  2. Refrain from mandatory customisation for those who won't need it
    • as well as allow for a smooth transition for users going from Example 1 and Example 2
  3. Mesh with an otherwise manual workflow, as in Example 1
    • in that users could replace an expression with a value at any time.
  4. Maintain our ability to cache any results
    • by just replacing the expression with its evaluated value.
  5. Remain explicit
    • which simplifies debugging
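
The rough sketch mentioned above, of how such values could be resolved; my_tool and resolve_value are illustrative only:

# Sketch of resolving configuration values: plain values pass straight through,
# anything starting with "@" is treated as an expression and evaluated.
def resolve_value(value, namespace):
    if isinstance(value, str) and value.startswith("@"):
        return eval(value[1:], namespace)  # e.g. "@my_tool.get_start_frame()"
    return value

# Usage, with a stand-in for the hypothetical my_tool module:
class my_tool:
    @staticmethod
    def get_start_frame():
        return 1001

print(resolve_value(5, {"my_tool": my_tool}))                             # Example 1: plain value
print(resolve_value("@my_tool.get_start_frame()", {"my_tool": my_tool}))  # Example 2: expression
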
BigRoy commented 10 years ago

Good point. I can see two sources of this information, please fill in if you can think of more:

  1. Ad-hoc; users look it up in their Google Spreadsheet or ask their coworkers and type it in manually
  2. External; data is fetched from something like Shotgun or from JSON/YAML relative to the project.

We usually rename the camera in such a way that it contains the start and end frame that are being used. That's because the camera will also traverse down the pipe (and thus the time range with it), and we can easily spot if we're still on track. Also, this moves the 'time range' responsibility to the animator, who's most likely the last to touch a scene for its timing. (And a validator that warns when the camera name isn't equal to the time range currently used in the scene will almost always spot mistakes there.)

The issue is then when multiple scenes - such as animation and lighting - both persist this data and the data changes.

Such a problem is always present - even if the data would auto-update - unless there were a way to produce assets in a non-destructive manner and have changes auto-propagate. Lighting might know about a new time frame, but will always require 'an update' from animation. Note that keeping track of such changes is more of an asset management / production task than something for the actual publisher, though a validator (that takes any data) could of course be present without any problems.


Context, Validation and Extraction

This is how I currently (after discussing all these bits) imagine the process. Publishing is always a (pre)defined series of actions of the following types: Selection, Validation, Extraction.

  1. Selection: Defines the Context
  2. Validation: Acts upon a Context to check whether it acts according to the Validator's rules.
  3. Extract: Acts upon a Context to save/create a new resource.

Where Validation and Extraction are basically Actions that require a Context. Preferably, Validation and Extraction should never change the Context, so that the Context is solely defined by the Selection.

Thus publishing the current selection for the current time slider range as an Alembic (skipping validation for now) would result in something like:

# We need to define a context by performing a 'stack' of Selectors
import maya.cmds

stack = ContextStack(type=MayaContext)
stack.add(MayaCurrentSelectionSelector)
stack.add(MayaCurrentTimeSliderRangeSelector)
context = stack.process()

# If you need only a single Selector for a specific set of Actions then you could
# skip adding them to a stack entirely and only process your single selector.
# Note: After this point the context only knows about the current time, but doesn't
# have any information about the nodes required for an extraction.
context = MayaCurrentTimeSliderRangeSelector().process()

# We could also manually create the context, though it is not recommended
context = MayaContext()
selection = maya.cmds.ls(sl=1)
context.nodes.add(selection)
timerange = (0, 100)
context.timerange.set(timerange)

# Now we need to use the context to perform the extraction
extractor = MayaAlembicExtractor(context)
extractor.process()

Maybe I need to think about this some more. Also:

"I have some use cases where one needs to edit the Context after Validation for a specific publish, yet I think to avoid complexity we should definitely avoid such a Use Case at the beginning. Like validate the whole scene, yet export a subset into separate extractions. Though I do think there might be other ways to tackle that Use Case that'll favor simplicity over optimization."


mottosso commented 10 years ago

stack.add(MayaCurrentSelectionSelector)
stack.add(MayaCurrentTimeSliderRangeSelector)

In this example, how do you envision each selector to know which data to influence?

E.g. the MayaCurrentTimeSliderRangeSelector adds a timerange attribute, whereas MayaCurrentSelectionSelector adds nodes.

Would you think to define fixed input/outputs on each selector, e.g.:

           __________________
          |                  |
Maya ---> o CurrentSelection o---> (list): nodes
          |__________________|

           ________________
          |                |
          | RangeSelection o---> (int): fps
Maya ---> o                o---> (float): startFrame
          |                o---> (float): endFrame
          |________________|

Such that, physically, within Maya, each output of each selector adds an additional attribute to the configuration, e.g. the objectSet?
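
To illustrate roughly what I have in mind - the selector names, the outputs convention and the write_setting helper below are all hypothetical:

# One possible reading: each selector declares fixed outputs, and a helper
# writes whatever it produced onto the configuration, e.g. the objectSet.
import maya.cmds as cmds

class CurrentSelectionSelector(object):
    outputs = {"nodes": list}

    def process(self):
        return {"nodes": cmds.ls(selection=True)}

class RangeSelector(object):
    outputs = {"fps": float, "startFrame": float, "endFrame": float}

    def process(self):
        # fps left as a placeholder; the real lookup depends on Maya's time unit
        return {"fps": 25.0,
                "startFrame": cmds.playbackOptions(query=True, minTime=True),
                "endFrame": cmds.playbackOptions(query=True, maxTime=True)}

def persist(selector, write_setting):
    """Write each declared output as an attribute on the configuration node."""
    for name, value in selector.process().items():
        write_setting(name, str(value))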

BigRoy commented 10 years ago

I think the Selectors operate on a Context object that is pushed through the described Stack. You can either use a single Selector, which initiates an empty Context and adds information to it, or use the ContextStack to build up a Context from multiple Selectors.

Upon stack.process():

  1. Stack creates an empty Context of the required type.
  2. Each Selector in the Stack receives the Context object and updates it.
  3. The resulting Context is returned.

So basically the Stack is automating the following:

context = Context()
context = MayaCurrentSelectionSelector().process(context)
context = MayaCurrentTimeSliderRangeSelector().process(context)
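
A minimal ContextStack along those lines - purely illustrative, assuming each Selector exposes a process(context) method as in the lines above - might look like:

# Illustrative ContextStack: builds an empty Context of the requested type and
# pushes it through each Selector in order, exactly as the three steps above.
class ContextStack(object):
    def __init__(self, type):
        self._context_type = type
        self._selectors = []

    def add(self, selector_cls):
        self._selectors.append(selector_cls)

    def process(self):
        context = self._context_type()         # 1. empty Context of required type
        for selector_cls in self._selectors:   # 2. each Selector updates it
            context = selector_cls().process(context)
        return context                         # 3. resulting Context is returned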

The reason I made a separate Stack that operates on the whole set is because it makes more sense in my head in combination with a UI. I imagine having a list of Selectors that I process as one whole unit to provide one output.

The resulting Context object doesn't need to persist in the scene. (It could, with its own Extractor that saves into the current scene, but it doesn't need to.) Or do you think there's a reason for requiring to store it in the scene?

Possibly one could also combine Contexts of the same type in code (like updating a dict). This could be nice for custom implementations outside of the UI, but isn't a requirement for the flow we're setting up with the UI.

context1.update(context2)

Would you think to define fixed input/outputs on each selector?

I think it's good to have defined inputs/outputs per Selector, yeah... but also to have a definition per Validator/Extractor of what kind of data it needs from the Context. For example, I could validate a mesh's normals without needing to know about a time range. We must be able to warn the user if the Validation/Extraction can't operate on a Context that is missing information. Just as a Validator requires a type of Context, it can define what data it _requires_ from that Context.
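
As a rough sketch of that idea (the __requires__ attribute and the helper are hypothetical):

# Sketch of declaring what data an Action needs from the Context, so we can
# warn before processing. The __requires__ attribute is hypothetical.
class NormalsValidator(object):
    __requires__ = ["nodes"]  # needs nodes, but not e.g. a time range

    def process(self, context):
        print("validating normals of %s" % context["nodes"])

def missing_requirements(action, context):
    """Return the list of missing keys; empty means the Action can run."""
    return [key for key in getattr(action, "__requires__", []) if key not in context]

validator = NormalsValidator()
missing = missing_requirements(validator, context={"timerange": (0, 100)})
if missing:
    print("Cannot validate, Context is missing: %s" % ", ".join(missing))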

BigRoy commented 10 years ago

While writing the following pseudocode I found out that Selectors, Validators and Extractors basically share a core behaviour (they all operate on a Context). So I called them Actions.

Here's my pseudocode:

# Base Classes
# ============
class Context(object):
    pass

class Action(object):
    """ Action that operates on a predefined Context.
        An action will have to define what type of Context it requires as well as
        what data on such Context it uses. This way we can raise errors when one of
        the dependencies is not met with a certain Context. 
    """
    __context__ = Context # The type of Context the action operates on

    def __init__(self, context=None):
        if context is not None:
            self.context = context

    # Define the getter first so that @context.setter can reference it below
    @property
    def context(self):
        return self.__context

    @context.setter
    def context(self, c):
        assert isinstance(c, self.__context__)
        self.__context = c

# The different types of default Action that operate on a Context: Selector, Validator, Extractor
class Selector(Action):
    def process(self, context=None):
        if context is None:
            context = self.__context__() # Instantiate an empty Context if None provided
        raise NotImplementedError("Must be implemented in child class")

class Validator(Action):
    def process(self, context):
        raise NotImplementedError("Must be implemented in child class")

class Extractor(Action):
    def process(self):
        raise NotImplementedError("Must be implemented in child class")

# A Maya implementation example
import maya.cmds

class MayaContext(Context):
    pass

class MayaCurrentSelectionSelector(Selector):
    __context__ = MayaContext

    def process(self, context):
        selection = maya.cmds.ls(sl=1)
        context.nodes.update(selection)  # assumes MayaContext initialises `nodes` as a set
        return context

class MayaMinimumTenNodesValidator(Validator):
    __context__ = MayaContext

    def process(self, context):
        nodes = context.nodes
        if len(nodes) >= 10:
            return "success" # or: 1?
        else:
            return "failure"

class MayaExportNodesExtractor(Extractor):
    __context__ = MayaContext

    def process(self, context):
        nodes = context.nodes
        maya.cmds.select(nodes, r=1) # select so we can quickly do export selection
        maya.cmds.file(es=True, pr=True, type="mayaBinary") # untested, consider as pseudocode

mottosso commented 10 years ago

Would you mind enabling Python formatting on your code? Would make it much easier to read!

E.g.

```python
# Python code
```

mottosso commented 10 years ago

Thanks Roy! I think I understand most of it, and I really like the similarities between the Actions! To really drive the point home however, do you think it would be possible for you to try getting your solution up and running on the test scene and submit a pull-request?

It's the same set-up used in the recent pull-request and has a basic model with a selection set with configuration attached as user-defined attributes.

If we try running our proposals against this scene, or derivatives of it, we'd be able to compare and contrast our ideas in an organised fashion.

BigRoy commented 10 years ago

Will definitely try soon! Currently there's also a lot of stuff going on at work so sorry if it takes a while. :)

mottosso commented 10 years ago

Take your time, looking forward to seeing it in action (pun!)

BigRoy commented 10 years ago

Made a start (see my first prototyping commit). There's no action-performing code yet, but it does show the structure I might be going for. I'm still in doubt about whether it's the right direction. It definitely feels configurable and modular with all these components, though I might be abstracting it too much.

What do you think?

mottosso commented 10 years ago

Nice! If you create a pull-request for it, we could pull it locally from there. https://help.github.com/articles/checking-out-pull-requests-locally

BigRoy commented 10 years ago

Will do when I have code that is a working example. ;)

mottosso commented 10 years ago

I wouldn't worry too much about that. I'd love to take a look even now to try and understand it better.

BigRoy commented 10 years ago

Ok. What's considered good practice regarding forking/pull-requests?

I tend to think that further down the project one only sends a pull-request upon finishing a new feature, or at least having a working example. Does one ever just showcase their own forked repository by saying "hey, look here!", or would that always be in the form of a pull-request? I mean exactly in a case like this, where my last commit leaves the code in a prototyping/WIP state.

Either way, I'm not at a computer now so I'll send a pull request later today I guess. ;) You could also temporarily clone my forked repository I guess. Not sure about the whole Git workflow yet, but I'll get it down along the way.

By the way, thanks for setting up the project and gathering some interest for it.


mottosso commented 10 years ago

Ok. What's considered good practice regarding forking/pull-requests?

I think we'll have to work this out as we go. But the way I see it, a pull-request doesn't necessarily mean it'll get merged into the main repo, but more a way of opening up an issue with code attached that people can pull from.

Additionally, once a pull-request has been submitted, you can still commit to it. Which means you can pull-request your current prototype, we'll discuss it and test it, you'll commit along the way and when/if it ends up working, we'll merge it and move on to the next step.

Down the line, yes, it probably makes more sense to only request finished, or at least working features. But I think it's better we cross that bridge when we get to it. At this point, I think more is more (as opposed to less is more), so go crazy!

I've summarised a few bits here: Contributing, but we'll tailor it based on how we feel about things once we've talked about it properly.

mottosso commented 10 years ago

@BigRoy, I was looking through some of your code and re-read your posts on Tech-artists about actions, and something occurred to me.

There is an architectural style known as "pipes-and-filters", in which multiple objects with a shared interface co-operate by processing each other's outputs; similar to the traditional node-graph workflow in Maya et al.

If I've got your idea right, what you refer to as an Action is what pipes-and-filters calls a Filter. If so, this should facilitate a node-graph style workflow in designing these filters, which could potentially even get a GUI similar to Hypershade and the like, which I think may prove useful in understanding complex sequences of selections/validations/extractions, along with conform.
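
For reference, the bare bones of pipes-and-filters might look like this (illustrative only, not a proposal for the actual classes):

# Bare-bones pipes-and-filters: each filter shares one interface and processes
# the previous filter's output, much like nodes in a graph.
class Filter(object):
    def process(self, data):
        raise NotImplementedError

class SelectMeshes(Filter):
    def process(self, data):
        data["nodes"] = ["body_GEO", "prop_GEO"]  # pretend selection
        return data

class ValidateNaming(Filter):
    def process(self, data):
        assert all(node.endswith("_GEO") for node in data["nodes"])
        return data

def run_pipeline(filters, data=None):
    data = data or {}
    for f in filters:
        data = f.process(data)  # output of one filter feeds the next
    return data

print(run_pipeline([SelectMeshes(), ValidateNaming()]))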

BigRoy commented 10 years ago

Yes, a node graph was exactly where I was going in my head. But I figured it might be a bit too far (complex) for the scope of this project; it seems quite far off from the originally described design for Publish. Nevertheless, it would allow the freedom I was searching for in publishing and validations.

mottosso commented 10 years ago

I could imagine a few reasons why not to go this way:

  1. Unfamiliarity - I for one haven't worked this way before.
  2. Initial complexity - Might take some time to grasp and thus hinder development and newcomer contribution.

But I can imagine other reasons why going this way is beneficial:

  1. Marketing - It's simply very cool and would give us a direct edge over anything existing or upcoming, which would in turn bring in more contributors and users. (This is possibly more important than the features themselves.)
  2. Motivation - Who doesn't want to work with node graphs? We're all reaping their benefits daily.
  3. Simplified long term - Though it probably takes longer to set up as developers, it most likely speeds up development once started.
  4. Usability - If our theory works, then nodes should prove useful both to developers integrating Publish and to artists creating new processing chains, as they could be given visual coding.

I've set up a page for Publish Designer here.

Let's make it happen. :)

BigRoy commented 10 years ago

Here's some pros and cons for a node graph system:

Node Graph

This would be a node graph like what is shown in the Node Editor in Maya or what you'd get from a node based compositing package like Nuke or Fusion.

Pros:

  1. Can be configured to allow for complex systems.
  2. Recognizable system for the flow of data.
  3. Each action (= node) really is a black box, only operating on the data of its inputs and outputs.
  4. The flow of the process could still be visually shown as a 'list/queue' to artists, hiding much of the complex circuitry behind the publishing situations.

Cons:

  1. If we use an attribute system like Maya's, then one might need to connect many attributes all the time. If we disallow this by making one big 'attribute' connection, we're disallowing many of the customization options.
  2. Setting up a simple system becomes much more complex as well. We'll need to add Validations to a queue, but also connect everything in order.
  3. Also hard to set up in code, because of the connections required.

List/Stack

A list that gets processed in the order that the Actions are provided.

Pros:

  1. Simple and elegant
  2. Reordering is pretty much drag 'n' drop behaviour.
  3. The order of processing is readable on first sight as compared to node graphs that can become a spiderweb of connections.
  4. Adding is a simple process (one inserts or appends to the queue)
  5. Once an Action is removed it's clear what will happen with the rest of the queue; it will just shift. In complex graphs with many connections such an automatic reconnection might be impossible. (E.g. deleting a certain node from history in Maya is too complex for an automated task, because connections tend to go everywhere due to the number of attributes.)

Cons:

  1. The system can only define a 'for loop' flow where the data isn't very configurable.
  2. The context can grow big (even though unlikely?)
  3. The Selectors must output exactly the data the Action requires, in type and name.
  4. One can't use the same Selector twice in one context, as it would just overwrite the same data.

So... something in between would be best! I'm not sure what that would look like, but let's look at what I like:

A feature I like from Nuke is that there can be many layers inside a single 'pipe' of connection. Once it gets to a node, it provides a drop-down menu to choose what layer to operate on. We could use something similar, where outputs add data to a context in one single big pipe, which I would preferably see as going in a single direction, thus a Queue. Keeping that in mind, we could describe something like this:

Magic Mix?

For example:

class MayaTimeRangeSelector(publish.core.Selector):
    __context__ = MayaContext
    __outputs__ = publish.datatypes.Double2("timerange") # [start, end]

The default output of the above Selector adds a 'timerange' value of type Double2 to the context. Any other Action that has a Double2 input can use this value by selecting it in the Action's drop-down box. By default an Action's inputs connect to values in the Context with the same name; if none exists it will find one with the same type, and if none is found it will stay greyed out until it has the required data from its inputs.

Then the idea is that the power user can also edit what name an output gets (like on a Selector). For example, in one Context I could provide two node lists. Picking the above Selector as an example, adding it to the queue will give me a context with Context.timerange. Now if I add the same Selector to the queue a second time, yet configure the output's name to "timerange_with_handles", then I have a single context with Context.timerange and Context.timerange_with_handles.

As before, let's say I would delete the Selector that creates the timerange_with_handles data; then Actions that require that data would get greyed out. The queue could warn/error on trying to process it, and the UI could also easily grey out the Action because it's out of context.

Pros:

  1. Visually and in process it's still a single queue.
  2. Reordering is pretty much drag 'n' drop behaviour.
  3. The order of processing is readable on first sight as compared to node graphs that can become a spiderweb of connections.
  4. Adding an Action is a simple process (one inserts or appends to the queue), though some configuration might be needed (see "Cons").
  5. Once an Action is removed it's clear what will happen with the rest of the queue; it will just shift.
  6. Same selectors can be duplicated in the list and add new data to the Context.
  7. Because data in Actions is only based on Types rather than also on Names it might become easier to mix Actions between studios if they use different naming conventions for the data.

Cons:

  1. The context can grow big (even though unlikely?)
  2. The Selectors must output exactly the data the Action requires in type (but the user can define the name during configuration).
  3. The workflow might be a bit unfamiliar at first, so it would require some clear instructions/tutorials.
  4. Not sure how the code-based workflow might look here; a rough guess follows below.
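
The rough guess at the code-based workflow mentioned in the last point (all names illustrative):

# A guess at the code-based workflow for the queue described above: selectors
# add named outputs to one shared context, and an output's name can be
# remapped when the same selector is used twice.
class Queue(object):
    def __init__(self):
        self._items = []

    def add(self, action, rename=None):
        self._items.append((action, rename or {}))

    def process(self):
        context = {}
        for action, rename in self._items:
            outputs = action.process(context)
            for name, value in outputs.items():
                context[rename.get(name, name)] = value
        return context

class TimeRangeSelector(object):
    def __init__(self, with_handles=False):
        self._with_handles = with_handles

    def process(self, context):
        start, end = (95, 205) if self._with_handles else (100, 200)
        return {"timerange": (start, end)}

queue = Queue()
queue.add(TimeRangeSelector())
queue.add(TimeRangeSelector(with_handles=True),
          rename={"timerange": "timerange_with_handles"})
print(queue.process())  # both timerange and timerange_with_handles end up in one context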

By the way, just saw your post pop up Marcus. Some good points!

mottosso commented 10 years ago

"Responsible Host" is how we've been approaching things so far, by storing attributes on nodes within Maya. It's worked really well I would say and will consider this matter closed!