mkolar / maya-look-hookup

Maya shader network transfer
MIT License
3 stars 0 forks

Transferring look method #1

Open BigRoy opened 8 years ago

BigRoy commented 8 years ago

Goal

Define what's a suitable method for transferring a lookdev. Things to consider are:

  1. Shader-relationships

We store solely the shaders and don't care about the meshes. All we need alongside the shaders is a "unique identifier" that tells us which object in the scene should get which shader. (Preferably don't rely on names, but on a unique identifier attribute stored on the meshes.)

This is a method that I've been successfully using in production.

The data is small and contains only "what was added by the lookdev artist". The output look relations file doesn't contain any meshes.

  2. Mesh-storage

This method would store the geometry with all its "look assignments" in a separate file (e.g. .ma). These meshes would then be connected to an animated cache so that it "feels like" the shaders were transferred onto the cache (even though it's actually the cache that gets piped into the lookdev-created mesh).

I remember reading this is a method @mkolar has used in some projects.


This might be too "broad" of an issue, but I think it's a good start for discussion. (Or shall we move this to a forum, eg. Pyblish?)

mkolar commented 8 years ago

I'd go with something along the lines of method 1. I've looked through our old code and we actually used a combination of the two as a final solution: we stored the fully shaded mesh, but didn't apply the animation from the alembic to it; we only transferred the ShadingGroup from the static mesh to the animated one, then hid the static mesh.

Here is the code for reference Bait shader hookup

I've also tried the solution that the guys implemented as a test in shotgun toolkit (mentioned here). Original code: tk-config-simple shader export tk-config-simple shader hookup My distilled version: maya-look-hookup.py It works quite well as a start, but is very dependent on the names and isn't particularly flexible in terms of what it actually transfers. Also saving the data on a script node is one hell of a hack, albeit a successful one.

All we need alongside the shaders is a "unique identifier"

The main question is what to use for this. Meaning: how do we refer to a mesh other than by its name? UUIDs? Metadata nodes? I think @tokejepsen used red9 metadata in some rigs. Maybe he could chime in with his impressions of them.
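For the UUID option, here's a minimal pure-Python sketch. The dict is a stand-in for the mesh's custom attributes; in Maya this would presumably be an addAttr/setAttr/getAttr pair on the shape, and the attribute name "lookId" is just a hypothetical choice:

```python
import uuid

def ensure_look_id(node_attrs, attr="lookId"):
    """Return a stable unique identifier for a node, creating one on
    first call. `node_attrs` is a plain dict standing in for a Maya
    node's custom attributes (hypothetical; in Maya you'd store this
    with addAttr/setAttr on the mesh shape instead)."""
    if attr not in node_attrs:
        node_attrs[attr] = str(uuid.uuid4())
    return node_attrs[attr]
```

Once written at publish time, the id survives renaming and referencing, which is exactly what name-based matching can't guarantee.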

This might be too "broad" of an issue, but I think it's a good start for discussion. (Or shall we move this to a forum, eg. Pyblish?)

Forum would be good, however it's not really directly related to pyblish. How does @mottosso feel about off-topic threads like this?

Sharing shaders

Very important. I'd imagine eventually publishing shaders this way into a studio material library as well, for potential pulling into projects. Though in the case of publishing to a studio-wide area, the file could simply be stripped of the association info.

How do we identify which object should get which shader from the pipeline?

I opened #2 to discuss how to technically store this information.

How does it know where to get the shader?

Assuming we already referenced both the animated mesh and the shader into the scene, but they are not connected, we could walk through the available shaders, look up the mesh identifier on each, find the matching mesh in the scene and connect the two. Unless you meant where to find it on the filesystem, in which case I think that's outside the scope of this tool.
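That walk might be sketched like this (pure Python; the dicts stand in for querying the scene, and the returned pairs would become the actual assignment calls in Maya; all names here are hypothetical):

```python
def resolve_assignments(shader_to_ids, id_to_mesh):
    """shader_to_ids: {shader: [identifier, ...]} as published with
    the shaders; id_to_mesh: {identifier: mesh} as found in the
    current scene. Returns (shader, mesh) pairs to connect, skipping
    identifiers that have no matching mesh in the scene."""
    pairs = []
    for shader, ids in shader_to_ids.items():
        for identifier in ids:
            mesh = id_to_mesh.get(identifier)
            if mesh is not None:
                pairs.append((shader, mesh))
    return pairs
```

Missing identifiers are silently skipped here; a real tool would probably want to warn about them instead.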

mottosso commented 8 years ago

Forum would be good, however it's not really directly related to pyblish. How does @mottosso feel about off-topic threads like this?

I've got no problems with it; the more pipeline discussions the better, especially if they have something to do with publishing or even tie back into Pyblish somehow.

Use the "Pipeline Development" category.

tokejepsen commented 8 years ago

The main question is what to use for this. Meaning: how do we refer to a mesh other than by its name? UUIDs? Metadata nodes? I think @tokejepsen used red9 metadata in some rigs. Maybe he could chime in with his impressions of them.

Think Metadata Nodes are a bit overkill for this. They are useful when you have hierarchical nodes; they are basically "network" nodes with data stored on them. I would definitely store the data directly on the meshes, and UUIDs seem like a good option.

Just to add to the conversation: lookdev might not always boil down to sets and attributes. On the current project we are working on, we heavily use Maya Paint Effects (pfx) in lookdev. As this is a Maya node that gets assigned to a mesh, I would opt for storing the mesh with the lookdev output. I've found that by storing the mesh and transferring the animation to it, you avoid a lot of custom code for "edge" cases that you didn't think of when developing the tool.

mottosso commented 8 years ago

I've found that by storing the mesh and transferring the animation to it, you avoid a lot of custom code for "edge" cases that you didn't think of when developing the tool.

  • "Where is the latest version of this mesh?", asks Durian.
  • "It's in the lookdev scene.", says Pierre

I'd imagine questions like these appearing, which doesn't rub me the right way. You dodge a few bullets in exchange for others. Or do you keep the mesh in multiple places? Both here, and in the original modeling folder?

tokejepsen commented 8 years ago

You wouldn't be altering the mesh in the lookdev scene, so it's a reference of the modeling.

You end up with a two level reference in the lighting scene: modeling > lookdev.

mkolar commented 8 years ago

You end up with a two level reference in the lighting scene: modeling > lookdev.

I see this as the biggest problem with this approach. However, it could be solved quite easily, by either transferring the shaders and then unloading the second nested reference, or simply importing the shaders from the reference and then discarding the reference from the scene.

This might actually be safer in general, not to have referenced shaders in the lighting scene.

Both here, and in the original modeling folder?

Nope. If for nothing else, then simply for the strain this might put on the network. Loading sometimes really heavy meshes twice. Plus what @tokejepsen mentioned.

BigRoy commented 8 years ago

Sweet to see everyone hop in and join the conversation.

I don't think the PaintFX, or anything really, should become a problem. It's just additional data; we don't need a mesh to have a paint effects node, do we?

It's just that I don't feel the renderable object itself should be in the published look file. Preferably anything that gets added during lookdev stays within the lookdev file and the connections are re-applied afterwards.

Let me see if I can get to a draft example the coming week.

BigRoy commented 8 years ago

Sorry about that, wrong button on my phone here.

mottosso commented 8 years ago

As this is a Maya node, that gets assigned to a mesh, I would opt for storing the mesh with the lookdev output.

It sounds like what a shader is? Could you not export just the pfx node, and re-assign it afterwards, like a shader?

tokejepsen commented 8 years ago

Maybe, but I haven't figured out how to do that yet, as it's not as easy as assigning the Maya pfx node. It was more to future-proof the tool and the workflow, so you don't have to do a lot of coding for everything you'd want to do in a lookdev session.

mottosso commented 8 years ago

Ah I see. Well in that case I would likely put a stronger definition on what a "look" really is and base the implementation on that, rather than some all-around kitchen-sink solution.

For example.

If you look at the .ma file once you've exported a shader from Maya, a shader is really just a name like "Lambert" and a series of key/value pairs, like {color: #33ffaa}. What if the bulk of look development could root itself around this simple two-step definition of a name and key/value pairs?

[
  {
    "name": "bodyShader",
    "type": "lambert",
    "color": "#33ffaa"
  },
  {
    "name": "arnold1",
    "type": "arnoldstandard",
    "ioa": False
  }
]

Once you know about the data you are producing, you could start looking into making connections.

{
  "bodyShader": [
    "polyCube1_GEO",
    "head_GEOShape[233:500]",
  ],
  "arnold1": [
    "polyTorus1",
    "polySphere1"
  ]
}
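A consumer of such a mapping could invert it so each member can look up its shader. A small sketch, assuming each member belongs to exactly one shader (component members like "head_GEOShape[233:500]" are kept verbatim):

```python
def invert_assignments(assignments):
    """Turn {shader: [member, ...]} into {member: shader}.
    Assumes each member appears under exactly one shader."""
    by_member = {}
    for shader, members in assignments.items():
        for member in members:
            by_member[member] = shader
    return by_member
```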

You could end up with a rock-solid Python library, you could develop GUIs for it that remain focused and that would be extendible to (most) every other studio out there who does any work with shaders. You could develop importers into other applications, by just mapping say "Lambert" to whatever equivalent there was, like in Nuke. Which means you could potentially start lookdeving in Nuke, and continue in Maya or Katana or whatever. It would be simple, small and relevant to any future involving the name of a shader, and a series of attributes.

For anything that doesn't fall under that umbrella, a separate, binary, more ad-hoc "misc" file could be exported alongside it. From there it would be a matter of finding patterns within that blob from which to develop another solid and robust library of tools. That blob is then the very definition of what you don't yet understand well enough to pipelinify. And maybe I wouldn't let its existence prevent you from developing towards what you already do understand deeply.

BigRoy commented 8 years ago

Note that this data can grow a lot and you're basically rebuilding the .ma format to something of your own. Not saying that it can't be done, just thinking it's overkill if the repository is Maya focused. I can see benefits of something software-agnostic (maybe there's already something like that out there?).

In that case you're building a data-set that stores unique identifiers for objects (names, UUIDs, or whatever) and node types, along with relationships like connections. Much of that won't translate easily to other applications, which makes the benefits smaller. For example: a VRayMaterial with a file node and a place2DTexture node connected to a layeredTexture, etc. How would that translate to other applications?

But the format you mention seems alright as a start. Once we have a list of what kinds of data there are and what we want to include in our "data-file", we know where to start. Having some of it (e.g. shaders) in a .ma file alongside our relationships file means we have a sidecar file to take care of. Anyway, both have up- and down-sides. Extracting the material from Maya to .ma can already be done through Maya; translating it all to a custom format would need to be built as well.

mottosso commented 8 years ago

maybe there's already something like that out there?

That's exactly my point. There would be, if this got built, in which case we'd all be improving that instead of reinventing and rebuilding the wheel to overly exact specifications each time. :)

For example: A VRayMaterial with a file node and a place2DTexture node connected to a layeredTexture, etc. How would that translate to other applications?

I'd imagine that if your studio uses this node and expects to use it in some other application, then that studio would need to consider an alternative in that application, like Katana. But if you aren't using it in other applications, then you've got nothing to worry about.

Just a thought; I personally prefer building upon the purest understanding of things, such that I can take the result and extend upon it, or mix it with some other equally pure solution, as opposed to the behemoth of ad-hoc and custom code that goes into a "I do it this way, so it's safe to make assumption A and B" kind of implementation.

tokejepsen commented 8 years ago

Note that this data can grow a lot and you're basically rebuilding the .ma format to something of your own. Not saying that it can't be done, just thinking it's overkill if the repository is Maya focused. I can see benefits of something software-agnostic (maybe there's already something like that out there?).

I would agree with @BigRoy, that it seems a bit overkill. As for something already out there; http://graphics.pixar.com/usd/

BigRoy commented 8 years ago

If USD supports connection relationships alongside attribute values and node types it could actually be an interesting fit. (It'll give some "wow" to the whole implementation.) Yet it's not publicly available yet, is it?

Anyway, as stated both have their benefits and downsides. I think better decisions can be made once those become clear and we know the requirements for what defines a "look".

tokejepsen commented 8 years ago

This is very specific to Maya, but you could maybe store the changes you made in the lookdev to an offline file; https://www.youtube.com/watch?v=y3HhJWP9yoI

mottosso commented 8 years ago

This is very specific to Maya, but you could maybe store the changes you made in the lookdev to an offline file

I would definitely explore that!

BigRoy commented 8 years ago

This is very specific to Maya, but you could maybe store the changes you made in the lookdev to an offline file; https://www.youtube.com/watch?v=y3HhJWP9yoI

Sounds interesting since it only holds the changes that were "added" by the lookdev artist.

It's a format to store changes to your referenced nodes. As such it does carry some of the Maya "issues" that we're actually trying to work around (running on assumptions here):

Maya assigns edits by matching each nodename.attribute in the edit file to the file it's being applied to. For example, you can export a reference edit for pSphere1.translateX in the scene sphere.ma. This edit is saved in the reference file as :pSphere1.translateX. You can then assign this edit to the scene ball.ma so that ball:pSphere1.translateX is edited.

  • The lookdev has to be done on referenced meshes to be able to create an "offline file". This is something that we're already doing in our pipeline. Is this something we can lock down and say will always be true?

Just pointing this out, not saying it's not worth investigating. Since it only holds "additional data" or "changes", worst-case it could act as a simple intermediate step to retrieve just that (though then I wouldn't recommend it).

tokejepsen commented 8 years ago

Sounds interesting since it only holds the changes that were "added" by the lookdev artist.

Done some very superficial testing, and it seems like it only holds changes to the reference, not any nodes created in the lookdev scene. Probably not a very good option after all. :)

BigRoy commented 8 years ago

Thanks for testing! I guess we can ignore that feature now we know it won't be a good solution.

mottosso commented 8 years ago

Since you're talking about spending a significant amount of time developing a solution, followed by more time spent learning, using and improving upon the solution, and then taking that time multiplied by the number of users involved, I wouldn't be this quick to discard new ideas.

For example, what if you could meet the technique half-way in terms of workflow? That is, rather than working as usual, and hitting Export Offline and expecting that to work for you, what if you could instead start by creating a reference from the scene, building the look on-top of that, and then export an offline?

It'd mean a small change to how you work and think, with the potential of making this technique work. And even if that suggestion doesn't, maybe there are other small or large adjustments that could be made where the sum of effort is reduced.

All I'm saying is, maybe it could be worth trying this out fully; to build a minimal pipeline using this technique, working around whatever obstacles pop up, until you have a holistic view of whether the effort put in is more or less than what you get out. If you could try 5 or more different techniques in full, you would have a pretty educated view of what works best.

BigRoy commented 8 years ago

Sure, but I don't think we're discarding it because of that. The fact is that it doesn't export additional nodes that are required to define the look. (For example the materials). Theoretically you could export the materials (extra nodes that need to be connected to the references) as .ma together with a .maEdits file for the reference edit of the mesh. Though the thing is that the materials would only be "linked" in the reference edits file by name.

The only reason to try and adopt it would be if it would provide any benefits. Yet it doesn't seem to? Nevertheless valid points that we should keep our options open until we really have defined the requirements of the tool.

mkolar commented 8 years ago

So....

What is a look? Boiling this down to a comprehensive definition for our purposes might be the biggest challenge here, I'm afraid. For some assets it's just a bunch of shaders; for others it might be shaders, special attributes on the mesh (i.e. subdivision levels), paintfx and who knows what else. Quite frankly, it might be worth asking whether paintfx and groom (yeti, xgen, S&H) are part of lookdev or a slightly separate thing. I'm afraid that trying to encompass all of these in the first instance of the tool would be major overkill, even though we ourselves use xgen for lookdev. Exporting that beast without troubles is one of the biggest struggles we've encountered.

For this reason mainly (and yesterday's struggle of trying to transfer grooms from lookdev to shot), I'm actually quite inclined towards @tokejepsen's suggestion of storing the full lookdev file with the mesh. However, as mentioned before, this mesh should always be referenced in, so we don't actually duplicate its data, and this mesh reference can always be disabled, leaving us with only the lookdev changes.

Another option that we might test is using a proxy object per shader (a simple sphere, perhaps) as the placeholder for the final one. We could make this an exact copy of the shaded mesh, apart from the actual shape of course, with all the connections and attributes.

When reconnecting we would just take all the connections from the shaded proxy and transfer them to the animated mesh. This might even be able to take all the non-shader extras with it.

reference edits: I'm not a fan of the idea, for the exact reasons mentioned here. Primarily not storing new nodes. As far as I know, lookdev is mostly about new nodes.

There would be, if this got built,

For such a big thing, I'd rather wait till USD is out, as it seems it will be able to solve loads of pipeline transfers, just as alembic did with animation.

The lookdev has to be done on referenced meshes to be able to create an "offline file". This is something that we're already doing in our pipeline. Is this something we can lock down and say will always be true?

Absolutely. I'd go for it and say, that we'll be doing lookdev purely on referenced meshes.

BigRoy commented 8 years ago

Defining what is a "look"?

what is a look ?

Actually, it seems we might be better off taking the most generic approach here. What if it's less about a "look" (referring to shaders) that a lookdev artist creates, and more about the overall relationships between the mesh and whatever it's connected to or edited by to get a certain behavior? For example, the fact that it's connected to a specific shader will render it as such. The addition of attributes with a specific value also "influences" it in a way the lookdev artist intended. Basically any change done by the lookdev artist, even connecting it to some strange paint effects object, was likely intended to get the effect they were looking for in the render.

Keeping it this generic, we can say that a "look" is any change done to a set of nodes (e.g. meshes). Such generic changes (basically anything) are actually known when it's a referenced node, e.g. as reference edits. Thus by using this information (addition of attributes, connection/relationship status, etc.) to set up a so-called "relationships file", we can reconnect it as such.

Relationships and edits

Going one step further and abstracting this... imagine all nodes in the scene can be found via an "identifier", including the shader nodes, paint effects nodes, or anything really. This way we can store the connections (e.g. to shaders or other sets) and addAttr/setAttr edits (e.g. V-Ray attributes) for all the referenced meshes, along with their identifiers. Discarding the idea of even thinking about what a look is, and considering every change instead.

Since it's such a broad concept, it might just be easier to create a method to define all these "relationships and edits" in a way that they can be re-applied for everything (even translate, rotate, scale... anything reproducible). Maybe it's much easier to filter out what we know we don't want, such as translate, rotate, scale and changes in vertex positions?

This might actually make this useful even beyond the concept of a look, but even allow you to create a nice output file that allows you to connect e.g. a simulation rig to a cached character using the identifiers.

Does that make any sense?
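That filtering idea could start as simply as a deny-list on the attribute part of each edit. A sketch, under the assumption that edits arrive as "node.attribute" strings and that plain transform channels are never part of a look:

```python
# Hypothetical deny-list; a real pipeline would likely extend this
# (vertex tweaks, pivots, and so on).
IGNORED = ("translate", "rotate", "scale")

def is_look_edit(edit_attr):
    """edit_attr is a 'node.attribute' string such as
    'pSphere1.translateX'. Returns False for plain transform edits,
    True for everything else (shading, render attributes, ...)."""
    attr = edit_attr.split(".", 1)[-1]
    return not attr.startswith(IGNORED)
```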

Referencing once

However, as mentioned before, this mesh should always be referenced in, so we don't actually duplicate its data, and this mesh reference can always be disabled, leaving us with only the lookdev changes.

This is (for me personally) a very important behavior for something like shaders (sets, and such) so that it doesn't clutter the work environment for the lighting artist. Yet this "one to all" connection might not work for something other than a shadingEngine (e.g. xGen, which was mentioned, paint effects, or hair/fur simulations). These are often nodes that relate to a single mesh and as such need a unique object (reference?) for each instance of the asset. The two furry squirrels in your scene can't connect to the same "fur" object, since that's not how the node works (basically most cached formats?).

It's important to think about this for a bit and decide which options we have. Can we easily find which nodes operate on a "one to one" instead of a "one to all" basis? And what do we do when we find them? Reference again? Warning messages? What's the user-expected behavior?

BigRoy commented 8 years ago

Regarding that generic approach of extraction: here's a quick gist I put together to retrieve all the reference edits. Using this information we can retrieve the exact data that was altered in e.g. the look development scene, and store only that in a proprietary format as "edits" to objects that can be retrieved by the "identifiers" described here.

Another way, instead of "taking the information from the meshes", is to take the shaders, sets, etc. and anything else that was added as input for an extract_look method, and from there find where they relate to a specific reference (and as such how they affect a mesh as a look).

So the two options here:

  1. Get the information of the "look" from the shapes/meshes
  2. Get the information of the "look" from what the artist thinks is the look (so the shaders, sets, etc.)

Both will have some tricky things to tackle.

For example, for option 1 we'll need to tackle the issue of what kinds of edits are considered relevant for any "look". Though the core technique of retrieving edits there can be more abstract and generalized (and as such more maintainable code?), where the look code itself filters down to what is relevant information (and could even be customized per pipeline).

For option 2 you might get the issue where data is missed because it isn't on "additional nodes", like those renderable attributes on the shapes themselves. So I guess option 1 is more feasible, since getting those "attributes" on the shapes would still mean processing the shapes themselves, and we'd effectively be reverting to option 1.

mkolar commented 8 years ago

quick gist I put together to retrieve all the reference edits.

Just a quick note on this, as I don't have time for a bigger response right now (I generally agree with everything you said though). You can actually also query a reference for any nodes that are affecting it in some way.

edits = mc.referenceQuery(ref, editNodes=True)

This results in a list of objects that are somehow connected to objects from the reference node. For example materials, but also follicles, paint strokes... anything. The nice thing is that it effectively filters out anything that doesn't add to the lookdev, like a camera created for the preview.

That, combined with your gist, could give us everything fairly easily.

BigRoy commented 8 years ago

You can actually also query a reference for any nodes that are affecting it in some way.

This is totally true. Though we also need to take note of what it's specifically changing (e.g. connecting to) on the referenced nodes, so we can store that particular edit. (Probably with sets we don't want to store the "dgSetMembers" connections, but store a "set" relationship instead.) But we could easily parse that, yes.

Here's another gist where I had a stab at parsing the full MEL edit string into something we can easily retrieve information from, e.g. the particular node that the command is operating on.

For example, with this information it should be trivial to retrieve both nodes from any connectAttr edit, or the parent node in parent edits. Anyway, this allows us to hook up all edits to the "identifier" mentioned in the Store shader assignment data topic.

This will get us to a pretty good point I think.
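For reference, a first pass at parsing such an edit string can lean on shlex, which takes care of the quoting. This is a sketch, not a full grammar for every edit command Maya emits:

```python
import shlex

def parse_edit(edit):
    """Split a MEL reference edit such as
    connectAttr "file1.outColor" "lambert2.color"
    into its command name and argument list."""
    tokens = shlex.split(edit)
    return tokens[0], tokens[1:]

def edited_nodes(edit):
    """The node names an edit touches (the part before the '.')."""
    _, args = parse_edit(edit)
    return [arg.split(".", 1)[0] for arg in args if "." in arg]
```

Flags like -f or -type would still need special handling; this only covers the simple quoted-argument shape.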

tokejepsen commented 8 years ago

I may have a decent solution for the Maya lookdev/alembic hookup. It's the opposite way of working compared to what you have outlined in the repo, but it seems to be working quite well.

When we export the alembic we get the mesh and transform animation data, which is what we would want to transfer to the look dev mesh/nodes. If you connect the attributes from the alembic nodes to the lookdev nodes, you get a setup that is very reference and update friendly.

Since the lookdev nodes are connected to the alembic transform/mesh nodes, and not directly to the AlembicNode in Maya, when you update the alembic with completely new animation the reordering of the alembic attributes doesn't matter.

Currently we are only connecting translate, rotate, visibility and inMesh attributes, but you could potentially look at custom attributes as well.

There is still the discussion about how to figure out which nodes to connect together, but here is an example of doing it by the basename;

import traceback

import pymel.core as pm


def Connect(src, dst):

    pm.undoInfo(openChunk=True)

    alembics = src
    if not isinstance(src, list):
        alembics = pm.ls(src, dagObjects=True)

    targets = dst
    if not isinstance(dst, list):
        targets = pm.ls(dst, dagObjects=True)

    attrs = ['translate', 'rotate', 'scale', 'visibility', 'inMesh']
    for node in targets:
        for abc in alembics:
            # Match by basename, ignoring any namespace prefix.
            if node.longName().split(':')[-1] == abc.longName().split(':')[-1]:

                for attr in attrs:
                    try:
                        pm.connectAttr('%s.%s' % (abc, attr),
                                       '%s.%s' % (node, attr),
                                       force=True)
                    except Exception:
                        print(traceback.format_exc())

    pm.undoInfo(closeChunk=True)

mkolar commented 8 years ago

I may have a decent solution for the Maya lookdev/alembic hookup. It's the opposite way of working compared to what you have outlined in the repo, but it seems to be working quite well.

To be honest, after the last few months I too think this is the most stable way forward. I didn't have time to work on this repo, but we've done some testing in between projects, and applying alembic animation to a 'lookdeved' mesh is way more straightforward than trying to export the full look without the mesh and reapply it to the alembic.

In terms of code it's also much less complicated, and in my Houdini-filled head it aligns better with what we do there (having a shaded asset with a parameter that only takes the path to an alembic file).

mkolar commented 8 years ago

For figuring out what nodes to connect together, I'd go with a slightly layered approach.

  1. Match objects with the same 'lookid' attribute (which could have the form {asset}/{shape} as discussed in #2 )
  2. Match to object with the same shape name
  3. Match to object with the same transform name

That way it could work on production-level complicated assets, but also on smaller things that for whatever reason didn't pass through the full pipeline (lookid assignment), provided that the names match, of course.
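Those three layers might be sketched like this (pure Python; the dicts stand in for Maya nodes, and the keys 'lookid', 'shape' and 'transform' are hypothetical stand-ins for the attribute query and the two name queries):

```python
def find_match(target, candidates):
    """Layered matching per the three steps above: the lookid
    attribute first, then the shape name, then the transform name.
    `target` and each candidate are dicts standing in for Maya nodes;
    earlier layers always win over later ones."""
    for key in ("lookid", "shape", "transform"):
        value = target.get(key)
        if value is None:
            continue  # e.g. the asset never got a lookid assigned
        for cand in candidates:
            if cand.get(key) == value:
                return cand
    return None
```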

tokejepsen commented 8 years ago

There is a correction to the above code. This is the current code we are using;

import pymel.core as pm


def Connect(src, dst):

    pm.undoInfo(openChunk=True)

    alembics = src
    if not isinstance(src, list):
        alembics = pm.ls(src, dagObjects=True)

    targets = dst
    if not isinstance(dst, list):
        targets = pm.ls(dst, dagObjects=True)

    attrs = ['translate', 'rotate', 'scale', 'visibility']
    for node in targets:
        for abc in alembics:
            # Match by basename, ignoring any namespace prefix.
            if node.longName().split(':')[-1] == abc.longName().split(':')[-1]:
                if node.nodeType() == 'transform':
                    for attr in attrs:
                        pm.connectAttr('%s.%s' % (abc, attr),
                                       '%s.%s' % (node, attr),
                                       force=True)
                if node.nodeType() == 'mesh':
                    pm.connectAttr('%s.worldMesh[0]' % abc,
                                   '%s.inMesh' % node,
                                   force=True)

    pm.undoInfo(closeChunk=True)

Notice that the worldMesh[0] is hooking into inMesh. Might need refining for the future.