Maybe I'm misunderstanding, but does the filter node not define the set already? Once you make one filter, can you plug it in to all the nodes that need the equivalent filter?
Yep - you can share filters that way, and I think that'll be fairly common when you're in charge of the entire node graph. But when we start referencing in scripts from other people and having them feed out of boxes as a single connection, we'll need a way to pass these named sets along with that connection. Does that make sense? So lookdev might make a set called "allTheMetalThings" and it'd feed down with the scene and be available for use in a filter.
I think we'll implement this by storing PathFilterData in the scene globals...
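For concreteness, here's a minimal sketch of what a named set stored in the globals might look like from the scripting side, using PathMatcher (the structure behind PathFilter). The `set:` key convention and the use of the IECore module for PathMatcherData are assumptions for illustration, not the final design.

```python
# Minimal sketch: a named set stored in the scene globals as PathMatcherData.
# The "set:" key prefix is an assumption for illustration only.
import IECore

allTheMetalThings = IECore.PathMatcher()
allTheMetalThings.addPath( "/scene/props/tank" )
allTheMetalThings.addPath( "/scene/props/robot/leftArm" )

sceneGlobals = IECore.CompoundObject()
sceneGlobals["set:allTheMetalThings"] = IECore.PathMatcherData( allTheMetalThings )

# Downstream, a filter could look the set up by name and test locations against it.
metalThings = sceneGlobals["set:allTheMetalThings"].value
assert metalThings.match( "/scene/props/tank" ) & IECore.PathMatcher.Result.ExactMatch
```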
This is indeed related to tags, isn't it? If you can get the global PathFilterData auto-updating when the scene hierarchy changes, then you've basically got tags, haven't you... (I think this auto-updating is desirable, because it makes the workflow more robust.)
Yep - I think auto-updating is a reasonable expectation, and pretty doable.
People have already asked for PathFilter paths to automatically be updated when they change the scene hierarchy in some way, but I don't think that one is reasonable (or possible). Because a Filter can be used in many places, and because a node can arbitrarily change the hierarchy (including using expressions and doing it differently in different contexts), it's not reasonable to modify the plug values on a PathFilter to match - because it's a static value that can't be got right for all those possibilities. But with PathMatcherData flowing through the graph as data, the node modifying the hierarchy can make the appropriate modifications to the data at the same time. Hierarchy modifying nodes already have to do this for the forward declaration lists for lights - which I think could just be reimplemented to use the new mechanism we're discussing.
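To make the remapping idea concrete, here's a hedged sketch of what a hierarchy-modifying node would do to the set data as it reparents locations. `remapSet` is a hypothetical helper, not existing Gaffer API, and a real implementation would work on the PathMatcher tree directly rather than rebuilding path strings.

```python
# Hedged sketch: remapping set membership alongside a hierarchy edit.
import IECore

def remapSet( inputSet, oldPrefix, newPrefix ) :
	"""Returns a copy of inputSet with any members under oldPrefix
	re-rooted under newPrefix, mirroring a reparenting of the hierarchy."""
	result = IECore.PathMatcher()
	for path in inputSet.paths() :
		if path == oldPrefix or path.startswith( oldPrefix + "/" ) :
			result.addPath( newPrefix + path[len( oldPrefix ):] )
		else :
			result.addPath( path )
	return result

s = IECore.PathMatcher()
s.addPath( "/scene/lights/keyLight" )
s.addPath( "/scene/props/chair" )

# A node parenting /scene/lights under /scene/rig would remap the set too.
s = remapSet( s, "/scene/lights", "/scene/rig/lights" )
assert s.match( "/scene/rig/lights/keyLight" ) & IECore.PathMatcher.Result.ExactMatch
```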
The one tricky part of auto-updating the PathMatcherData would be where it contains wildcards - how do we remap "/scene/objects/house*/.../nonUniqueNameWhichIsModifiedAtOnlyOneLocation"? Is it reasonable to abandon remapping after the first wildcard in a path is hit?
Would you just bake wildcards into something more explicit when converting to a filter set? Filter sets seem like something you'd expect to be explicit anyway, and there isn't really a way of keeping them implicit if people start defining them with more exotic filters, like I dunno - bounding box filters or the union filter.
Of course you run the risk of these things getting pretty big if you go down that road.
I think the option to use wildcards should definitely be available, and then it's up to people how they choose to use it.
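As a self-contained illustration of the "bake wildcards into something explicit" idea, here's a toy expansion of a single-level wildcard pattern against a nested-dict stand-in for a scene hierarchy; a real version would traverse a ScenePlug and would also need to handle `...`.

```python
# Toy sketch: expanding a wildcarded path into explicit set members.
# The nested-dict "scene" is a stand-in for real hierarchy traversal.
import fnmatch

def expand( hierarchy, pattern ) :
	results = []
	def walk( node, path, tokens ) :
		if not tokens :
			results.append( path )
			return
		for name, child in node.items() :
			if fnmatch.fnmatchcase( name, tokens[0] ) :
				walk( child, path + "/" + name, tokens[1:] )
	walk( hierarchy, "", pattern.strip( "/" ).split( "/" ) )
	return results

scene = {
	"scene" : {
		"objects" : {
			"houseA" : { "windows" : {} },
			"houseB" : { "windows" : {} },
			"tree" : {},
		},
	},
}

print( expand( scene, "/scene/objects/house*/windows" ) )
# ['/scene/objects/houseA/windows', '/scene/objects/houseB/windows']
```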
Since we're moving forward with these sets in place of tags, the SceneReader will need to create the sets based on tags within the file. @ldmoser is wondering if this mechanism will get overwhelmed by the number of tags (i.e. each location with a shape has the ObjectType tag, does that correspond to one set containing all leaf locations?). How does the complexity of large environments (LinkedScene) impact creating these sets? We can provide an example LinkedScene for performance testing.
I think this is going to be something we need to worry about, yes. I was planning on just not loading the ObjectType tag (at least by default). It should be used to construct a set for lights and one for cameras, but I think we should just ignore the other types. It also has implications for where user tags are placed - it's much better to place them higher up in the hierarchy than at the leaves. Or it could be argued that they shouldn't be stored in the file at all, but introduced by a Set node in Gaffer.
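For reference, the kind of traversal the SceneReader would be doing is roughly the following. Treat it as a sketch rather than exact API - SceneInterface has moved between IECore and IECoreScene across Cortex versions, readTags() takes a filter argument in some versions to distinguish local from inherited tags, and the cache path is obviously a placeholder.

```python
# Rough sketch of building one PathMatcher set per tag while walking a SceneCache.
import IECore
import IECoreScene

def buildSetsFromTags( scene, path = "/", sets = None ) :
	if sets is None :
		sets = {}
	# Assumption: readTags() with no arguments returns the local tags at this location.
	for tag in scene.readTags() :
		sets.setdefault( str( tag ), IECore.PathMatcher() ).addPath( path )
	for childName in scene.childNames() :
		childPath = path.rstrip( "/" ) + "/" + str( childName )
		buildSetsFromTags( scene.child( childName ), childPath, sets )
	return sets

scene = IECoreScene.SceneInterface.create(
	"/path/to/someCache.scc",  # placeholder path
	IECore.IndexedIO.OpenMode.Read,
)
sets = buildSetsFromTags( scene )
```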
Linked scenes are an interesting case - if instead of a LinkedScene, we actually used Gaffer to instance/duplicate/transform/group/parent subscenes together, we could gain a lot of efficiencies by improving the PathFilter (the underlying data structure that will be used for sets) to allow chunks of the internal tree to be instanced/referenced/shared-between-sets/copied-on-write.
I think we should assume that users are routinely going to put millions of objects in these sets, and not demand that they use them sparingly. There are legitimate reasons to want to tag vast numbers of leaf nodes - eg windows in a massive city layout, foliage in a forest layout etc. You're not gonna want to group all the foliage under a different hierarchy to the tree trunks, or put the windows under a different hierarchy to the bricks. Node based scene management was introduced to handle enormous scenes, so it doesn't make much sense if one of the core components doesn't scale well.
Also, while certain tasks like generating forests would benefit from Gaffer's proceduralism, I think there are other things which it makes more sense to assemble in maya at the moment, and I don't want to be forced away from our stable LinkedScene based pipeline just yet...
I think David is correct - it's going to be tricky to always have one single hierarchy arrangement to cover everyone's needs. Anything that could be done to help users work around that seems worthwhile.
If we must stick with LinkedScenes then it seems pretty doable to build the set in such a way that it mirrors the LinkedScene structure - so that we're not duplicating the same substructure over and over, but instead instancing sets from each link into a larger set. That would use the same low level stuff as things like Group nodes would use to build a new prefixed set from its input sets. Maybe we should start with some of our biggest assets from current shows and do a bit of analysis of the cost of loading sets upfront? If you could send me some appropriate ones I'd be happy to take a look - if it's prohibitive then I guess we'll need to look at some alternative mechanism (although since these sets were originally requested for another purpose, I think they'll still have their own uses, and at least be a nice generalisation of the global light list).
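To illustrate the "prefixed set" part (ignoring the instancing optimisation for now), a Group-style node could naively rebuild its output set like this; `prefixSet` is a hypothetical helper, and the whole point of the proposed instancing/copy-on-write support would be to avoid this path-by-path copy for huge inputs.

```python
# Naive sketch of building a prefixed output set, as a Group-style node might.
import IECore

def prefixSet( inputSet, prefix ) :
	result = IECore.PathMatcher()
	for path in inputSet.paths() :
		result.addPath( prefix + path )
	return result

lights = IECore.PathMatcher()
lights.addPath( "/environment/sun" )

grouped = prefixSet( lights, "/group" )
assert grouped.match( "/group/environment/sun" ) & IECore.PathMatcher.Result.ExactMatch
```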
I did think though that we had discussed that we would need to use some explicit instancing in order to enable high level instancing in 3delight - are you now changing your mind on that?
Yeah, was gonna say you could probably use the same tricks on the linked scene and the groups etc... Also, maybe if building these sets when you load a scene cache proves too slow, you could start explicitly writing them into the scene caches (or at least include some up front declarations that speed the process up). If you did that and got rid of the tags then it probably wouldn't hurt the file sizes. I guess you could even change the mechanism for storing tags so it's a bit more explicit and leave the API the same, but also make it easy to get to the object set representation? Thinking out loud...
Anyway, if you want a linked scene, this one's pretty big:
/data/jobs/FSQ/sequences/SC/.jabuka/sequenceLayout/masterLayout0300s/versions/0013/sceneCache/sceneCache.lscc
Another thing I've gotta do of course, is make sure I can implement these things efficiently in the MayaScene...
With regards to explicit instancing in procedural layouts, I didn't object to it because I felt the situations that would benefit from it the most would also benefit from proceduralism (eg forests). I don't think it's an appropriate tool for all our scene assembly needs though - eg Edmond's currently pretty happy just bashing together city layouts in maya and using the graph to manage shader assignments etc. I think using the graph for the layout stuff he's doing would get a bit awkward.
As an aside, it'd be possible to turn a layout from maya into an auto generated gaffer network at publish time wouldn't it - maybe it'd be interesting to experiment with that sometime.
Sounds good - I was musing along similar lines with storing sets/tags as a single chunk in .scc files to avoid the traversal associated with recovering tags in their present form.
Good point about the hand placed layouts too - that will definitely have to stay in a world with manipulators for a good while. It would be interesting to explore publishing that as a graph though, especially if it provides potential for optimisation (putting in high level instances where needed perhaps).
Seems like the thing to do is to start playing with some real data and getting an idea of the performance we can expect. Thanks for the link to the scene - I'll let you know how I get on...
Here's a quick progress update:
Which brings us to a question - how to implement a SetFilter? This would be the first filter type in Gaffer which requires access to the input scene - the PathFilter just works using paths which are already present in the Context, and the UnionFilter just uses other Filters as input. David has already implemented filters with scene access internally at Image Engine, but in a more constrained scenario where he's able to provide the input scene via a side channel - this won't work in Gaffer itself.
The most obvious solution is to put an input ScenePlug on the SetFilter. But that would require that the user plugs the scene into it before things would work. This might get annoying. It also doesn't quite make sense if the filter is then applied to two different streams, where one stream has a different source scene than the other.
Although there might be scenarios where you'd like to filter one scene based on queries in another, they don't seem common enough to justify this weirdness. And what about the cases where you want to share a SetFilter between two streams, using the appropriate stream for the filter query in each case?
For the single stream case, we could automatically connect the input scene up when the user connects the filter. And we could avoid the "two streams" issue simply by preventing the filter from being used in two places, or recommending against it. We even have ticket #61, which requests that we hide Filters from the graph anyway and always parent them under the node they're used on - that would preclude sharing between streams and make the auto-connection trivial. But I have a feeling people quite like sharing between streams, and for more complex filters like the UnionFilter and David's MatchUnrelated, it's quite natural to see the filters as graph items.
The other option would be a little funky behind the scenes - place the input ScenePlug in the Context when querying the filter result. Then each stream would be queried with the appropriate input scene and users would get the full sharing they're used to.
This is the option I intend to start exploring today, and the one David and I were tending towards when we spoke about this a while ago. It's more flexible, but stretches the use of the Context in ways I'm a little uneasy about. So now is a good time to speak up if you feel strongly that filters should be banished from the graph and that sharing was always inherently evil - because if that's the case then we can go for a much simpler implementation and I can rest easier.
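To make option two a bit more concrete, here's a conceptual sketch of the flow - not working Gaffer code. The way a scene is referenced from the Context, the `scene:filter:inputScene` variable name, and the filter's output plug name are all hypothetical; the awkwardness of identifying a plug via the Context is exactly the part I'm uneasy about.

```python
# Conceptual sketch only: the filtered node publishes its input scene via the
# Context before querying the shared filter, so each stream sees its own scene.
import IECore
import Gaffer

def evaluateFilter( filterNode, scenePlug, path ) :
	context = Gaffer.Context( Gaffer.Context.current() )
	context["scene:path"] = IECore.InternedStringVectorData( path.strip( "/" ).split( "/" ) )
	# Hypothetical variable identifying the querying stream's scene, so a
	# SetFilter could read the named sets from *that* scene's globals.
	context["scene:filter:inputScene"] = scenePlug.relativeName( scenePlug.ancestor( Gaffer.ScriptNode ) )
	with context :
		return filterNode["out"].getValue()  # output plug name may differ
```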
P.S. Since the meat of the user input for a SetFilter is actually done in the Set node that created the set, you could argue that sharing vs not sharing is a bit irrelevant - the settings on the SetFilter node are trivial enough to just not care and duplicate the filter. But the mechanism we'll arrive at here opens the way to AttributeFilters and BoundFilters and FrustumFilters and so on, so it's an important one to get right.
Quick opinion poll at IE shows that most power users prefer filters to exist in the graph, for reuse, but also for clarity. If they were removed from the graph, then nodes with a filter applied would have to visually indicate that in the graph in some way. The reuse cases seemed legitimate though, so maybe we'd best leave them there.
There was one suggestion to hide filters created directly from the Filter tab, have nodes like Transform start with a default * PathFilter, but then also let a plugged-in filter override that for the more complex scenarios. I guess that suggestion would require the visual indication for hidden filters as well.
Long term, it seems like there are a bunch of clever UI things it would be nice to do with filters, which would make them more visually distinct, and having them hidden by default for simple cases maybe should be part of that.
But it does sound like people like the ability to share them enough that you're probably right to try and get the hard approach working.
Cool. I shall continue with the harder approach.
Hi, could we consider some inheritance system, where a project lead could set up a bunch of premade filters in a separate $project_globalFilters.gfr that would be sourced into the dependent scenes opened by the other users? Those global filter presets could show up in the "add..." dropdown menu. The right-side plug circle could change to a bright, contrasting colour which quickly indicates the node is being filtered. Custom shot-level filtering could then be done by using the traditional nodes, and be plugged in the hard way.
As a user, most of the filters I want to reuse are show/sequence generic and applicable to multiple shots. I have to recreate them every time, and email gaffer code to colleagues to spread them out across similar shots. I'm positive a sequence lead would benefit from having input on inherited filters distributed automatically to all his artists, a little bit like setting show render passes. Those sourced filters would be totally fine not showing up as hard visible nodes if there was a visual cue that a node is being filtered.
That sounds entirely reasonable. Are these filters you want to share typically PathFilters or more often a little network of the IE custom filters? If it's PathFilters then this Sets ticket already does pretty much exactly what you want...
But if you actually want to share graphs of Filter nodes (rather than sets), then this ticket isn't about that at all. @thiasbxl actually had a related request when building shader networks - he was finding that he had a number of texture nodes which were common to a lot of his networks and wanted a quick way of plugging them in at various locations in his graph, without having to navigate a large graph to find them, and without having connections sprawled across his graph once the connection was made.
I wonder if it would be possible to address both requirements with some sort of "Bookmarks" or "Shortcuts" concept. You would take all the useful nodes you wanted, and either plug them in to a Shortcuts node, or place them in a special box. They would then become available directly in menus for all compatible plugs in the NodeEditor and NodeGraph. So you could put together a little collection of these useful nodes, publish them, and then have quick access for plugging them into various parts of the graph. Does that sound like it might do what you want?
Hmm... People often complain about shader assignments hitting every node in the hierarchy by default (for example). Sounds like this kind of system could be adapted for adding default "/*" filters to nodes like this.
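A small sketch of what that might look like, assuming present-day plug names (they may have differed at the time) - the node could be created with a PathFilter already attached, and the "/*" default just echoes the suggestion above:

```python
# Sketch: a ShaderAssignment created with a default PathFilter already connected.
import IECore
import Gaffer
import GafferScene

script = Gaffer.ScriptNode()

script["defaultFilter"] = GafferScene.PathFilter()
script["defaultFilter"]["paths"].setValue( IECore.StringVectorData( [ "/*" ] ) )

script["shaderAssignment"] = GafferScene.ShaderAssignment()
script["shaderAssignment"]["filter"].setInput( script["defaultFilter"]["out"] )

# A user could later override this by plugging in a more complex filter network.
```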
Is there a ticket for this "shortcuts" thing? This is getting off topic, innit?
@johnhaddon: on the shading network example, it sounds like a reference/clone node system based on existing same-level nodes, which I totally +1. Having done more complex lookdev recently, being able to reference a node without dragging a giant link through the board would be a great ergonomic gain.
The bookmarks idea sounds like something that would definitely be useful. It reminds me of the versatility of Nuke's scriptlets. I guess the approach I was thinking of was something more transparent, which didn't rely on artists creating physical nodes through additional bookmarked networks in their scenes. For those cases, I believe we can already create various gaffer boxes that we can list/get/import through our asset management system.
I was also under the impression that "sets" were something more rigid and asset-based, being published and carried through an asset. I was concerned about the need to ask the upstream department for asset republishing every time we want a set update. Talking with David, I understand this can be implemented and managed in a more external way too.
I understand now that I was referring more to something like predefined PathFilters, which would eventually use the selection sets described in this ticket. Getting off topic, I reckon - I believe David created a more suitable ticket for the matter I'm describing.
Thanks, LD