GafferHQ / gaffer

Gaffer is a node-based application for lookdev, lighting and automation
http://www.gafferhq.org
BSD 3-Clause "New" or "Revised" License

Velocity based motion blur #905

Open · davidsminor opened this issue 10 years ago

davidsminor commented 10 years ago

Gaffer needs a system for defining velocity based motion blur. In the system we used for FSQ, we had a string attribute called "user:ieVelocityBlurVarName" which pointed to a vertex primvar (usually called "v"). Maybe something like that?
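
For illustration, here's a minimal plain-Python sketch of the extrapolation such an attribute implies: a renderer-side procedural would look up the primvar named by "user:ieVelocityBlurVarName" and offset P to the shutter boundaries. The function and its inputs are hypothetical, not the actual FSQ tool.

```python
# Illustrative only: extrapolate point positions to the shutter boundaries
# using a per-vertex velocity primvar, as a renderer-side procedural might.
# The function name and inputs (points, velocities, shutter, fps) are hypothetical.

def velocity_motion_samples(points, velocities, shutter=(-0.25, 0.25), fps=24.0):
    """Return (shutterOpenP, shutterCloseP) extrapolated from P and v.

    points     : list of (x, y, z) positions at the cached frame
    velocities : list of (x, y, z) velocities, in units per second
    shutter    : shutter open/close, in frames relative to the cached frame
    """
    def offset(dt_frames):
        dt = dt_frames / fps  # velocities are per second, shutter is in frames
        return [
            (p[0] + v[0] * dt, p[1] + v[1] * dt, p[2] + v[2] * dt)
            for p, v in zip(points, velocities)
        ]

    return offset(shutter[0]), offset(shutter[1])
```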

johnhaddon commented 10 years ago

This is an interesting one I think, and worth some further discussion. I can see how it would be fairly straightforward to look for such a primvar in the procedural, and monkey with the object to add motion just prior to giving it to the renderer (I presume that's what you had in mind?). But I'm not sure that's the way to go, because there are two features of Gaffer which I think are at odds with this approach.

Firstly, time is continuous and motion is also expected to be continuous (rather than consisting of discrete samples that must be interpolated outside the system). Nodes expect to be able to pull on their inputs at any given point in time (including subframes) and be given the right result, no extra work required - this is achieved in the SceneReader and AlembicSource by automatically interpolating the samples they have during compute. With the "just in time" creation of motion from velocity in the procedural, this is no longer true - some objects will move in discrete jumps.
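
To make the "pull at any point in time" behaviour concrete, here's a rough sketch of querying a SceneReader at a subframe via the Context in Gaffer's Python API; the cache path and scene location are made up.

```python
# Rough sketch: evaluating a SceneReader's output at an arbitrary (sub)frame.
# The file path and scene location below are hypothetical.

import Gaffer
import GafferScene

reader = GafferScene.SceneReader()
reader["fileName"].setValue("/jobs/example/geo/creature.scc")

with Gaffer.Context() as context:
    context.setFrame(101.25)  # a subframe; interpolation happens inside the compute
    subframeObject = reader["out"].object("/creature")
```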

Secondly, Gaffer is a procedural system, where you might insert a node at any point to perform any modification to any object. If we want to modify P, but P is subject to manipulation by the procedural at the end of the chain, we're operating on subtly inaccurate data in the middle of the chain. Any manipulation of primitive variables based on position will be wrong.

Additionally, I'm not sure how the velocity scheme extends to subframe sampling - would we need another attribute to tell the procedural what intervals in time it should expect the discrete jumps to be at, so it can sample only at those exact points? It seems like we might end up extending the scheme to support multiple primitive variables to define curved motion too, doesn't it, further complicating things?

I'd like to suggest two alternative solutions:

  1. The interpolation using velocity is applied automatically by the SceneReader.
  2. The interpolation using velocity is applied explicitly using a VelocityInterpolate node, typically applied just after the SceneReader.

I think I prefer 2). It keeps complexity out of the SceneReader, and allows more flexibility for different interpolation schemes in the future without complicating other parts of the system at all. It's very much analogous to retiming sequences with optical flow in comp, where the input is always a discrete bunch of frames, and a node is used to explicitly generate the images between frames.
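
To make option 2 concrete, here's a purely hypothetical sketch of the proposed graph in Gaffer's Python API - the VelocityInterpolate node and its plugs don't exist yet, and the names are illustrative only:

```python
# Hypothetical sketch only - neither VelocityInterpolate nor its plugs exist yet.

import Gaffer
import GafferScene

script = Gaffer.ScriptNode()

script["SceneReader"] = GafferScene.SceneReader()
script["SceneReader"]["fileName"].setValue("/jobs/example/geo/fx_points.scc")

# Proposed node, placed immediately downstream of the reader, so that every
# node after it sees correct positions at any (sub)frame.
script["VelocityInterpolate"] = GafferScene.VelocityInterpolate()  # hypothetical node
script["VelocityInterpolate"]["in"].setInput(script["SceneReader"]["out"])
script["VelocityInterpolate"]["velocityPrimVar"].setValue("v")  # hypothetical plug
```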

What do you think? Having written this I feel pretty strongly that 2) is the way to go...

bentoogood commented 10 years ago

Quick question raised by this:

Does the SceneReader currently automatically interpolate discretely sampled caches where the point number & order is consistent? Are there any controls for the interpolation (e.g. linear/cubic)? If so, would it make sense for VelocityInterpolate to just be an option on the SceneReader itself? Or alternatively, would it add flexibility to have a TimeSampleInterpolate node for the non-velocity based case?


johnhaddon commented 10 years ago

Good questions.

Technically, the SceneReader doesn't automatically interpolate anything - behind the scenes it defers to Cortex SceneInterfaces, and they are responsible for responding to queries at any point in time. In practice yes, it automatically interpolates caches where the topology is consistent, because that's what the Cortex SceneCache does behind the scenes. The SceneCache only supports linear interpolation, but I've yet to use anything other than linear in production, despite our older animation format supporting cubic. Any time we needed better subframe motion, we needed more samples to achieve it rather than different interpolation. As I understand it, if the mesh topologies don't line up between samples, then no automatic interpolation will be done, which is where this issue arises.
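
For reference, the linear case amounts to no more than this - a plain-Python illustration, not the actual Cortex SceneCache code:

```python
# Plain-Python illustration of linear interpolation between two cached samples
# with identical topology; not the actual Cortex SceneCache implementation.

def lerp_positions(p0, p1, t0, t1, t):
    """Linearly interpolate per-point positions from samples at times t0 and t1."""
    if len(p0) != len(p1):
        raise ValueError("topology changed between samples; cannot interpolate")
    a = (t - t0) / (t1 - t0)
    return [
        (x0 + (x1 - x0) * a, y0 + (y1 - y0) * a, z0 + (z1 - z0) * a)
        for (x0, y0, z0), (x1, y1, z1) in zip(p0, p1)
    ]
```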

I can see the argument for moving all the interpolation outside the SceneReader, or moving the velocity support inside it, but I'm going to suggest that we don't. By far the most common case for the SceneReader is consistent topology and automatic interpolation being sufficient - so I think that should just work out-of-the-box, as is currently the case. Changing topologies are the less common case, where more thought/setup is always going to be required. So I think requiring the extra node and providing the extra control in just this one case is the most appropriate thing to do. If at some point we have a specific need, we could add a "Disable Interpolation" checkbox on the SceneReader to allow funky custom interpolations to be done later in the graph even when the topologies are consistent - but we should wait to see if such a need arises.

On a side note, there are actually several continuous-time (non-sampled) SceneInterface implementations in Cortex, one which reads from a live Houdini scene and one which reads from a live Maya scene. David is currently doing some jolly interesting things with them...

carstenkolve commented 10 years ago

I guess the main application would be fx elements (like pointclouds used as templates for rendertime geo instantiation etc.). As this kind of data can get quite big and can change drastically from sample to sample - what are the requirements for the scene reader to be able to track consistent parts of the topology for interpolation purposes? Is an "id" primvar sufficient? At what point is the uniqueness of an "id" at a specific sample time guaranteed? Can id values be reused over time? Does it have an impact on how the data is sorted?
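
For what it's worth, the id-based matching being asked about would presumably amount to something like the sketch below - an illustration only, not a description of what Cortex or the SceneReader actually does: build an id-to-index map per sample and interpolate only the points present in both.

```python
# Illustrative sketch of id-based correspondence between two point samples
# whose counts/order differ; not a description of Cortex's actual behaviour.

def interpolate_by_id(ids0, p0, ids1, p1, a):
    """Interpolate positions for point ids present in both samples.

    ids0/ids1 : per-point integer ids at each sample (assumed unique per sample)
    p0/p1     : per-point (x, y, z) positions at each sample
    a         : interpolation parameter in [0, 1]
    """
    index1 = {pid: i for i, pid in enumerate(ids1)}
    result = {}
    for i, pid in enumerate(ids0):
        j = index1.get(pid)
        if j is None:
            continue  # point absent from the second sample; nothing to interpolate
        x0, y0, z0 = p0[i]
        x1, y1, z1 = p1[j]
        result[pid] = (x0 + (x1 - x0) * a, y0 + (y1 - y0) * a, z0 + (z1 - z0) * a)
    return result
```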


carstenkolve commented 9 years ago

@davidsminor: afaik this has been implemented? Can this ticket be closed?

davidsminor commented 9 years ago

I've done an internal IE tool for this, but it ain't in gaffer at the moment