RenderKit / ospray

An Open, Scalable, Portable, Ray Tracing Based Rendering Engine for High-Fidelity Visualization
http://ospray.org
Apache License 2.0

No way to use TextureVolume on transformed texture? #418

Closed paulmelis closed 4 years ago

paulmelis commented 4 years ago

As using a TextureVolume is the new way of slicing, I was wondering whether the 2.1 API currently supports slicing only in a fairly limited way. TextureVolume takes a VolumetricModel reference, meaning it can only access the untransformed original volume extent, so any geometry to be colored by a TextureVolume must be located within that untransformed extent. The docs say "[t]he volume texture type implements texture lookups based on 3D world coordinates of the surface hit point on the associated geometry." I read this as: the sample position on the transformed geometry (i.e. the Instance) is used, not the untransformed geometry (i.e. the Geometry). But that implies the placement of slicing geometry is limited by the untransformed volume extent, and a geometry cannot be moved without influencing the volume-based coloring.
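For concreteness, here is a minimal sketch of the setup I mean, assuming the OSPRay 2.x C API; attaching the texture via an "obj" material's map_kd slot is just one illustrative choice, not the only one:

```cpp
// Sketch (assumption: OSPRay 2.x C API). A TextureVolume samples the
// VolumetricModel it references, so the slice geometry must overlap the
// untransformed volume extent for the lookups to return data.
#include <ospray/ospray.h>

OSPTexture makeSliceTexture(OSPVolume volume, OSPTransferFunction tf)
{
  // volumetric model in its untransformed (object-space) extent
  OSPVolumetricModel volModel = ospNewVolumetricModel(volume);
  ospSetObject(volModel, "transferFunction", tf);
  ospCommit(volModel);

  // texture that samples the volume at the hit point
  OSPTexture tex = ospNewTexture("volume");
  ospSetObject(tex, "volume", volModel);
  ospCommit(tex);
  return tex;
}

OSPGeometricModel makeSliceModel(OSPGeometry sliceGeometry, OSPTexture tex)
{
  // illustrative choice: attach the texture as the diffuse map of an "obj"
  // material (renderer-type argument as required by the 2.x signature)
  OSPMaterial mat = ospNewMaterial("scivis", "obj");
  ospSetObject(mat, "map_kd", tex);
  ospCommit(mat);

  OSPGeometricModel model = ospNewGeometricModel(sliceGeometry);
  ospSetObject(model, "material", mat);
  ospCommit(model);
  return model;
}
```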

The use case I was thinking of is having multiple copies of the same volume data side by side in the camera view, with each volume instance showing a different slicing geometry within its extent (an alternative use case: the same slicing geometry but with different volume datasets side by side). But the world-space placement of the slicing geometries cannot be matched with the untransformed volume extent, so this use case is currently impossible to realize.

Just curious if my conclusion is correct here?

johguenther commented 4 years ago

We are about to add 3D texture transformations for the volume texture (similar to the existing 2D texture coordinate transforms for texture2d), for the reasons and use cases you described. During implementation we also realized that it is probably better to have the TextureVolume lookups based on local object coordinates (of the geometry it is applied to) instead of world coordinates. Opinions?

paulmelis commented 4 years ago

Ah, 3D texture transforms could indeed solve this, albeit in a somewhat convoluted way when the slicing geometry has a complex transform. The current Texture2D transform support does allow a general 2x2 matrix, but that leaves out the translation part, so setting the inverted transform of the geometry (in 3D) is generally not possible if translation can't be specified (yes, it can be given as separate parameters, but that forces one to decompose the matrix into those parts).
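To illustrate the decomposition nuisance: assuming the transform is a pure rotation-plus-scale (no shear), recovering the separate parameters would look roughly like this (names and struct layout are purely illustrative):

```cpp
#include <cmath>

// Illustration only: recover rotation/scale/translation from a 2D affine,
// assuming M = R(theta) * diag(sx, sy), i.e. no shear (column-vector convention).
struct Affine2D { float m00, m01, m10, m11, tx, ty; };

void decompose(const Affine2D &a, float &thetaDeg, float &sx, float &sy,
               float &tx, float &ty)
{
  const float pi = 3.14159265358979f;
  thetaDeg = std::atan2(a.m10, a.m00) * 180.0f / pi;
  sx = std::hypot(a.m00, a.m10); // length of first column
  sy = std::hypot(a.m01, a.m11); // length of second column
  tx = a.tx;
  ty = a.ty;
}
```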

Having TextureVolume lookups in object space versus world space: neither is optimal, I would say. If the lookup is done in object space, you are forced to specify the underlying geometry directly at the right location with respect to the volume extent, as transformations aren't available. In world space at least some transformation of the slice geometry can be done (e.g. orienting a slice plane within the volume), but it still locks the geometry placement to the untransformed volume.

The best of both worlds would be to specify both the slicing geometry and the volume in world space, allowing maximum freedom. But I'm sure that has both performance and design downsides. I guess being able to set a 3D texture transform on the TextureVolume to transform into volume space, with the slicing geometry freely placeable in world space, is good enough.

johguenther commented 4 years ago

The plan is to have both: lookups in object space, plus 3D transformations (as a 3x4 affine matrix, including translation). This should work nicely as long as the manipulation (like orientation) of the slice geometry is done via vertex position updates (or by updating the plane equation) and not via an ospInstance transformation (which would then need to be countered by setting its inverse as the texture volume transform).
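A sketch of the first option, updating the plane equation in place, assuming the "plane" geometry type with its "plane.coefficients" parameter (error handling omitted):

```cpp
// Sketch (assumption: OSPRay 2.x C API with the "plane" geometry type).
// Reorienting the slice by rewriting the plane equation keeps the texture
// lookups consistent without touching any instance transform.
#include <ospray/ospray.h>
#include <ospray/ospray_util.h>

// coeffs must stay alive while the geometry uses it (shared data)
void updateSlicePlane(OSPGeometry planeGeom, const float coeffs[4])
{
  // plane: a*x + b*y + c*z + d = 0
  OSPData data = ospNewSharedData1D(coeffs, OSP_VEC4F, 1);
  ospCommit(data);
  ospSetObject(planeGeom, "plane.coefficients", data);
  ospRelease(data); // the geometry holds its own reference after commit
  ospCommit(planeGeom); // re-commit rebuilds the geometry's acceleration structure
}
```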

paulmelis commented 4 years ago

How about a matrix parameter on TextureVolume that specifies how the geometry used for sampling is to be mapped into the volumetric domain? That matrix would also need to be updated on each movement of the sampling geometry (or of the volume, when moved in world space), but if S is the object-to-world transform of the sampling geometry and V the object-to-world transform of the volume, you would only need to keep S*(V^-1) set on the texture volume (if my quick math is correct). Having to update only a matrix on TextureVolume is less of a hassle than having to update geometry at the vertex level on every interaction, as that means updating the vertex buffer, the normals, the Geometry object, etc. Doesn't the latter also involve rebuilding the BVH and such? That might get expensive for more complex sampling geometry.
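In code the composite transform is a one-liner; here is a sketch using rkcommon's affine3f and its column-vector convention (where the exact texture parameter it would be assigned to is left open, since that part of the API is still being designed):

```cpp
// Sketch (assumption: rkcommon math types, column-vector convention,
// i.e. worldPoint = xfmPoint(S, geometryObjectPoint)).
#include "rkcommon/math/AffineSpace.h"

using namespace rkcommon::math;

// Maps a point given in the sampling geometry's object space into the
// volume's object space: first into world space via S, then back into
// volume space via V^-1.
affine3f geometryToVolume(const affine3f &S, const affine3f &V)
{
  return rcp(V) * S; // rcp() is rkcommon's affine inverse
}
```

With a row-vector convention this is the S*(V^-1) written above; either way it is a single matrix update per interaction.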

johguenther commented 4 years ago

Volume texture look-ups are now in local object space, and materials have 3D texture transformations.