Closed bdcline closed 7 years ago
Hello Bryan,
If I am reading this correctly, we are looking at our (currently missing) support for Z-Brush's smooth UV mode (mode "3") in the EvalLimit module? (For, say, generating hair follicles on displacement-mapped character scalps?)
As you very astutely noticed from #176, this feature is not included yet. This is mostly due to the fact that Eval is built on top of the Feature Adaptive analysis, which cannot rely on PRman's code to pre-compute the face-varying data. I have been dreaming up a few ideas on how to leverage our patch system to implement this, but pushing the face-varying data to the limit will require modifying the feature-adaptive algorithm along with writing all the matching boundary compute-kernels.
Falling back on to a discrete approach is of course a possibility with raw Hbr. The problems here are going to be managing memory and the difficulty of threading the computations. The Eval module was originally designed as a replacement for our legacy code that generates grids of discrete samples, which definitely suffers from these performance limitations. Support for bi-cubic face-varying interpolation was not a requirement for a first pass implementation, although it is in the roadmap.
I am very tempted to suggest that if there is any chance of converting those textures to Ptex, it would simplify everything quite a bit, especially a lot of the partial derivation work... As a side question: I would love to hear more about the practical use-cases where smooth UVs are able to resolve artefacts better than the bi-linear mode (aka mode "0"). I have been investigating this a little recently, but wasn't able to come up with anything solid within the studio.
Hi Manuel,
You are correct, the issue I ran into was missing support for Z-Brush (mode 3) smooth UV in the EvalLimit module. More generally, I was also trying to get a sense of generic face-varying support and, if it is on the roadmap, how far out you think that would be... e.g., if it is a long way off, whether you have any recommendations on how to work around it.
This application, however, doesn't pertain to fur/hair - although that is definitely another domain I would like to consider using OpenSubdiv for.
A few options seem to be:
1) Don't worry about it (if support is just around the corner).
2) Use EvalLimit for P and dP partials, but fall back to using HBR / similar methods for ST and dST partials.
3) Use an alternative API call I might not be aware of.
Another option I'm kicking around is not evaluating the limit surface directly, but adaptively refining (HBR?) the mesh based on a metric that includes a variety of aspects: P curvature, ST curvature, etc. Additionally, I was hoping that the ST partials might be available via an API call - to simplify the code and avoid falling back on discrete differentials, which currently have artifacts immediately adjacent to EVs.
I'm curious about an aspect of your post: "The Eval module was originally designed as a replacement for our legacy code that generates grids of discrete samples, which definitely suffers from these performance limitations." Since you mention 'legacy code', is there a preferred way of evaluating the surface? In this particular situation, we're also evaluating a grid of samples.
On the PTEX side: PTEX is a great method that solves a whole class of issues. Unfortunately, it doesn't fit well in this circumstance for a couple reasons: workflow / pipeline, and issues with filtering across local coordinate systems.
On a final note, regarding our use for smooth UVs: we want the border to be unchanged / held, and one reason we didn't use bilinear (mode 0) is that a PRMan forum post noted that bilinear interpolation can cause texture swimming around EVs. https://renderman.pixar.com/forum/showthread.php?s=&threadid=2592
" The old facevarying scheme (facevaryinginterpolateboundary 0) simply performed bilinear interpolation across each face. As we quickly found out this scheme caused the texture space to swim particularly near regions close to extraordinary vertices "
Unfortunately support for smooth UVs at the limit is a substantial amount of work which has not been prioritized yet, so I probably wouldn't call it "around the corner"...
Depending on the sampling rates, discrete solutions could tide you over, but for high-res textures, I am going to guess that raw uniform subdivision will show its limits rather quickly. The only way that I know of to generate large grids of samples is to break down faces into bi-cubic patches in the way that the feature adaptive code does, then evaluate those patches and fall back on approximations around extraordinary features. This also makes it possible to generate the partials for the face-varying data.
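The per-patch step described above can be sketched in isolation. The following is a minimal, self-contained illustration (not OpenSubdiv's API) of evaluating one regular bicubic uniform B-spline patch and its parametric partials from a 4x4 grid of control points; the control-point layout is an assumption for the example:

```python
# Sketch: evaluating one bicubic (uniform B-spline) patch and its partials,
# the regular-patch step of a feature-adaptive evaluation pipeline.
# Illustrative only; not OpenSubdiv's actual API.

def bspline_weights(t):
    """Cubic uniform B-spline basis weights and first derivatives at t in [0,1]."""
    s = 1.0 - t
    w = (s*s*s / 6.0,
         (3*t**3 - 6*t**2 + 4) / 6.0,
         (-3*t**3 + 3*t**2 + 3*t + 1) / 6.0,
         t**3 / 6.0)
    dw = (-s*s / 2.0,
          (3*t*t - 4*t) / 2.0,
          (-3*t*t + 2*t + 1) / 2.0,
          t*t / 2.0)
    return w, dw

def eval_patch(cp, u, v):
    """cp: 4x4 grid of (x, y, z) control points. Returns P, dPdu, dPdv."""
    wu, du = bspline_weights(u)
    wv, dv = bspline_weights(v)
    P, Pu, Pv = [0.0]*3, [0.0]*3, [0.0]*3
    for i in range(4):
        for j in range(4):
            for k in range(3):
                c = cp[i][j][k]
                P[k]  += wu[i] * wv[j] * c   # tensor-product position
                Pu[k] += du[i] * wv[j] * c   # partial in u
                Pv[k] += wu[i] * dv[j] * c   # partial in v
    return P, Pu, Pv
```

Evaluating such patches over a grid of (u, v) samples yields both positions and analytic partials away from extraordinary features; only the irregular neighborhoods need an approximation.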
If you look at Hbr, you'll see that the chunk of code dealing with face-varying interpolation is quite large. This evaluation cost is mostly hidden in an off-line rendering app, but becomes a fairly tricky problem for something interactive.
Back to bilinear UVs: was the texel "swimming" something that you actually saw in your renders? If so: were these particularly high-valence vertices, or was it extreme deformation? I am trying to get a feel for how unstable bilinear UVs are in general and I don't have a lot of data samples to draw experience from... I am also curious about your ptex filtering issues: local wisdom seems to say that ptex filters usually outperform general texture filters...
It definitely seems possible to use HBR / FAR to generate bi-cubic spline patches and approximate patches around EVs (paying attention to the logic for the different UV modes) - this was one thing that was considered. We will likely hold off for a bit on jumping through this hoop since we already have a method in place (prman's RixSubdEval).
As for the bilinear UV texture swimming around EVs: I don't recall specific examples that we tested, but we took at face value jfong's post on the RenderMan forum noting that they 'quickly noticed' this scheme produced swimming, and thus pretty much skipped this mode from consideration - although, as noted, it would have simplified the code / math.
For PTEX: Here's some context - the application is baking out vector displacement. In addition to workflow considerations (which are significant in themselves), the problem is that PTEX will filter the samples into a single color first - which can include filtering across a discontinuity on the surface.
Here's the run-down: PTEX uses piecewise local bilinear UV coordinates. Across faces, these bilinear coordinates can be very discontinuous. So mathematically, what we'd want for filtering vector displacement is really of the form:
w = weight, VB = vector basis, C = color:
(w_1 * VB_1 * C_1) + (w_2 * VB_2 * C_2) + ...
Where filtering would occur within the separate domains: i.e., filter within each domain, apply the basis transform, and then combine the post-op results.
However, using a PTEX style approach, the following would result:
VB_0 * [ (w_1 * C_1) + (w_2 * C_2) ]
In this situation, we don't want texture evaluation to cross the piecewise (discontinuous) domains, because the vector bases can be very different. Consider the following adjacent faces:
In this example, imagine a simple vector displacement along the normal and to the right. Both faces have the same normal component, so we'll concentrate on the other component. The left face will have displacement along the tangent (u direction), while the right face will have displacement along the negative V direction. The colors are baked out as (tangent, normal, binormal) components... here, filtering across the faces solely via texture is undesirable.
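The discrepancy between the two filtering orders can be shown numerically. This is a minimal sketch, with made-up frames for the two faces (the left face's tangent along +x, the right face's frame rotated 90 degrees parametrically); it only illustrates the order-of-operations issue described above:

```python
# Sketch: the same world-space displacement stored in two different local
# (tangent, normal, binormal) frames. Blending the stored coordinates first
# and applying a single basis afterwards (PTEX-style) disagrees with
# transforming each sample to world space before blending.
# Frames and weights below are illustrative assumptions.

def apply_basis(frame, c):
    """World vector = c[0]*T + c[1]*N + c[2]*B for frame = (T, N, B)."""
    return tuple(sum(c[k] * frame[k][d] for k in range(3)) for d in range(3))

frame_L = ((1, 0, 0), (0, 1, 0), (0, 0, 1))    # left face: tangent along +x
frame_R = ((0, 0, 1), (0, 1, 0), (-1, 0, 0))   # right face: 90-degree parametric rotation

world = (1.0, 1.0, 0.0)        # displacement: along the normal and to the right
c_L = (1.0, 1.0, 0.0)          # world vector expressed in frame_L
c_R = (0.0, 1.0, -1.0)         # same world vector expressed in frame_R

w_L, w_R = 0.5, 0.5            # filter weights straddling the face boundary

# Correct: apply each basis, then blend -> (w_1 * VB_1 * C_1) + (w_2 * VB_2 * C_2)
correct = tuple(w_L * a + w_R * b
                for a, b in zip(apply_basis(frame_L, c_L),
                                apply_basis(frame_R, c_R)))

# PTEX-style: blend stored coordinates, then apply one basis -> VB_0 * (w_1*C_1 + w_2*C_2)
blended = tuple(w_L * a + w_R * b for a, b in zip(c_L, c_R))
naive = apply_basis(frame_L, blended)
```

With these frames, `correct` reproduces the world-space displacement exactly, while `naive` does not, even though both faces encode the identical displacement.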
One possibility would be to adapt a local version of PTEX to provide the separate filtered regions, perform the op, and then recombine the result... but there are also other workflow reasons that limit interest in this approach.
Thank you so much for the lengthy response: this is exactly the kind of information I was looking for.
Bilinear UVs: I think I'll have to poke Julian to hear the full story. I would have expected Mari users to be strongly affected by this... which doesn't seem to be the case, so I'll have to dig some more.
PTex: it's been a while since I have done any vector displacement, and it seems that I have been extremely lucky to have assets where the vectors must have been very close to the limit normals, or I would have seen lots of artefacts. I think there may be an easy fix to the problem you are describing though, and I'll be getting in touch with Brent to see what can be done.
I may not have a solution out of the box for you just yet, but your information is most helpful.
This should now be fully addressed in v3_2_0 including:
Hello,
I've recently picked up OpenSubdiv and am ramping up on the API/library. As an experiment, and to better familiarize myself with the code, I'm attempting to replace a back-end evaluation of a Catmull-Clark subd surface with OpenSubdiv in an existing app.
One item that became a bit of a speed bump is FaceVarying data, specifically ST / textures. Ideally this application requires:
1) FaceVarying STs, including evaluation via the prman facevaryinginterpolateboundary modes (we use mode 3 by default).
2) Partials of the STs; with respect to parametric UV is fine, since we can use dPduv partials to calculate dPdst.
3) Ideally, evaluating the limit surface. (Currently using: OsdCpuEvalLimitController::EvalLimitSample)
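For context, the dPduv-to-dPdst conversion mentioned in (2) is just a 2x2 chain-rule inversion. A minimal sketch, with illustrative numbers only:

```python
# Sketch: recovering dP/ds and dP/dt from dP/du, dP/dv by inverting the
# 2x2 Jacobian [[ds/du, ds/dv], [dt/du, dt/dv]]. Illustrative helper only.

def dP_dst(dPdu, dPdv, dsdu, dsdv, dtdu, dtdv):
    """Given dP/d(u,v) and the (s,t) partials w.r.t. (u,v), return (dP/ds, dP/dt)."""
    det = dsdu * dtdv - dsdv * dtdu
    if abs(det) < 1e-12:
        raise ValueError("degenerate (s,t) parameterization")
    # Inverse Jacobian entries: du/ds, dv/ds, du/dt, dv/dt.
    duds, dvds = dtdv / det, -dtdu / det
    dudt, dvdt = -dsdv / det, dsdu / det
    dPds = tuple(pu * duds + pv * dvds for pu, pv in zip(dPdu, dPdv))
    dPdt = tuple(pu * dudt + pv * dvdt for pu, pv in zip(dPdu, dPdv))
    return dPds, dPdt
```

For example, if s = 2u and t = 3v, a surface with dPdu = (2, 0, 0) and dPdv = (0, 3, 0) gives dPds = (1, 0, 0) and dPdt = (0, 1, 0). The inversion breaks down where the ST parameterization degenerates, which is exactly the neighborhood of EVs where discrete differentials also struggle.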
Is there a recommendation on how to proceed with evaluating face-varying STs?
It seems HBR and FAR support face-varying data; however, OSD evaluation is not as clear - in particular, it looks like the different prman fvar interpolation modes aren't currently supported (issue #176). Another way would be to pass the face id & parametric coordinates through OSD and use HBR/FAR to evaluate the STs on the other side.
Additionally - would there be a way to evaluate d[ST]/d[uv], similar to the call: m_osdEvalCtx->BindVertexBuffers(idesc, m_vertexData, odesc, m_Q, m_dQdu, m_dQdv)? Or would falling back to a discrete sample-based approach be required? Currently I'm using a discrete method - but ideally we'd like to move away from that given issues at Extraordinary Vertices.
Thanks, Bryan
P.S. Overall, very excited with what's in OpenSubdiv - Thanks for opensourcing it!