nvmkuruc opened 9 months ago
Hi @nvmkuruc and team. Thanks for the thought-provoking questions. I will try to answer each one under the matching bullet.
- Users may want to preview, inspect, and/or log the proposed fixes before applying them
Could something like this be achieved by applying one of the fixes on a temporary edit target, which can then be "previewed" and either "finalized" or "discarded"? If so, I believe something like this can be built as an add-on to what the fixer interface will provide in the framework, though there is some core UsdStage-level work needed to support the "temporary edit target as part of the stage's original layer stack" idea I am proposing!
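Something along these lines, as a rough sketch (the stage path, prim path, and the attribute edit below are placeholders standing in for whatever a real fixer would author):

```python
from pxr import Usd, Sdf, Gf

# Open the stage being validated ("shot.usda" is a placeholder).
stage = Usd.Stage.Open("shot.usda")

# Stage the fix in an anonymous layer sublayered into the session layer.
preview_layer = Sdf.Layer.CreateAnonymous("fix-preview")
stage.GetSessionLayer().subLayerPaths.insert(0, preview_layer.identifier)
stage.SetEditTarget(Usd.EditTarget(preview_layer))

# Apply the proposed fix; this edit is a stand-in for a fixer's opinions.
attr = stage.GetAttributeAtPath("/World/Looks/Mtl.inputs:albedo")
attr.Set(Gf.Vec3f(0.6, 0.6, 0.6))

# Preview: the composed stage now reflects the fix, and the proposed edits
# can be inspected or logged from the temporary layer.
print(preview_layer.ExportToString())

# Discard: drop the sublayer.  "Finalizing" would instead copy these
# opinions onto the intended edit target (e.g. via Sdf.CopySpec).
stage.GetSessionLayer().subLayerPaths.remove(preview_layer.identifier)
stage.SetEditTarget(stage.GetEditTargetForLocalLayer(stage.GetRootLayer()))
```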
- Users may want validators to be notice driven (only run material validation on the materials that have been edited)
Yup, this is totally achievable via the current ValidationContext interface and the ability to run a set of tests on a given set of prims. A client application, say usdview, could use the UsdNotice framework to track materials which have been updated and only run validators tied to the UsdShadeMaterial schema (a ValidationContext instance can be created which has all UsdShadeMaterial validators available).
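A rough sketch of the notice-driven half of that (the stage path is a placeholder, and the commented-out ValidationContext calls follow the proposal's naming rather than a final API):

```python
from pxr import Usd, UsdShade, Tf

stage = Usd.Stage.Open("shot.usda")   # placeholder stage

# Collect materials that have been edited, using the UsdNotice framework.
edited_materials = set()

def _on_objects_changed(notice, sender):
    changed = list(notice.GetResyncedPaths()) + list(notice.GetChangedInfoOnlyPaths())
    for path in changed:
        prim = sender.GetPrimAtPath(path.GetPrimPath())
        if prim and prim.IsA(UsdShade.Material):
            edited_materials.add(prim)

listener = Tf.Notice.Register(Usd.Notice.ObjectsChanged, _on_objects_changed, stage)

# ... user edits happen ...

# Then run only the UsdShadeMaterial validators, and only on the edited prims.
# Hypothetical, following the proposal's ValidationContext idea:
#
#   context = ValidationContext(schemaTypes=[UsdShade.Material])
#   errors = context.Validate(list(edited_materials))
```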
- Users may want to know if a fix may invalidate previous validations. Consider a validator that repairs broken material bindings. Expensive geometry validators could be asked if their results are affected by changes to material:binding properties and avoid rerunning after a fix was applied. The framework could reward schema and validator developers for separating concerns such that they can be independently validated.
What validators get affected by applying another validator's fix is interesting and can get convoluted very easily, I think. A fixer breaking another validator means either the fixer is incorrect or the validator is testing something incorrectly, at least for the core USD schemas! That being said, I think the proposed concept of an ErrorID can be used to track which validators have been failing between validation runs, and combining this with the USD notice infrastructure discussed above might help achieve this task. I am not sure, though, whether the framework should provide more concepts to handle something like this if it can already be achieved by combining core USD components.
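Purely as an illustration of that bookkeeping, and entirely hypothetical (the error-id/site cache below is not part of the proposed API), a client could cache results keyed by an error identifier and the sites it touched, then use the paths a fix changed to decide what to invalidate:

```python
from pxr import Sdf

# Hypothetical client-side cache; "error_id" and "sites" stand in for the
# proposal's ErrorID concept and the error's affected sites.
validation_cache = {}   # error_id -> list of Sdf.Path sites

def record_result(error_id, sites):
    validation_cache[error_id] = [Sdf.Path(s) for s in sites]

def invalidated_by_fix(changed_paths):
    """Return cached error ids whose sites overlap the paths a fix touched."""
    changed = [Sdf.Path(p) for p in changed_paths]
    stale = set()
    for error_id, sites in validation_cache.items():
        for site in sites:
            if any(site.HasPrefix(c) or c.HasPrefix(site) for c in changed):
                stale.add(error_id)
    return stale

# e.g. after a material-binding fix, only re-run the validators whose
# cached results show up in invalidated_by_fix(<paths from the notice>).
```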
- Users may need to validate time samples as a unit, not individual time samples. For example, a hitch in animation may be only observable by looking at three consecutive samples and the proposed fix may only make sense with that context and adjusting in concert.
Very valid point and thanks for the "hitch" example. Totally makes sense, I will see how I can update the APIs to take this into account.
- A validator might want to propose alternative sets of fixes (or sites) for a single error. Many times in OpenUSD, opinions aren't incorrect, they're just in conflict, and a user needs to pick from a menu of possible repairs. Even the same fix might have multiple viable sites (i.e. repair the source layer or override sparsely in the root layer).
I think the proposal already takes this into account by providing multiple named fixers per validator, which the client can use to check (via CanApplyFix) and apply a specific named fix. Note that edit targets can be set appropriately on the UsdStage before applying the fix. Another example of this is a fix for the model hierarchy validator, where depending on the context either kind="component" or kind="group" can fix the broken model hierarchy.
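For illustration, a client loop over such named fixers might look like the sketch below; the CanApplyFix/ApplyFix entry points and the fixer names are hypothetical here, not a shipped API:

```python
from pxr import Usd

# Hypothetical client-side selection among a validator's named fixers.
# `errors` is assumed to be the validator's reported errors, each exposing
# the proposal's CanApplyFix/ApplyFix entry points.
def apply_model_hierarchy_fix(stage, errors,
                              preferred=("MakeComponent", "MakeGroup")):
    # The client decides where the fix is authored before applying it.
    stage.SetEditTarget(Usd.EditTarget(stage.GetRootLayer()))
    for error in errors:
        for fix_name in preferred:       # e.g. kind="component" vs kind="group"
            if error.CanApplyFix(fix_name):
                error.ApplyFix(fix_name)
                break
```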
- There may be cases where fixes need to interact with a backing asset system and do more invasive surgery. As a specific example, converting “heavy” usda files to usdc.
So a site-specific validator checks the extension of a layer (or the stage's root layer) to see whether it is usda or usdc, and one of the fixers associated with the validator gives the client the option to convert the layer (or the stage's root layer) appropriately. At the same time, the fixer traverses the layer stack and updates any references (reference/sublayer/etc.) to the old "usda" file so they point at the new "usd(c)" file, with CanApply checking whether the client has permission to make such an update (write permissions, etc.). Am I understanding the question correctly? If so, shouldn't the Validator/Fixer described above work in this scenario?
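A minimal sketch of the conversion part of such a fixer (file names are placeholders; the permission checks and reference/payload arcs are omitted for brevity):

```python
import os
from pxr import Sdf

# Convert a heavy .usda layer to .usdc.
def convert_layer_to_usdc(usda_path):
    layer = Sdf.Layer.FindOrOpen(usda_path)
    usdc_path = os.path.splitext(usda_path)[0] + ".usdc"
    layer.Export(usdc_path)     # the .usdc extension selects the crate format
    return usdc_path

# Repoint sublayer arcs in a parent layer from the old path to the new one.
# Reference/payload arcs would need a similar pass over prim specs; omitted.
def repoint_sublayers(parent_layer_path, old_path, new_path):
    parent = Sdf.Layer.FindOrOpen(parent_layer_path)
    parent.subLayerPaths = [new_path if p == old_path else p
                            for p in parent.subLayerPaths]
    parent.Save()

new_path = convert_layer_to_usdc("big.usda")
repoint_sublayers("shot.usda", "big.usda", new_path)
```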
Overall, we'd suggest that validators which can communicate fixes declaratively, in terms of something akin to SdfChangeList, could address several of the above usages. Fixes might be categorizable in three ways. The purely declarative and declarative-site fixes would allow for varying levels of introspection and dependency analysis that a validation framework could leverage.
- Purely Declarative: The proposed sites and values for a fix can be communicated entirely declaratively. Users would have complete introspection into what a fix would do.
- Declarative Sites, Procedural Value: The proposed sites can be declaratively communicated, but introspection into the actual value change may be limited due to performance considerations of pre-computing and storing the set of fixes in memory.
- Purely Procedural: The changes may not be efficiently describable in terms of a set of fix sites.
I am not sure I understand these; could you please elaborate? From what I do understand, though: after discussing the fixer interface extensively internally, we came to the conclusion that a validator should not provide a "decisive" fix; it should only provide fixes as suggestions which the client needs to apply appropriately.
Thanks
By communicating fixes declaratively, we didn't mean that it would automatically or decisively fix it. We just meant some validators could return the suggested fix opinions directly.
For example, consider a validator that checks for out-of-range color values. As I understand the current proposal, the validator could return each site containing an out-of-range value as a `UsdValidationError`, and a fixer could then be applied which takes the `UsdValidationError` (which contains a single site).

It seems like the testing logic has to be implemented in two places: in the original validator and then in the fixer. The validator has to report that there's an error at `/site/abc.inputs:albedo`, and then the fixer has to rediscover what that error is in order to fix it. (There could even be some ambiguity here, with multiple errors at the same site.)
By a declarative fix, we're thinking that a validator could return explicitly: there's an error at `/site/abc.inputs:albedo`, and setting it to `(.6, .6, .6)` would fix it. The fixer in this case would just apply the change list and wouldn't have to be a user-supplied implementation. We're not sure that an `SdfChangeList` is exactly the right data structure to describe what's needed, but it seemed similar enough that we thought it worth suggesting as a starting point for discussion. We thought a list was appropriate because a fix might require edits to multiple sites and values.
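To make that concrete, here's a rough, hypothetical sketch of what a purely declarative fix could look like on the client side; none of the class or file names below come from the proposal's API:

```python
from pxr import Sdf, Gf

# Hypothetical "purely declarative" fix: the validator returns the proposed
# edits as plain data (site + value), and a generic fixer just applies them.
class ProposedEdit:
    def __init__(self, layer_identifier, attr_path, value):
        self.layer_identifier = layer_identifier
        self.attr_path = Sdf.Path(attr_path)
        self.value = value

def apply_edits(edits):
    for edit in edits:
        layer = Sdf.Layer.FindOrOpen(edit.layer_identifier)
        attr_spec = layer.GetAttributeAtPath(edit.attr_path)
        if attr_spec is not None:
            attr_spec.default = edit.value
        layer.Save()

# The out-of-range-color validator could return something like:
fix = [ProposedEdit("materials.usda", "/site/abc.inputs:albedo",
                    Gf.Vec3f(0.6, 0.6, 0.6))]
# A client can inspect or log `fix` before calling apply_edits(fix).
```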
When validators can describe fixes as a set of explicit suggestions, it becomes easier to figure out whether they potentially conflict or might affect each other's results, and to present the user with options for negotiating the conflicts. We don't think all validators make sense to structure that way, but we think enough do (perhaps even most) that it's worth considering.