teh-cmc opened this issue 1 year ago
The issue as written seems to vaguely touch on three different things:
A) There is sometimes a desire to somehow specify what the unit of measurement is per “space”. Is 1 a meter, a lightyear, or a pixel? This doesn’t affect anything except, perhaps, how we present distances in the GUI.
B) There is the problem of specifying scale in transforms. For instance: transforming a value in a depth-map tensor to a distance in the parent space.
C) It is sometimes useful to be able to say “this is the rough scale of things in this space”. Two users can both be using mm as their base unit, but one is 3D scanning whole cities, and the other is 3D scanning jewelry. This “general scale of things” can often be estimated from the actual logged data.
I had some thoughts on this back when we were introducing view-coordinates (probably in a Notion or Linear comment?). I'll have to see if I can find them and recopy them here for posterity.
In short, I think there is an opportunity to extend the view coordinate concept to include a notion of "unit". The idea being that any data logged to that space is assumed to have that unit. This is an important differentiation from trying to specify the unit on the data itself. Data is logged without units -- view coordinates tell you what that data means physically.
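To make that concrete, here's a minimal sketch of what a unit-annotated space could look like in the Python SDK. `SpaceUnit` is a made-up component name for illustration only (nothing like it exists today); `ViewCoordinates` and `Points3D` are the real archetypes:

```python
import rerun as rr

rr.init("rerun_example_units", spawn=True)

# Real API: pin down the axis convention of the "world" space.
rr.log("world", rr.ViewCoordinates.RIGHT_HAND_Z_UP, static=True)

# Hypothetical extension: a component declaring the unit of the space itself.
# `SpaceUnit` does NOT exist in the SDK -- purely illustrative.
rr.log("world", rr.SpaceUnit("m"), static=True)

# Data is logged as bare numbers; the space annotation says they mean meters.
rr.log("world/points", rr.Points3D([[0.0, 0.0, 1.5]]))
```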
Additionally, transforms should also be (at least optionally) unit-aware. When the unit analysis of your transforms doesn't match the specified units of your view coordinates, you should get a warning. However, the math that happens is just the math defined by the transform data. It's the responsibility of the user (or the helper function they use) to ensure that the math reflects the annotated input/output units.
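A toy model of that unit analysis, independent of any SDK types (all names here are illustrative): each transform is annotated with the unit it maps from and to, and a checker walks the chain and warns on mismatches without ever touching the math.

```python
from dataclasses import dataclass
import warnings

@dataclass
class AnnotatedTransform:
    scale: float    # the actual math: out = scale * in
    from_unit: str  # annotated input unit, e.g. "mm"
    to_unit: str    # annotated output unit, e.g. "m"

def check_chain(chain: list[AnnotatedTransform], space_unit: str) -> None:
    """Warn if the annotated units of a transform chain don't line up with
    the unit declared on the destination space. The math is never changed."""
    expected = space_unit
    for t in reversed(chain):  # walk from the destination space down to the data
        if t.to_unit != expected:
            warnings.warn(f"transform outputs '{t.to_unit}' but '{expected}' expected")
        expected = t.from_unit

# A mm -> m transform feeding a space annotated as meters: no warning.
check_chain([AnnotatedTransform(0.001, "mm", "m")], space_unit="m")
# The same transform feeding a space annotated as millimeters: warns.
check_chain([AnnotatedTransform(0.001, "mm", "m")], space_unit="mm")
```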
I'll also note that in my opinion, we should avoid doing any "magic" conversions based on these units. It might be tempting to allow users to specify a data unit and automatically apply scaling if the view coordinate unit doesn't match, and I think this will lead to no end of confusion and pain. If you want to convert units, use a transform.
In this way, when we create a view from a space, the units are defined by the view coordinates, and the transform system guarantees that all data transformed into that space will have the same units.
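For example, the explicit-transform path could look roughly like this in the Python SDK, using `Transform3D`'s `scale` argument (check your SDK version for the exact signature); the entity names here are illustrative:

```python
import rerun as rr

rr.init("rerun_example_unit_convert", spawn=True)

# The "cup" entity is authored in millimeters; its parent space is meters.
# The conversion is explicit data -- a plain scale transform logged by the
# user -- not magic inferred from unit annotations.
rr.log("world/cup", rr.Transform3D(scale=0.001))
rr.log("world/cup/points", rr.Points3D([[120.0, 80.0, 45.0]]))  # mm values
```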
Another thing: I want to be able to configure my camera speed as e.g. 10 m/s, which would just do the right thing based on how big I've defined a world unit to be in my current view.
Consider this: `MetricScale(1.0)` and `MetricScale(0.001)` as components on sponza and the cup of coffee, respectively:
- `1 world unit = MetricScale(1.0)` (i.e. 1 unit = 1m)
- `cam_speed = 10 m/s` (i.e. 10 world units per sec)
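Back-of-the-envelope, assuming a hypothetical `MetricScale` component whose value is "meters per world unit": the camera controller just divides the user-facing speed by the scale of whatever space it's in.

```python
# MetricScale tells us how many meters one world unit represents.
sponza_scale = 1.0    # MetricScale(1.0): 1 world unit = 1 m
coffee_scale = 0.001  # MetricScale(0.001): 1 world unit = 1 mm

cam_speed_m_per_s = 10.0  # the user-facing setting: 10 m/s

# Convert to world units per second, per view:
print(cam_speed_m_per_s / sponza_scale)  # 10.0 world units/s in sponza
print(cam_speed_m_per_s / coffee_scale)  # 10000.0 world units/s at the cup
```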
Had a long discussion with @Wumpf about all things size & scale, and one of the biggest issues right now is that for most things we render, we have no notion of their actual size in terms of the metric system, and even when we do, we have no way of specifying how a metric unit maps to a world-space unit in the scene (e.g. 1 world-space unit = 1mm).
Obviously this becomes a real issue as soon as there's more than one thing to render in a scene :shrug:.
Some (few?) of our primitives actually already offer a way to specify their metric scale: e.g. users can specify a metric scale when logging depth maps. Unfortunately, when projecting said depth maps into point clouds in a scene, there's no way of telling how that size relates to the rest of the scene.
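For reference, the depth-map case looks roughly like this in the current Python SDK, where `meter` says how many raw depth units make up one meter:

```python
import numpy as np
import rerun as rr

rr.init("rerun_example_depth", spawn=True)

# Synthetic uint16 depth image in tenths of a millimeter.
depth = (np.random.rand(480, 640) * 20_000).astype(np.uint16)

# meter=10_000: a raw value of 10_000 is one meter, so depths span 0..2 m.
rr.log("world/camera/depth", rr.DepthImage(depth, meter=10_000))
```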
We should have a way of specifying a metric scale for most things, and a way of specifying how metric units map to world-space units when setting up a view.
Somewhat related to #1219