theprojectsomething opened this issue 3 years ago
Thanks @theprojectsomething, that definitely seems like a valid extension of the exaggeration
functionality. I do remember a similar ask to expose the minimum and maximum elevation bounds of the current view in order to increase the elevation when these bounds are known to be too low.
Technically, it is a bit different from fill-extrusion. To allow `z` as an input value, we would probably have to be a bit stricter about what types of expressions are allowed when `z` is used as input.
For context, we currently apply the exaggeration in a shader program on the GPU after sampling the DEM. With the way our terrain rendering works, it's only at this point in the processing pipeline that a z value for a particular point is known and multiplied by the exaggeration.
When doing that globally, i.e. one exaggeration for all points, the expression can be evaluated on the CPU and passed as a uniform (a global value shared by all vertices). But if we move to that level of granularity and allow `z` as an input, the exaggeration expression would probably have to be evaluated in a shader. My initial thought for a path to implementation would be to create a 2d texture representing this expression as a gradient, and let the hardware interpolation do the rest of the work. A possible pipeline:
```glsl
// Bilinearly interpolate the four DEM texel corners to get the raw elevation.
float z = mix(mix(tl, tr, f.x), mix(bl, br, f.x), f.y);
// Look up the exaggeration factor for this elevation from the gradient texture.
float exaggerated_z = texture2D(u_exaggeration_gradient, toExaggerationUV(z)).a * z;
return exaggerated_z;
```
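For illustration, the CPU side of that could be a small lookup table baked from the expression. A minimal sketch, assuming an alpha-channel encoding and a hypothetical `evaluateExaggeration` evaluator (neither is an existing gl-js internal):

```js
// Sketch only: bake an elevation -> exaggeration expression into a 1D RGBA
// lookup table that the shader samples via toExaggerationUV(z).
function buildExaggerationGradient(evaluateExaggeration, minZ, maxZ, width = 256) {
  const data = new Uint8Array(width * 4);
  for (let i = 0; i < width; i++) {
    const z = minZ + (i / (width - 1)) * (maxZ - minZ);
    const factor = evaluateExaggeration(z);
    // Assume factors stay in [0, 4] and quantize into the alpha byte;
    // the shader would rescale after the texture2D lookup.
    data[i * 4 + 3] = Math.round(Math.min(Math.max(factor / 4, 0), 1) * 255);
  }
  return { data, width, height: 1 };
}
```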
Thanks for the consideration @karimnaaji, sounds like a solid solution!
For more complex expressions ... short of generating the shader on the fly, would you provide a second factor? So one data-driven (i.e. `z` input, using your reference texture) plus a uniform multiplier, evaluated separately on the CPU, to take in map state, e.g. zoom, bearing, pitch, etc.? To save regenerating reference textures. Excuse the pseudo-code ...
```glsl
// Pseudo-code: unit_state would be re-evaluated on the CPU from map state
// (zoom, pitch, ...) each frame and passed in as a uniform (defaulting to 1.0).
uniform float unit_state;

float z = mix(mix(tl, tr, f.x), mix(bl, br, f.x), f.y);
float exaggerated_z = texture2D(u_exaggeration_gradient, toExaggerationUV(z)).a * z * unit_state;
return exaggerated_z;
```
Yes! But we already provide `zoom` as an input for this sort of expression (refer to this example). `z` would be a new input and, like I mentioned, slightly more challenging as it can't be evaluated globally.

Another good fit for exaggeration is `pitch`, like you suggested. It's very similar to `zoom` in that it's evaluated once, and would work exactly like `zoom` internally. `bearing`, on the other hand, immediately feels less useful as an input there: that degree of freedom in the camera does not strongly call for any visual change in what you're looking at, compared to top-down vs. pitched views, where increasing the 3d aspect of the map only in pitched views is more natural.
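For reference, a zoom-driven exaggeration expression as it exists today might look like this (values illustrative):

```js
// Exaggeration evaluated once per frame on the CPU and passed as a uniform:
// flat terrain below zoom 10, ramping up to 1.5x by zoom 14.
map.setTerrain({
  source: 'mapbox-dem',
  exaggeration: ['interpolate', ['linear'], ['zoom'], 10, 0, 14, 1.5]
});
```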
Yes absolutely, bearing is unlikely to be useful! I think my point, made terribly, was just considering how you might evaluate more complex expressions ... given that texture lookup becomes somewhat more complex once there are more than a few dimensions 🙂
Given that zoom and pitch are all you need, a smallish (100x100) texture would easily suffice ... with a narrowish colour channel for elevation lookup. Which is exactly what you said in your first comment!
I'm concerned that `exaggeration` might be too baked into the 3d structure of the map to modify it nonlinearly. For example this line, which includes the exaggeration scale factor as part of raycasting the terrain:
Since we raycast terrain in 3d space and then work back to the original elevation, we also need an inverse mapping, which we could compute numerically but which we would not, in general, have for a given custom mapping.
I wonder if this is better thought of as a separate preprocessing step in which you could apply a custom mapping to elevation values during decoding. With a custom build (if this is viable for you, @theprojectsomething, don't hesitate to ask for clarification regarding the build process) it would certainly be possible to insert a custom mapping into the decoding step:
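As a rough sketch of that idea (the unpack formula below is the documented Terrain-RGB one; the function names and where such a hook would live in the source are assumptions):

```js
// Standard Terrain-RGB unpack: elevation in metres from an RGB-encoded pixel.
function unpackMapbox(r, g, b) {
  return (r * 256 * 256 + g * 256 + b) / 10 - 10000;
}

// Custom decoding step: remap the true elevation so the renderer sees
// pre-exaggerated values. `customMapping` is any curve you choose, e.g. one
// that stretches the 0-60m reef shelf and compresses the deep ocean.
function unpackCustom(r, g, b, customMapping) {
  return customMapping(unpackMapbox(r, g, b));
}
```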
Hi @rreusser, thanks for the insight. Yes, we can definitely try working with a custom build; we essentially got it working previously by doing this at the DEM encoding stage. Makes way more sense to do it in the decoding!
The main concern, I think, is that we only want to apply the exaggeration to the render. So not only would we want to supply standard-encoded DEMs that render with exaggeration; any elevation lookups (e.g. "what's the height at this point?") would also be expected to return an un-exaggerated response.
Appreciate the added complexity for raycasting. That said, if it's as simple as providing an alternative unpacking formula ... does the original comment from @karimnaaji not still apply (regarding a custom elevation / exaggeration formula, generated on the fly and rendered as a sprite for the shader)? Assuming you have access to x/y in the inverse scenario.
I would add to that argument that there's no time like the present! Honestly though, I think there could be a lot of use for this beyond the stated use case. Scale is a massive issue with planet-level elevation, esp. combined with man-made infrastructure (not to mention the massive incongruity between the horizontal and vertical resolution of much DEM data). Data-driven styling is also a USP for Mapbox. Seems like a feature match made in heaven 😅
On a slightly tangential note, having a third mapbox/terrarium/'custom' encoding option, where a curve or stops could be provided and converted to an unpacking formula, would be equally great. But this raises the question ... apart from it being too hard ... of why you wouldn't just implement this in the paint style.
@karimnaaji, do you have insight on whether adjusting elevations via a texture would be feasible?
My particular concern is that adjusting elevations in the vertex shader by reading tabulated exaggeration factors should be fairly straightforward, but it's not clear to me how we'd handle this on the CPU side. What would happen to the DEM quadtree and how would we handle raycasting given a nonlinear exaggeration factor?
> @karimnaaji, do you have insight on whether adjusting elevations via a texture would be feasible?
If you mean using the gradient texture idea that I mentioned earlier, it seems possible to replicate the exact vertex shader evaluation on the CPU as well, but it isn't trivial, especially concerning the point on the DEM tree.
We could leverage the fact that it is a minimum DEM tree, built in a first pass without exaggeration, with exaggeration only applied to it at sampling/raycasting time: we could build the tree using the lower bound of all the possible values of the exaggeration expression. For example, with the original example given above:
```js
exaggeration: [
  'interpolate',
  ['linear'],
  ['get', 'z'], // based on z value, same as accessing it in fill-extrusion
  0, 2.5,
  100, 1.5,
],
```
we would build the tree with a fixed exaggeration of 1.5 (`min(2.5, 1.5)`), then during raycast and sampling of the tree, we would evaluate `['get', 'z']` non-linearly.
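A minimal sketch of that lower-bound computation for a piecewise-linear `interpolate` expression (since outputs are clamped between the stop values, the minimum output is the minimum stop output; the helper itself is hypothetical):

```js
// For ['interpolate', ['linear'], ['get', 'z'], in0, out0, in1, out1, ...]
// every evaluated value lies between min(out_i) and max(out_i), so the
// minimum DEM tree can be built with min(out_i): 1.5 in the example above.
function minExaggeration(interpolateExpr) {
  const stops = interpolateExpr.slice(3);              // [in0, out0, in1, out1, ...]
  const outputs = stops.filter((_, i) => i % 2 === 1); // keep the out_i values
  return Math.min(...outputs);
}
```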
But there might be other reasons why we couldn't replicate all of what the shader does with this approach, so I feel like preprocessing, or modifying the DEM data in place after unpacking it as @rreusser proposed, would end up being simpler overall; +1 on that.
Really appreciate the thought going into this question. Thanks @karimnaaji and @rreusser. Regarding the pre/post-load processing of the DEM, the main issue this raises in my head is interoperability (e.g. with other DEMs / elevation data), in that we're moving from real to imaginary elevations, rather than relying on the concept of exaggeration. Maybe I'm missing something here.
That said, if there was a dual encoding, so the DEM was packed using the standard mapbox or terrarium formula, but then a post-processing(?) formula could be applied prior to any scene creation / rendering / raycasting ... and un-applied as part of any call to `queryTerrainElevation` ... that could be workable. It would also allow "zoom exaggeration" to remain as-is, which could come in handy. Apologies if this is exactly what you were getting at, @karimnaaji.
As an aside, do you know if zoom-driven exaggeration is a commonly used feature? Is there a reason it was tackled at a raw data level, rather than via camera perspective / FOV?
> ... and un-applied as part of any call to `queryTerrainElevation` ... that could be workable.
Yes, our API `queryTerrainElevation` works like this, with an option to query either the raw elevation or the elevation with the currently evaluated exaggeration factor applied.
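For example:

```js
// Elevation with the currently evaluated exaggeration factor applied (default):
const rendered = map.queryTerrainElevation(lngLat);

// Raw, un-exaggerated elevation:
const raw = map.queryTerrainElevation(lngLat, { exaggerated: false });
```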
> As an aside, do you know if zoom-driven exaggeration is a commonly used feature? Is there a reason it was tackled at a raw data level, rather than via camera perspective / FOV?
The main use case I can think of for that is when elevation is not necessary, or even noticeable, for a range of zoom levels. For example, below zoom 15 (or so) you may want to zero out terrain, and only add elevation when really close to the ground:
https://user-images.githubusercontent.com/7061573/129745616-e14ed064-e1ed-49fa-a698-12e6a490c4fc.mov
We don't provide access to FOV (yet), but this would produce a slightly different effect compared to exaggeration (it would give terrain a greater sense of natural scale when the FOV is a bit higher).
> Yes, our API `queryTerrainElevation` works like this, with an option to query either the raw elevation or the elevation with the currently evaluated exaggeration factor applied.
@karimnaaji apologies if I didn't explain myself properly. From what I understand, in the current situation any pre-processing of the DEM data would have the effect of baking in the exaggerated elevation as the actual elevation. So calls to `queryTerrainElevation` would return the output from the pre-processed data and not the true elevation.
Having another quick think about it, I'd probably see it working along the lines of:

- `['^', ['get', 'z'], 0.9]` to augment the standard DEM. Let's call this "data exaggeration".
- `queryTerrainElevation(lngLat, { exaggerated: false })` retrieves the rendered elevation, reversing any layer exaggeration. Unlike the renderer, the query function is aware of the data exaggeration and so also reverses this before returning the true elevation value.

On FOV, thanks, that definitely makes sense (very nice video!). I guess I was thinking that camera perspective controls might provide a potential workaround for scale problems (like the one I'm trying to solve). Esp. given the separation of concerns between raycasting / exaggeration.
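To make that round trip concrete, a sketch of the forward and inverse mappings, assuming an invertible power curve like the one above (sign/abs handling added here since bathymetric `z` is negative):

```js
// Hypothetical "data exaggeration" applied when decoding the DEM ...
const dataExaggerate = (z) => Math.sign(z) * Math.pow(Math.abs(z), 0.9);

// ... and its inverse, which queryTerrainElevation would apply on top of
// reversing any layer exaggeration, to return the true elevation.
const dataUnexaggerate = (z) => Math.sign(z) * Math.pow(Math.abs(z), 1 / 0.9);
```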
## Motivation
I would like to be able to customise exaggeration based on elevation ranges, similar to what is currently possible with fill-extrusion layers (esp. height/colour).
I work with reef bathymetry data (at 30-100m x/y resolution) that generally sees depth profiles between 0 and 60m. The reefs often sit on narrow shelves, bordered by mountains, that quickly drop off to kilometres of depth on the ocean side. Being able to emphasise detail on the reef shelf, while tapering the extremes on either side, is quite useful. Stepped/interpolated exaggeration based on the `z` value would allow this.

In the past I have encoded DEMs using a custom height profile. This was primarily to (selectively) squeeze as much detail as possible, in the right places, into an 8-bit PNG, which was all that was supported for upload into Studio at the time. Being able to decode the profile effectively on the front-end (using data-driven styles) was vital, but had the added effect of affording range-specific exaggeration.
Here's an example using a fill extrusion layer with data-driven styles for height (and colour): https://lab.citizensgbr.org/census-map/
## Design Alternatives
The same can be achieved with custom-encoded DEMs alone; however, the exaggeration approach is more flexible (no re-encoding of large tilesets to make adjustments) and ensures cross-compatibility (supplied elevation values are correct / per standards).
## Design / Mock-Up
Developer / client-side implementation would work the same as now, except elevation data would remain queryable:
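A minimal sketch of the proposed usage (the `['get', 'z']` input is the proposal here, not an existing API; values illustrative):

```js
map.setTerrain({
  source: 'mapbox-dem',
  // Proposed: exaggeration driven by the decoded elevation value.
  exaggeration: [
    'interpolate', ['linear'], ['get', 'z'],
    0, 2.5,   // emphasise the shallow reef shelf
    100, 1.5  // taper the extremes
  ]
});

// Queries would still return true, un-exaggerated elevations:
const elevation = map.queryTerrainElevation(lngLat, { exaggerated: false });
```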