mourner opened this issue 6 years ago
I like this idea!
We could also only limit this calculation to `moveend` rather than doing it on every frame, and animate the layer intensity from the old value to the new one after the user moves the map.
We could also just throttle it (once we know how much real time calculating the max takes) -- that would allow the intensity to update while a user is dragging/pausing/dragging/etc. for a long time
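A minimal sketch of that throttling idea, assuming a hypothetical `recalculateMaxDensity` routine that recomputes the on-screen maximum (the routine name and the 200 ms interval are made up for illustration):

```javascript
// Sketch: throttle the max-density recalculation so the intensity still
// updates during long drag interactions, without running every frame.
function throttle(fn, intervalMs) {
    let last = 0;
    return function (...args) {
        const now = Date.now();
        if (now - last >= intervalMs) {
            last = now;
            fn.apply(this, args);
        }
    };
}

// e.g. recompute at most every 200 ms while the user drags:
// map.on('move', throttle(recalculateMaxDensity, 200));
```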
Finally, by introducing these improvements, the heatmap layer implementation could get too complex to maintain productively. Is there any way we could reduce this complexity?
This doesn't sound to me like it would add an unmanageable amount of complexity -- we're mainly talking about an isolated routine that runs at the end of the 'offscreen' phase of `drawHeatmap`, right?
@anandthakker the complexity could also stem from the need for a complex workaround in the non-half-float case, although I don't yet have a good idea of how it would look.
Awesome proposal @mourner. This default functionality would benefit data visualization use cases: currently, the `heatmap-intensity` setting requires manual changes in order to show the extent of the `heatmap-color` range when updating a source with new data. Removing the need to manually update `heatmap-intensity` based on data density would be a better user (and developer) experience.
Interesting
The adaptive heatmaps concept is inspired by the current heatmaps feature in Snapchat, which adjusts to the screen as you pan — users love it and want an option for Mapbox GL heatmaps to work in a similar way.
Does it adjust during panning as well? Or just during zooming?
The most promising approach I found is described in this StackOverflow answer — basically, the idea is to progressively reduce the density texture with a "max of 4 pixels" shader in a mipmap fashion until we're left with a 1x1 texture with a maximum value.
Could a single pass that loops over all pixels be possible?
Finally, by introducing these improvements, the heatmap layer implementation could get too complex to maintain productively. Is there any way we could reduce this complexity?
Is there a way to get an estimated max density for a tile that could be calculated once on tile load? How exact does this scaling need to be?
The proposals look good, but what about handling this with an expression? `"heatmap-intensity": ["heatmap-max-density"]`, which would let you do something like `"heatmap-intensity": ["max", 8, ["heatmap-max-density"]]`.
Some other things that came to mind: would this result in sudden changes when a dense area moves offscreen? Could this behaviour be weird in really sparse areas?
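For illustration, the expression idea could look like this in a layer definition. Note that `heatmap-max-density` is a proposed expression that does not exist in the Mapbox style spec, and the layer and source ids are made up:

```javascript
// Hypothetical style usage of the proposed "heatmap-max-density" expression:
// never let the intensity drop below 8, otherwise adapt it to the maximum
// density currently on screen.
const heatmapLayer = {
    id: 'quakes-heat',   // hypothetical layer id
    type: 'heatmap',
    source: 'quakes',    // hypothetical source id
    paint: {
        'heatmap-intensity': ['max', 8, ['heatmap-max-density']]
    }
};
```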
Does it adjust during panning as well? Or just during zooming?
During panning too!
Could a single pass that loops over all pixels be possible?
Possible on the CPU after we do `readPixels()` of a rendered density texture. Without that, we would have to loop over all the pixels and, for each pixel, find the sum of densities contributed by all nearby points, which sounds very expensive, and complicated too, since you'd have to index the points.
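A minimal sketch of that CPU-side scan, assuming an RGBA/UNSIGNED_BYTE framebuffer with the density packed in the red channel (the packing is an assumption for illustration, not necessarily how gl-js stores it):

```javascript
// Find the maximum density on the CPU after reading the rendered density
// texture back. readPixels fills a Uint8Array of width * height * 4 values;
// with density in the red channel, scan every 4th byte.
function maxDensityFromPixels(pixels) {
    let max = 0;
    for (let i = 0; i < pixels.length; i += 4) {
        if (pixels[i] > max) max = pixels[i];
    }
    return max / 255; // normalize back to the 0..1 density range
}

// Usage (WebGL context assumed):
// const pixels = new Uint8Array(width * height * 4);
// gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// const max = maxDensityFromPixels(pixels);
```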
Is there a way to get an estimated max density for a tile that could be calculated once on tile load? How exact does this scaling need to be?
No need to be too exact, I guess, but I don't have a good approach for doing the per-tile calculation: 1) it might be expensive on the CPU; 2) we'd have to keep track of max values for each tile and use the largest one; 3) it could feel weird, since density would change depending on visible tiles, not visible data points.
what about handling this with an expression?
This might be a good idea, if it's not too hard to implement.
Could a single pass that loops over all pixels be possible?
Possible on the CPU after we do `readPixels()` ...
What about with a loop in a fragment shader?
@ansis hmm, I thought that was really slow since it would make the fragment shader unparallelizable — basically blocking on every pixel read until we're done with the whole texture. I'm not sure though — what do you think?
@mourner yep, I think you're right that it wouldn't be parallelized
I'll sketch the mipmap reduction approach to see how fast it runs; it's probably fast enough.
What I'm still worried about is dealing with the `UNSIGNED_BYTE` ("fallback") heatmap rendering mode. The only idea I have so far is to repeatedly halve the intensity until we're no longer capped: redraw the texture with half the intensity if the max density is 1.0, get the max again, and repeat until it falls below 1.0. Not sure how well it would work in practice, but it's probably enough for a fallback.
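That halving loop could be sketched as follows. The `renderAndGetMax` callback is hypothetical; it stands in for redrawing the heatmap texture at the given intensity and returning the measured maximum density:

```javascript
// Sketch of the UNSIGNED_BYTE fallback idea: repeatedly halve the intensity
// and re-render until the measured maximum falls below 1.0, i.e. the byte
// texture is no longer saturated. maxIterations bounds the loop in case the
// data never un-saturates.
function findUncappedIntensity(renderAndGetMax, initialIntensity, maxIterations = 16) {
    let intensity = initialIntensity;
    for (let i = 0; i < maxIterations; i++) {
        if (renderAndGetMax(intensity) < 1.0) break;
        intensity /= 2;
    }
    return intensity;
}
```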
I came to this issue from #6463 and am very interested to know whether there has been any progress on this.
Does anyone have a workaround to get close to this functionality?
Motivation
Currently, one of the most difficult things you face when working with GL heatmaps is adjusting `heatmap-intensity` so that it looks good at any zoom level — this heavily depends on the data. Also, it's currently impossible to adjust the intensity based on what data gets into the viewport — after the user pans away from the most intense spots, the coloring won't be as useful for judging data patterns.
Design Alternatives
Capturing a few user suggestions, let's explore how we could implement screen-adaptive intensity — enabling a heatmap that automatically adjusts to the data you see on the screen. This would not only eliminate the need to hand-pick the intensity expression, a common pain point, but also enable optimal rendering regardless of where the user pans.
It's not an essential feature, so we could do without it, but implementing it would make heatmaps more flexible and more enjoyable to use in many use cases.
Design
The feature should be an option, maybe as an additional property, or a special string/number value for `heatmap-intensity`. After you enable it, you no longer have to worry about `heatmap-intensity` — it will automatically adjust to the maximum density point on the screen so that all densities fall exactly in the 0..1 range.
If it works well, it may be worth having it on by default.
Mock-Up
Concepts
The adaptive heatmaps concept is inspired by the current heatmaps feature in Snapchat, which adjusts to the screen as you pan — users love it and want an option for Mapbox GL heatmaps to work in a similar way.
The feature will make it easier for users to adopt heatmaps, because they will have one less configuration option to worry about to get a good-looking map.
Implementation
One technical challenge is finding a fast way to determine the maximum density value in a heatmap texture so that we can readjust densities. Doing `readPixels` to find the maximum on the CPU side is too expensive, and so is calculating it on the CPU from the data points, since we'd have to do it for each pixel on the screen.
The most promising approach I found is described in this StackOverflow answer — basically, the idea is to progressively reduce the density texture with a "max of 4 pixels" shader in a mipmap fashion until we're left with a 1x1 texture holding the maximum value. This requires `log(width)` passes of rendering progressively smaller textures, which should be relatively fast. We could also limit this calculation to `moveend` rather than doing it on every frame, and animate the layer intensity from the old value to the new one after the user moves the map.
Another tricky challenge I don't yet see a solution to: this approach will break on the `UNSIGNED_BYTE` "fallback" version of the heatmap — since densities accumulate in 0–255 range bytes, they'll be hard-capped, so getting the max value won't be useful for readjustment. Is there any way around this?
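For reference, the mipmap-style max reduction can be simulated on the CPU. A minimal sketch, assuming a square power-of-two density texture stored as a `Float32Array` (on the GPU, each pass would instead be a fragment shader rendering into a half-size framebuffer):

```javascript
// CPU simulation of the "max of 4 pixels" mipmap reduction: each pass halves
// the texture size, keeping the max of every 2x2 block, so after log2(width)
// passes a single value (the global maximum) remains.
function reduceMax(data, width) {
    let src = data, w = width;
    while (w > 1) {
        const half = w / 2;
        const dst = new Float32Array(half * half);
        for (let y = 0; y < half; y++) {
            for (let x = 0; x < half; x++) {
                const i = 2 * y * w + 2 * x; // top-left of the 2x2 block
                dst[y * half + x] = Math.max(src[i], src[i + 1], src[i + w], src[i + w + 1]);
            }
        }
        src = dst;
        w = half;
    }
    return src[0]; // the 1x1 "texture" holds the global maximum
}
```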
Finally, by introducing these improvements, the heatmap layer implementation could get too complex to maintain productively. Is there any way we could reduce this complexity?
cc @anandthakker @ryanbaumann