Currently, we calculate a single bbox covering all of the input polygons. If the polygons happen to sit on opposite sides of the anti-meridian, that bbox spans nearly the whole world and we end up sampling pixel values across all of it.
This becomes a problem when the raster is high resolution: all of the values may not fit into memory, or decoding all of the pixels may take a long time.
But we can't simply treat each polygon separately, because we need to account for any overlap between polygons.
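For illustration, here is a minimal sketch of the current behavior (assuming rings are plain `[lon, lat]` arrays; `combinedBbox` is a hypothetical name, not the actual function in the codebase):

```ts
type Point = [number, number]; // [lon, lat]

// Hypothetical sketch: one bbox over all input rings.
function combinedBbox(rings: Point[][]): [number, number, number, number] {
  let [minX, minY, maxX, maxY] = [Infinity, Infinity, -Infinity, -Infinity];
  for (const ring of rings) {
    for (const [x, y] of ring) {
      minX = Math.min(minX, x);
      minY = Math.min(minY, y);
      maxX = Math.max(maxX, x);
      maxY = Math.max(maxY, y);
    }
  }
  return [minX, minY, maxX, maxY];
}

// Two small polygons near -179° and +179° longitude:
const west: Point[] = [[-179.9, 10], [-179.5, 10], [-179.5, 11], [-179.9, 11]];
const east: Point[] = [[179.5, 10], [179.9, 10], [179.9, 11], [179.5, 11]];
console.log(combinedBbox([west, east]));
// ≈ [-179.9, 10, 179.9, 11] — roughly 359.8° of longitude,
// so we would decode pixels for nearly the whole world in that latitude band.
```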
Some options:
1. Merge the polygons, grouping those that overlap; then run stats on each polygon by collecting its intersecting pixel values and running calc-stats on them (see the grouping sketch after this list).
2. Calculate a bounding box for every polygon and merge/reduce those bounding boxes. If the pixels covered by the merged sub-bboxes amount to less than 25% of the pixels in the total bbox, treat each sub-bbox separately: sample pixel values for each distinct bbox, still run the intersections on the whole set, and when pulling intersected values, apply the offset of each bbox's source window. This adds another step to pulling values from intersections, but it is a lot more memory-safe (see the bbox merge/reduce sketch after this list).
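A rough sketch of the grouping step in option 1, using bbox overlap as a cheap stand-in for true geometric overlap (a real implementation could refine the groups with an actual polygon-intersection test); all names here are hypothetical:

```ts
type Bbox = [number, number, number, number]; // [minX, minY, maxX, maxY]

function bboxesOverlap(a: Bbox, b: Bbox): boolean {
  return a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3];
}

// Union-find to group polygons whose bboxes touch, directly or transitively.
function groupByOverlap(bboxes: Bbox[]): number[][] {
  const parent = bboxes.map((_, i) => i);
  const find = (i: number): number =>
    parent[i] === i ? i : (parent[i] = find(parent[i]));
  const union = (i: number, j: number) => { parent[find(i)] = find(j); };

  for (let i = 0; i < bboxes.length; i++) {
    for (let j = i + 1; j < bboxes.length; j++) {
      if (bboxesOverlap(bboxes[i], bboxes[j])) union(i, j);
    }
  }

  const groups = new Map<number, number[]>();
  bboxes.forEach((_, i) => {
    const root = find(i);
    if (!groups.has(root)) groups.set(root, []);
    groups.get(root)!.push(i);
  });
  return [...groups.values()];
}

// Each group can then be sampled and fed to calc-stats independently,
// since polygons in different groups share no pixels.
```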
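And a sketch of the bbox merge/reduce idea in option 2, again with hypothetical names. The 25% threshold and the sub-bbox vs. total-bbox pixel comparison come from the note above; the pixel estimate here assumes a geographic raster with a uniform resolution in degrees, which is a simplification:

```ts
type Bbox = [number, number, number, number]; // [minX, minY, maxX, maxY]

// Merge any bboxes that overlap into larger ones until no pair overlaps.
function reduceBboxes(bboxes: Bbox[]): Bbox[] {
  const merged = [...bboxes];
  let changed = true;
  while (changed) {
    changed = false;
    outer: for (let i = 0; i < merged.length; i++) {
      for (let j = i + 1; j < merged.length; j++) {
        const [a, b] = [merged[i], merged[j]];
        if (a[0] <= b[2] && b[0] <= a[2] && a[1] <= b[3] && b[1] <= a[3]) {
          merged[i] = [
            Math.min(a[0], b[0]), Math.min(a[1], b[1]),
            Math.max(a[2], b[2]), Math.max(a[3], b[3]),
          ];
          merged.splice(j, 1);
          changed = true;
          break outer;
        }
      }
    }
  }
  return merged;
}

// If the merged sub-bboxes cover < 25% of the pixels of the single total
// bbox, sample each sub-bbox separately (tracking its own pixel offset when
// pulling intersected values); otherwise keep sampling the one total bbox.
function shouldSplit(subBoxes: Bbox[], total: Bbox, pixelsPerDegree: number): boolean {
  const pixels = (b: Bbox) => (b[2] - b[0]) * (b[3] - b[1]) * pixelsPerDegree ** 2;
  const subTotal = subBoxes.reduce((sum, b) => sum + pixels(b), 0);
  return subTotal / pixels(total) < 0.25;
}
```

In the anti-meridian case above, the two sub-bboxes cover a tiny fraction of the total bbox, so the split path would kick in and we would only decode two small windows instead of the whole world.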