This approach clusters the camera locations to partition the mesh, then runs aggregation on each sub-region, which mitigates the memory and runtime issues of running aggregation over the whole mesh at once. The per-region predictions are then combined into a texture for the full mesh. If the buffer around each sub-region is set sufficiently large that every camera's predictions intersect the cropped mesh wherever they would have intersected the full mesh, the results are identical to whole-mesh aggregation.
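The partitioning step above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: it assumes NumPy, uses a plain k-means over camera positions, and models each sub-region as a sphere of radius plus buffer around a cluster center. The function names (`cluster_cameras`, `crop_vertices`) are hypothetical.

```python
import numpy as np

def cluster_cameras(cam_positions, k, iters=20, seed=0):
    """Simple k-means over camera positions (illustrative stand-in for the
    clustering step; any clustering of camera locations would do)."""
    rng = np.random.default_rng(seed)
    centers = cam_positions[rng.choice(len(cam_positions), k, replace=False)]
    for _ in range(iters):
        # Assign each camera to its nearest cluster center.
        dists = np.linalg.norm(cam_positions[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers as the mean of their assigned cameras.
        for j in range(k):
            members = cam_positions[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def crop_vertices(vertices, center, radius, buffer):
    """Select the mesh vertices belonging to one sub-region: everything within
    the cluster radius plus the buffer. A large enough buffer guarantees a
    camera's predictions never miss geometry they would have hit on the
    full mesh, so combining per-region results matches whole-mesh output."""
    dists = np.linalg.norm(vertices - center, axis=1)
    return np.where(dists <= radius + buffer)[0]
```

Aggregation would then run on each cropped vertex set independently, and the resulting per-region textures would be merged back onto the full mesh.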
This is functional but would benefit from more thorough testing.