Currently, in very large scenes, a lot of compute is wasted on correspondence checks between camera pixels and mesh faces that don't actually coincide. To reduce this, we can chunk a scene into smaller regions, compute correspondences per region (which avoids most of the extraneous work), and then merge the results.
This is already implemented for aggregation (#52), but as a separate method. It could be integrated into a single method behind a user-specified option. The same approach also needs to be implemented for rendering. In a future refactor where a centralized pix2face method exists, this could all be implemented there.
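For illustration, the chunk-then-merge idea might be sketched roughly as below. This is a hypothetical 1-D toy, not the repo's actual API: `pix2face_chunked`, the dict-based pixel/face representations, and the interval test all stand in for the real spatial partitioning and ray-mesh correspondence.

```python
# Hypothetical 1-D sketch of chunk-then-merge; real scenes would
# partition in 2-D/3-D and use the actual pix2face machinery.

def chunk_bounds(xmin, xmax, n_chunks):
    """Split [xmin, xmax) into n_chunks equal intervals."""
    step = (xmax - xmin) / n_chunks
    return [(xmin + i * step, xmin + (i + 1) * step) for i in range(n_chunks)]

def pix2face_chunked(pixels, faces, n_chunks=4):
    """pixels: {pixel_id: x}; faces: {face_id: (x_lo, x_hi)}.
    Only test pixels against faces overlapping the same spatial chunk,
    instead of the full pixel x face cross product; merge per-chunk results.
    """
    xs = list(pixels.values())
    xs += [lo for lo, _ in faces.values()] + [hi for _, hi in faces.values()]
    mapping = {}
    for lo, hi in chunk_bounds(min(xs), max(xs) + 1e-9, n_chunks):
        chunk_pix = {p: x for p, x in pixels.items() if lo <= x < hi}
        # Keep only faces whose extent overlaps this chunk.
        chunk_faces = {f: b for f, b in faces.items() if b[0] < hi and b[1] >= lo}
        for p, x in chunk_pix.items():
            for f, (flo, fhi) in chunk_faces.items():
                if flo <= x <= fhi:
                    mapping[p] = f  # merge step: accumulate into one mapping
                    break
    return mapping

# Toy usage: two pixels, two well-separated faces.
print(pix2face_chunked({0: 0.5, 1: 5.5}, {"a": (0.0, 1.0), "b": (5.0, 6.0)}))
```

The key saving is that a pixel in one chunk is never tested against faces confined to a distant chunk, which is where the extraneous compute in large scenes comes from.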