KeKsBoTer / web-splat

3D Gaussian Splatting Renderer implemented in WebGPU (WGPU) and Rust
https://web-splat.niedermayr.dev
116 stars 10 forks

How to solve the problem of having 20M + points #8

Open meidachen opened 5 months ago

meidachen commented 5 months ago

Thank you for the great work. I'm trying your work on compressing my large-scale scene, and everything works great until I try to visualize it with the viewer. It seems there are too many points for the current implementation to render; specifically, I get this error:

```
Caused by: In a ComputePass
note: encoder = render command encoder
In a dispatch command, indirect:false
note: compute pipeline = preprocess pipeline
Each current dispatch group size dimension ([153754, 1, 1]) must be less or equal to 65535
```

This originates from render.rs line 409:

```rust
let wgs_x = (pc.num_points() as f32 / 256.0).ceil() as u32;
pass.dispatch_workgroups(wgs_x, 1, 1);
```
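To make the limit concrete, here is a small standalone sketch of the same arithmetic (my own illustration, not code from the repo; the 256 matches the workgroup size used in render.rs, and 65535 is wgpu's default `max_compute_workgroups_per_dimension`):

```rust
// Sketch of the arithmetic behind the dispatch error (illustration only).
const WORKGROUP_SIZE: u32 = 256;
const MAX_WGS_PER_DIM: u32 = 65_535; // wgpu's default max_compute_workgroups_per_dimension

fn workgroups_for(num_points: u32) -> u32 {
    // Same computation as render.rs line 409: ceil(num_points / 256)
    (num_points + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE
}

fn main() {
    // Largest point count a single 1D dispatch can cover under the default limit:
    let max_points = MAX_WGS_PER_DIM * WORKGROUP_SIZE;
    println!("max points per 1D dispatch: {max_points}"); // 16_776_960

    // The error above reports 153_754 workgroups, i.e. roughly 39.4M points:
    let wgs = workgroups_for(39_360_900);
    println!("workgroups needed: {wgs}");
    assert!(wgs > MAX_WGS_PER_DIM); // exceeds the per-dimension limit -> validation error
}
```

So any scene above roughly 16.8M points trips the default limit with a single 1D dispatch.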

Is there any work around for this to handle more points in the viewer?

Thank you in advance for the help!

KeKsBoTer commented 5 months ago

Hello,

thanks for your interest! I hope this fixes your issue:

You have to increase the limit for max_compute_workgroups_per_dimension. How high you can set it depends on the limit of your GPU driver.

For the renderer you can edit the limits here: https://github.com/KeKsBoTer/web-splat/blob/5dffdc8b259c8ecda791ed4c3aa12a154b52dbea/src/lib.rs#L100
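A hedged sketch of what raising that limit can look like when requesting the device (not the exact web-splat code; field names follow recent wgpu versions, where the field is `required_limits` — older versions call it `limits`):

```rust
// Sketch (illustration only): request a device whose compute-dispatch limit
// matches whatever the adapter actually supports, instead of wgpu's default 65_535.
let adapter_limits = adapter.limits();
let descriptor = wgpu::DeviceDescriptor {
    label: None,
    required_features: wgpu::Features::empty(),
    required_limits: wgpu::Limits {
        max_compute_workgroups_per_dimension:
            adapter_limits.max_compute_workgroups_per_dimension,
        ..wgpu::Limits::default()
    },
    ..Default::default()
};
```

Note that requesting a limit higher than the adapter reports will make device creation fail, which is why the sketch copies the adapter's own value rather than hard-coding one.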

If this does not solve the problem, one would need to invoke the shader multiple times (which would require some rework of the Rust and shader code).
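A sketch of what that multi-dispatch rework might look like on the host side (my illustration, not web-splat code; it assumes a hypothetical per-dispatch base offset passed to the shader via a uniform):

```rust
// Hypothetical sketch: cover num_points with several dispatches, each at most
// max_compute_workgroups_per_dimension workgroups. The shader would need a
// uniform (or push constant) holding a base point offset so each dispatch
// processes its own slice of the point buffer.
const WORKGROUP_SIZE: u32 = 256;
const MAX_WGS_PER_DIM: u32 = 65_535;

/// Returns (base_point_offset, workgroup_count) for each required dispatch.
fn chunked_dispatches(num_points: u32) -> Vec<(u32, u32)> {
    let total_wgs = (num_points + WORKGROUP_SIZE - 1) / WORKGROUP_SIZE;
    (0..total_wgs)
        .step_by(MAX_WGS_PER_DIM as usize)
        .map(|wg_start| {
            let wgs = (total_wgs - wg_start).min(MAX_WGS_PER_DIM);
            (wg_start * WORKGROUP_SIZE, wgs)
        })
        .collect()
}

fn main() {
    // ~39.4M points -> 153_754 workgroups -> three dispatches.
    for (offset, wgs) in chunked_dispatches(39_360_900) {
        // In real code: write `offset` into the uniform buffer, then
        // pass.dispatch_workgroups(wgs, 1, 1);
        println!("offset {offset}: dispatch_workgroups({wgs}, 1, 1)");
    }
}
```

An alternative with the same effect is a 2D dispatch (`dispatch_workgroups(x, y, 1)`) where the shader flattens the 2D workgroup id back into a linear point index, which avoids multiple uniform updates.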

meidachen commented 5 months ago

@KeKsBoTer, thanks for your response. It seems that max_compute_workgroups_per_dimension is already at its maximum by default. What would you suggest looking into? Does this mean the data itself also needs to be chunked, so that the shader can process each chunk separately and the results are merged afterwards?