graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

High-resolution image training by dividing images into patches. #834

Open · ZZy129326999 opened 1 month ago

ZZy129326999 commented 1 month ago

Excuse me. When I train on a high-resolution image split into several blocks and use the corresponding cx and cy patch offsets, the rendered result looks correct, but why does the number of densified points become much larger than when training on the original-resolution image?
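To be clear about what I mean by the patch offsets: the crop only shifts the principal point, the focal lengths stay the same. A rough sketch (the function and variable names are just for illustration, not code from this repo):

```python
def crop_intrinsics(fx, fy, cx, cy, x0, y0):
    """Intrinsics of a patch cropped from the full image.

    (x0, y0) is the top-left pixel of the crop in the full image.
    Focal lengths are unchanged; only the principal point shifts.
    """
    return fx, fy, cx - x0, cy - y0
```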

hot-dog commented 1 month ago

Hi ZZy129326999, I am also trying to train Gaussian Splatting with cropped images but cannot get good results. May I ask how you train with divided image patches?

ZZy129326999 commented 1 month ago

> Hi ZZy129326999, I am also trying to train Gaussian Splatting with cropped images but cannot get good results. May I ask how you train with divided image patches?

The projection matrix may not be correct.

hot-dog commented 1 month ago

> > Hi ZZy129326999, I am also trying to train Gaussian Splatting with cropped images but cannot get good results. May I ask how you train with divided image patches?
>
> The projection matrix may not be correct.

Yeah, I also think the projection matrix causes the problem, but I don't know how to construct the correct one. Could you show me your code for constructing the projection matrix? Thank you :)

jaco001 commented 1 month ago

High density is a good thing. The overall density depends on the size of the scene and the amount of detail. If you are photographing a single element, a long perspective reduces the detail (the algorithm averages density). An array of patches taken from a single photo can still generate more points, because the algorithm uses all the patches at the same time and 'looks' for common points (there is noise, perspective imperfections, rounding, etc.). More training steps also generate more points.

jaco001 commented 1 month ago

> Hi ZZy129326999, I am also trying to train Gaussian Splatting with cropped images but cannot get good results. May I ask how you train with divided image patches?

Maybe masks would be a better solution than cropping?
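Something along these lines (only a sketch, not tested against this repo's train.py; `mask` is assumed to be a binary tensor aligned with the ground-truth image):

```python
import torch

def masked_l1_loss(render, gt, mask):
    """L1 loss restricted to masked pixels.

    render, gt: (3, H, W) tensors; mask: (1, H, W) tensor with 1 inside
    the region of interest and 0 elsewhere.
    """
    diff = torch.abs(render - gt) * mask
    # Normalize by the number of masked elements instead of the full image.
    return diff.sum() / (mask.sum() * render.shape[0] + 1e-8)
```

That way the camera geometry stays untouched and only the supervision is restricted.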

hot-dog commented 1 month ago

> > Hi ZZy129326999, I am also trying to train Gaussian Splatting with cropped images but cannot get good results. May I ask how you train with divided image patches?
>
> Maybe masks would be a better solution than cropping?

Thanks for your advice, I have applied a mask in my training. I construct my training data as follows: given a large scene (200 m x 200 m), take thousands of images with a UAV (the GSD is about 5 mm), align all images with COLMAP to get SfM points, then split the SfM points into small patches (e.g. 7 m x 7 m). For a given SfM point patch, I construct a bounding box from its xyz range, apply ray casting with each camera to get the foreground mask, and filter out cameras that do not cover the specified area; finally I crop the foreground with a rectangular box and modify cx, cy accordingly. I construct the projection matrix according to https://github.com/graphdeco-inria/gaussian-splatting/issues/399#issuecomment-1862107712. The result is better, but there is another problem, as the following image shows: there are lots of Gaussians outside the scene. Why?

[image]

If I don't crop the images, I get quite good results.

[image]
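For reference, what I build is roughly the asymmetric-frustum version of `getProjectionMatrix` from `utils/graphics_utils.py`. This is only a sketch, assuming pixel-space intrinsics fx, fy, cx, cy of the cropped patch; the vertical sign convention may need flipping depending on your setup:

```python
import torch

def projection_from_intrinsics(fx, fy, cx, cy, width, height, znear, zfar):
    """Off-center (asymmetric) perspective projection for a cropped patch.

    fx, fy, cx, cy are expressed in the patch's own pixel coordinates, so
    cx, cy are generally not at the patch center. Same matrix layout as the
    symmetric one built by getProjectionMatrix in utils/graphics_utils.py.
    """
    # Frustum bounds on the near plane, shifted by the principal point.
    left = -cx / fx * znear
    right = (width - cx) / fx * znear
    bottom = -(height - cy) / fy * znear
    top = cy / fy * znear

    P = torch.zeros(4, 4)
    P[0, 0] = 2.0 * znear / (right - left)
    P[1, 1] = 2.0 * znear / (top - bottom)
    P[0, 2] = (right + left) / (right - left)
    P[1, 2] = (top + bottom) / (top - bottom)
    P[2, 2] = zfar / (zfar - znear)
    P[2, 3] = -(zfar * znear) / (zfar - znear)
    P[3, 2] = 1.0
    return P
```

With a centered principal point this reduces to the repo's original symmetric matrix, so it is easy to sanity-check against an uncropped camera.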

jaco001 commented 1 month ago

Even if you think the camera alignment is perfect, this is not a trivial task, so you often end up with some cameras in different places.

[image]

Just one camera can generate splats on the other side. Cropping is OK in this case, but you can also lose some data. Try this approach: https://github.com/kangpeilun/VastGaussian

TarzanZhao commented 4 days ago

If you have multiple GPUs, Grendel-GS can support high-resolution reconstruction by dividing images into patches: https://github.com/nyu-systems/Grendel-GS