yzslab / gaussian-splatting-lightning

A 3D Gaussian Splatting framework with various derived algorithms and an interactive web viewer

issue in training close range images shots #5

Closed antoinebio closed 11 months ago

antoinebio commented 1 year ago

Hi, I have trained many UAV datasets where the target objects are around 20 meters away or farther; gaussian-splatting-lightning produces awesome results, very impressive.

But I face another issue when I train on ground-based imagery. It's not from a 360° action camera; the camera setup is as follows:

2 cameras mounted 4 meters above the ground, tilted 45° from vertical (nadir): one camera facing forward along the pedestrian walkway, the other facing backward. Lots of pictures were taken, all aimed at the ground, with very good overlap between pictures, and the bundle block adjustment (what photogrammetry does, also called aerotriangulation) is as accurate as for my UAV datasets.

We usually work with SfM software such as Metashape, and I am comparing that software's output against the gaussian splatting 3D render.

I tried a first training with gaussian-splatting-lightning to the end, i.e. 30,000 iterations, but when I start the training I get the error below:

(screenshot)

Does it come from the dataset and cameras that are too close together? (2 camera poses per meter; 1 forward / 1 backward)

Just to let you know, training on that same dataset worked with https://github.com/jonstephens85/gaussian-splatting-Windows. Here is what that gaussian splatting gives me:

(screenshot)

but the render is pretty bad, not what I got with the UAV dataset.

On my render I've got too many noisy splats

(screenshots)

and the ellipsoid display:

(screenshot)

and lastly, the initial point cloud layout:

(screenshot)

antoinebio commented 12 months ago

Is it possible to limit the number of training images taken from my source folder (picked at random) and run more epochs instead?

Following the principle discussed here:

https://stackoverflow.com/questions/4752626/epoch-vs-iteration-when-training-neural-networks

or

https://stats.stackexchange.com/questions/164876/what-is-the-trade-off-between-batch-size-and-number-of-iterations-to-train-a-neu

Or maybe this is not the way gaussian splatting works best?

yzslab commented 12 months ago

> Is it possible to limit the number of training images taken from my source folder (picked at random) and run more epochs instead?

You can use an image list. Generate one with https://github.com/yzslab/gaussian-splatting-lightning/blob/main/generate_image_list.py, then pass it to training via --data.params.colmap.image_list ...

yzslab commented 12 months ago

A txt file containing the filenames of the images you want to use.
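For illustration, a minimal sketch of how such an image-list file could be built (the function name and sampling strategy here are my own, not the repo's generate_image_list.py): pick N filenames at random from the images folder and write them one per line.

```python
import os
import random

def write_image_list(images_dir, out_path, n, seed=0):
    """Write a txt file listing n randomly chosen image filenames, one per line."""
    names = sorted(f for f in os.listdir(images_dir)
                   if f.lower().endswith((".jpg", ".jpeg", ".png")))
    picked = random.Random(seed).sample(names, min(n, len(names)))
    with open(out_path, "w") as fh:
        fh.write("\n".join(sorted(picked)) + "\n")
    return picked
```

Note that, as discovered later in this thread, the list should contain bare filenames, not full paths.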

antoinebio commented 12 months ago

mogrify -quality 100 -resize 25% -path images_4/ images/*

I cannot make ImageMagick work... (screenshot)

I will try the image list (images chosen at random...)

yzslab commented 12 months ago

> mogrify -quality 100 -resize 25% -path images_4/ images/*
>
> I cannot make ImageMagick work... (screenshot)
>
> I will try the image list (images chosen at random...)

Append "\*" to the source image path. Or use this: https://github.com/yzslab/gaussian-splatting-lightning/blob/main/image_downsample.py
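As an alternative sketch of the same batch-downsample plumbing (my own illustration, not the repo's image_downsample.py): create the destination folder and apply a resize backend of your choice to every source image, doing in Python the globbing that the missing "\*" left undone for mogrify.

```python
import glob
import os

def downsample_folder(src_dir, dst_dir, resize_fn):
    """Apply resize_fn(src_path, dst_path) to every file in src_dir,
    writing results under dst_dir (created if missing)."""
    os.makedirs(dst_dir, exist_ok=True)
    src_paths = sorted(glob.glob(os.path.join(src_dir, "*")))
    for src in src_paths:
        resize_fn(src, os.path.join(dst_dir, os.path.basename(src)))
    return len(src_paths)
```

Here resize_fn would typically wrap Pillow, e.g. resizing to 25% with Image.open(src).resize(...).save(dst), to mirror the mogrify command above.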

antoinebio commented 12 months ago

> A txt file containing the filenames of the images you want to use.

Like this?

(screenshot)

but the command doesn't accept it, apparently...

(screenshot)

antoinebio commented 12 months ago

> A txt file containing the filenames of the images you want to use.

Like this?

(screenshot)

OK, not with the full path but just with the filename:

(screenshot)

and it's working...

antoinebio commented 12 months ago

I selected 400 photos at random from the source folder using --data.params.colmap.image_list ...

Result of that new training (it took 2 hours to train to the end at 30K iterations):

(screenshot)

The render is not as good as when I used the full image dataset.

(screenshots)

Let me ask: when I use an image list, I assume those are the only images used for training, but are they also the dataset used for eval? Or does it take images from my initial source folder for evaluation?

Also, is it worth using the larger_dataset.yaml config file if I work with a subsampled image list?

antoinebio commented 12 months ago

Using a lower densify_until_iter: 3000, training (on the full dataset, i.e. 1600 images) took 30 min and the result is much better...

(screenshot)
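For reference, this change amounts to a config override along these lines; the densify_until_iter name comes from the original 3DGS hyper-parameters, but the exact nesting below is hypothetical, so check it against the repo's own larger_dataset.yaml:

```yaml
# hypothetical override; verify the actual key nesting in the repo's configs
model:
  densify_until_iter: 3000   # stop densification earlier (3DGS default: 15000)
```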

antoinebio commented 12 months ago

Another example with a large dataset (2127 images) where I want to test again, tweaking that densify_until_iter value.

But I still get that issue (using torch.round):

(screenshot)

and this error when using torch.floor:

(screenshot)

images (screenshot)

images_4 (screenshot)

images_8 (screenshot)
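The round-vs-floor difference matters here because, for image dimensions not divisible by the downscale factor, the two rules give expected sizes one pixel apart, and whichever one disagrees with the size the resizer actually wrote to images_4/ produces exactly this kind of "tensor a vs tensor b" shape mismatch when the render is compared to the ground truth. A small illustration (the helper and the example dimensions are mine, not code from the repo):

```python
import math

def expected_downscaled_size(width, height, factor, mode):
    """Expected pixel size after 1/factor downsampling, under two rounding rules."""
    op = round if mode == "round" else math.floor
    return op(width / factor), op(height / factor)

# A 5471x3647 photo downscaled by 4 (neither side divisible by 4):
print(expected_downscaled_size(5471, 3647, 4, "round"))  # (1368, 912)
print(expected_downscaled_size(5471, 3647, 4, "floor"))  # (1367, 911)
```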

antoinebio commented 12 months ago

Hi @yzslab, as I test more and more close-range datasets, I keep getting stuck on those "tensor a vs tensor b" size-mismatch errors. They occur at random across my test projects (around 10 different ones); sometimes the agitonerf.py script produces the right input (sparse and images folders), sometimes not.

nerfstudio doesn't fail when preparing the dataset with the following command:

ns-process-data metashape --data {data directory} --xml {xml file} --output-dir {output directory}

and there is no tensor error right after that step during training.

(screenshots)

By the way, is it possible with your gaussian splatting repo to retrain a nerfacto model (or any kind of model from nerfstudio)? Like this: (screenshot)

Should I specify the source images folder again? I fear I will face the same issue reported in this post...

antoinebio commented 12 months ago

Is it possible to start training with --resolution 4 to speed up processing, then upsample twice after 250 or 500 iterations and resume the training?

Should I do that in 2 steps?

First, python main.py fit -s --resolution 4 --iterations 5000 --checkpoint_iterations 5000

in order to get a checkpoint after 5000 iterations,

then resume with full-definition images: python main.py fit --config ... --data.path --ckpt_path CKPT_FILE_PATH.

yzslab commented 12 months ago

> Is it possible to start training with --resolution 4 to speed up processing, then upsample twice after 250 or 500 iterations and resume the training?
>
> Should I do that in 2 steps?
>
> First, python main.py fit -s --resolution 4 --iterations 5000 --checkpoint_iterations 5000
>
> in order to get a checkpoint after 5000 iterations,
>
> then resume with full-definition images: python main.py fit --config ... --data.path --ckpt_path CKPT_FILE_PATH.

It should work theoretically.

yzslab commented 12 months ago

> Should I do that in 2 steps?

You have to upsample manually.

antoinebio commented 11 months ago

@yzslab, hi, just a quick last question regarding an earlier request in this thread (and then I will close it, as lots of info has been gathered thanks to your answers).

Is it possible to start training with GS lightning from a NeRF/COLMAP-prepared model? I mean the images folder (and the downscaled images_4, etc. folders) plus the transforms.json file with the corresponding camera poses?

(screenshot)

If yes, should I work only with pinhole or fisheye camera models? I guess yes; I noticed that Gaussian Splatting doesn't handle equirectangular image input...

What about that alternative way (training with GS from a pre-trained nerfstudio model)?

> By the way, is it possible with your gaussian splatting repo to retrain a nerfacto model (or any kind of model from nerfstudio)? Like this: (screenshot)

yzslab commented 11 months ago

> Is it possible to start training with GS lightning from a NeRF/COLMAP-prepared model?

I haven't implemented it yet. Training without a sparse point cloud will lead to degraded performance.

> What about that alternative way (training with GS from a pre-trained nerfstudio model)?

GS cannot be initialized from NeRF.
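To illustrate the point about degraded performance without a sparse cloud: when no SfM points are available, the usual fallback (as in the original 3DGS code for synthetic scenes) is a uniform random initialization inside the scene bounds, which trains but typically converges to a worse result than an SfM-initialized cloud. A sketch, with my own function name:

```python
import numpy as np

def random_init_point_cloud(n_points, bounds_min, bounds_max, seed=0):
    """Uniform random fallback initialization: n_points positions inside an
    axis-aligned bounding box, with random colors in [0, 1)."""
    rng = np.random.default_rng(seed)
    xyz = rng.uniform(bounds_min, bounds_max, size=(n_points, 3))
    rgb = rng.uniform(0.0, 1.0, size=(n_points, 3))
    return xyz, rgb
```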