Is it possible to limit the number of training images from my source folder (picked at random) and run more epochs instead?
Following the principle described here:
https://stackoverflow.com/questions/4752626/epoch-vs-iteration-when-training-neural-networks
Or maybe that is not how gaussian splatting performs well?
> Is it possible to limit the number of training images from my source folder (picked at random) and run more epochs instead?
You can use an image list.
https://github.com/yzslab/gaussian-splatting-lightning/blob/main/generate_image_list.py
--data.params.colmap.image_list ...
A txt file containing the image filenames that you want to use.
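For example, a minimal sketch with standard shell tools (the repo's generate_image_list.py serves a similar purpose; its exact arguments are not shown here). DATASET_PATH is a placeholder:

```bash
# Pick 400 image filenames at random (bare names, one per line)
ls images/ | shuf -n 400 > image_list.txt

# Train on that subset only
python main.py fit \
    --data.path DATASET_PATH \
    --data.params.colmap.image_list image_list.txt
```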
mogrify -quality 100 -resize 25% -path images_4/ images/*
I cannot get ImageMagick working...
I will try the image list (images chosen at random...).
> mogrify -quality 100 -resize 25% -path images_4/ images/*
> I cannot get ImageMagick working...
> I will try the image list (images chosen at random...).
Append "\*" to the source image path. Or use this: https://github.com/yzslab/gaussian-splatting-lightning/blob/main/image_downsample.py
> A txt file containing the image filenames that you want to use.

Like this?
But the command doesn't accept it, apparently...
> A txt file containing the image filenames that you want to use.
> Like this?

OK, not with the path but just with the filename,
and it's working...
I selected 400 photos at random from the source folder using --data.params.colmap.image_list ...
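For reference, the working list contains bare filenames, one per line (the names below are illustrative):

```
IMG_0001.jpg
IMG_0002.jpg
IMG_0415.jpg
```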
Result of that new training run (it took 2 hours to train to the end at 30K iterations):
The render is not as good as when I used the full image dataset.
Let me ask: when I use an image list, I assume those are the only images used for training, but is it also the dataset used for eval? Or does it use images from my initial source folder for evaluation?
Also, is it worth using the config file larger_dataset.yaml if I work with a subsampled image list?
Using a lower densify_until_iter: 3000, training (on the full dataset, i.e. 1600 images) took 30 minutes and is much better...
Another example with a large dataset (2127 images) where I want to test tweaking that densify_until_iter value again.
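For reference, the override I am testing is a single key (shown as a sketch only; the enclosing section of larger_dataset.yaml is repo-specific and not reproduced here):

```yaml
# stop densification earlier than the default
densify_until_iter: 3000
```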
But I still get that issue (using torch.round), and that error when using torch.floor.
The dataset folders are: images, images_4, images_8.
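For illustration (a sketch, not a confirmed diagnosis): rounding versus flooring a downsampled image dimension can differ by one pixel, in which case the rendered and ground-truth tensors disagree by one row or column:

```bash
# height 3003 at 1/4 scale: round() gives 751 while floor() gives 750
python -c "import math; h, s = 3003, 4; print(round(h/s), math.floor(h/s))"
# prints: 751 750
```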
Hi @yzslab, as I test more and more close-range datasets I keep getting stuck on this "tensor a vs tensor b" error. It occurs at random across my test projects (around 10 different ones); sometimes the agitonerf.py script will produce the right input (sparse and images folders), sometimes not.
nerfstudio doesn't fail when preparing the dataset with the following command:
ns-process-data metashape --data {data directory} --xml {xml file} --output-dir {output directory}
nor during the training right after that step (no tensor error).
By the way, is it possible for your gaussian splatting repo to retrain a nerfacto model (or any kind of model from nerfstudio)? Like that?
Should I specify the source images folder again? I fear I will face the same issue reported in this post again...
Is it possible to start training with the argument --resolution 4 to speed up processing, then upsample twice after 250 or 500 iterations and resume the training?
Should I do that in 2 steps?
First,
python main.py fit -s ... --resolution 4 --iterations 5000 --checkpoint_iterations 5000
in order to get a checkpoint after 5000 iterations, then resume with full-definition images:
python main.py fit --config ... --data.path ...
> Is it possible to start training with the argument --resolution 4 to speed up processing, then upsample twice after 250 or 500 iterations and resume the training?
> Should I do that in 2 steps?
> First, python main.py fit -s ... --resolution 4 --iterations 5000 --checkpoint_iterations 5000 in order to get a checkpoint after 5000 iterations, then resume with full-definition images: python main.py fit --config ... --data.path ...

Resume with --ckpt_path CKPT_FILE_PATH. It should work theoretically.
> Should I do that in 2 steps?

You have to upsample manually.
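Putting the two steps together (a sketch using only flags mentioned in this thread; DATASET_PATH, CONFIG_FILE, and CKPT_FILE_PATH are placeholders, and the upsampling between steps is manual, as noted above):

```bash
# Step 1: quarter-resolution run, saving a checkpoint at 5000 iterations
python main.py fit -s DATASET_PATH --resolution 4 \
    --iterations 5000 --checkpoint_iterations 5000

# (manually upsample / switch back to the full-resolution images here)

# Step 2: resume training from that checkpoint on full-resolution images
python main.py fit --config CONFIG_FILE --data.path DATASET_PATH \
    --ckpt_path CKPT_FILE_PATH
```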
@yzslab, hi, just a quick last question regarding an earlier request in this post (and then I will close it, since lots of info has been gathered thanks to your answers).
Is it possible to start training with GS lightning from a NeRF COLMAP-prepared model? I mean images (plus downscaled images_4, etc. folders) and a transforms.json file with the corresponding camera poses.
If yes, should I work only with pinhole or fisheye camera models? I guess yes; I noticed that Gaussian Splatting doesn't handle equirectangular image input...
What about that alternative way (training GS from a pre-trained NERFSTUDIO model)?
> By the way, is it possible for your gaussian splatting repo to retrain a nerfacto model (or any kind of model from nerfstudio)?
> Is it possible to start training with GS lightning from a NeRF COLMAP-prepared model?

I haven't implemented it yet. Training without a sparse point cloud will lead to degraded performance.

> What about that alternative way (training GS from a pre-trained NERFSTUDIO model)?

GS cannot be initialized from NeRF.
Hi, I've trained many UAV datasets where the target objects are around 20 meters away or farther; gaussian-splatting-lightning produces awesome results, very impressive.
But I face another issue when I train ground-based imagery. It's not from a 360° action camera; the camera setup is as follows:
2 cameras mounted 4 meters above the ground, tilted 45° from vertical (nadir); one camera faces forward along the pedestrian walk, the second faces backward along it. Lots of pictures were taken, but all focused on the ground. There is very good overlap between pictures, and the bundle block adjustment is as accurate as in my UAV datasets (this is what photogrammetry does, also called aerotriangulation).
We usually work with SfM algorithms such as what Metashape offers; I am comparing that software against the gaussian-splatting 3D render.
I tried a first training with gaussian-splatting-lightning to the end, i.e. 30,000 iterations, but when I start the training I get the error below.
Does it come from the dataset and cameras being too close together? (2 camera poses per meter; 1 forward / 1 backward)
Just to let you know, training on that same dataset worked with https://github.com/jonstephens85/gaussian-splatting-Windows; here is what that gaussian splatting gives me.
But the render is pretty bad, not what I get with UAV datasets.
In my render I've got too many noisy splats;
here is the ellipsoid display,
and finally the initial point cloud layout.