Closed: axibo-reiner closed this issue 1 month ago.
Hi, theoretically this should not happen. Did you load it with the Colmap dataparser instead of the Blender one when the size mismatch happened?
On Tue, Oct 29, 2024 at 05:43 Reiner Schmidt @.***> wrote:
With the Lego Blender dataset out of the box I get this result. However, if I run it through COLMAP again, it works without issue.
cache all images
cache all images
0it [00:00, ?it/s]
648029 caching images (1st: 0): 100%|████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 29.50it/s]
648251 caching images (1st: 0): 100%|████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 27.29it/s]
648252 caching images (1st: 0): 100%|████████████████████████████████████████████████████████| 13/13 [00:00<00:00, 27.22it/s]
/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:442: PossibleUserWarning: The dataloader, val_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:378: UserWarning: You have overridden `transfer_batch_to_device` in `LightningModule` but have passed in a `LightningDataModule`. It will use the implementation from `LightningModule` instance.
  warning_cache.warn(
| 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/axibo/gaussian-splatting-lightning/main.py", line 4, in <module>
    cli()
  File "/home/axibo/gaussian-splatting-lightning/internal/entrypoints/gspl.py", line 12, in cli
    CLI(
  File "/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/cli.py", line 359, in __init__
    self._run_subcommand(self.subcommand)
  File "/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/cli.py", line 650, in _run_subcommand
    fn(**fn_kwargs)
  File "/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/home/axibo/anaconda3/envs/gspl/lib/python3.9/site-packages/lightning/pytorch/trainer/call.py", line 42, in _call_and_handle_interrupt
THIS GOES ON WITH SOME THREADS EXISTING
FINAL ERROR BEFORE EXIT:
RuntimeError: The size of tensor a (790) must match the size of tensor b (800) at non-singleton dimension 2
That size mismatch error often happens when the COLMAP sparse model does not match the image set; for example, the sparse model is the undistorted one but the images are not.
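To make the failure mode concrete, here is a minimal sketch of where the reported RuntimeError comes from, assuming the ground-truth image on disk is the 800x800 Lego PNG while the render is sized from mismatched COLMAP intrinsics (790 px wide). The tensor shapes come from the error message; everything else is illustrative:

```python
import torch

# Illustrative shapes only: (channels, height, width).
gt_image = torch.rand(3, 800, 800)  # image cached from disk (800x800 Lego PNG)
rendered = torch.rand(3, 800, 790)  # render sized from a mismatched COLMAP camera

# Any elementwise op between the two (e.g. an L1 loss) fails with exactly:
# RuntimeError: The size of tensor a (790) must match the size of tensor b (800)
#               at non-singleton dimension 2
loss = (rendered - gt_image).abs().mean()
```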
I think there is a sparse directory in your Lego dataset directory. When you have not specified the dataset type, the Colmap dataparser is selected automatically if a sparse directory exists:
https://github.com/yzslab/gaussian-splatting-lightning/blob/3e3d6701a8562a46d68883763f58ff5e77eb6104/internal/dataset.py#L349-L355
Since that sparse model does not match your images, the error occurred.
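In other words, the parser selection behaves roughly like the sketch below. The function name and file checks are illustrative simplifications of the linked lines in internal/dataset.py, not the actual implementation:

```python
import os

def pick_dataparser(dataset_path: str) -> str:
    """Simplified sketch of automatic dataparser selection."""
    # A leftover COLMAP sparse/ directory makes the Colmap parser win,
    # even for a Blender-style dataset such as nerf_synthetic/lego.
    if os.path.isdir(os.path.join(dataset_path, "sparse")):
        return "Colmap"
    # Hypothetical fallback check for a NeRF-synthetic (Blender) dataset.
    if os.path.isfile(os.path.join(dataset_path, "transforms_train.json")):
        return "Blender"
    return "Colmap"
```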
Although you eventually worked around the error by running the COLMAP sparse reconstruction again, the Blender dataparser should be used instead of the Colmap one. In your case, you need to specify the dataset type explicitly with this option: --data.parser Blender
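For example, a training invocation would then look something like the following. The fit subcommand comes from the Lightning CLI shown in the traceback; the --data.path flag and the dataset path are assumptions for illustration, so adjust them to your setup:

```bash
python main.py fit \
    --data.path data/nerf_synthetic/lego \
    --data.parser Blender
```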
Perfect, thanks.