kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV
MIT License

360 with own data not working #111

Open Borailuce99 opened 3 years ago

Borailuce99 commented 3 years ago

Describe the bug
I'm trying to prepare and train a model with my own dataset of images, but I'm having some trouble with 360°. On the one hand, in some cases the Colab COLMAP code fails with an error about not finding the camera poses. Is this because the images are wrong?

On the other hand, when the dataset and COLMAP run correctly, I train the model with those files, but when I run eval.py to check it, the rendered images do not show the object at all, just black or white images with some noise. Here I share some images used for the training.

[attached training frames: frame14, frame112, frame224]

And the resulting images all look like these: [attached renders: 008, 042, 084, 000]

Finally, I don't know where the problem is, so I would like to know whether it's just that --spheric is not working well or whether there is some problem with the original images.

Which branch you use
I'm currently working with the dev branch.
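For anyone debugging the same symptom, a minimal sanity check you can run on the COLMAP output before training, assuming the standard LLFF-style poses_bounds.npy that the repo's COLMAP scripts write (one row per registered image: a 3x5 matrix, i.e. 3x4 camera-to-world plus an [H, W, focal] column, followed by near/far bounds):

```python
# Sanity check on the recovered camera poses (assumed LLFF-style poses_bounds.npy layout).
import numpy as np

poses_bounds = np.load('poses_bounds.npy')        # shape (N_images, 17)
poses = poses_bounds[:, :15].reshape(-1, 3, 5)    # (N_images, 3, 5)
centers = poses[:, :, 3]                          # camera centers in world space
bounds = poses_bounds[:, -2:]                     # near/far per image

radii = np.linalg.norm(centers - centers.mean(0), axis=1)
print(f'{len(poses)} cameras registered by COLMAP')
print(f'distance from mean center: min={radii.min():.3f} '
      f'max={radii.max():.3f} std={radii.std():.3f}')
print(f'depth bounds: {bounds.min():.3f} .. {bounds.max():.3f}')
# For an inward-facing 360 capture the distances should be roughly equal
# (cameras on a circle), and far fewer rows than input images usually means
# COLMAP failed to register some frames (the "poses not found" error).
```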

BarnabasTakacs commented 3 years ago

Hello,

Great work, congratulations! I could not get the inward-facing 360 setup running either, although I am sure it is some minor issue. I have cameras facing towards the center in a 360° circle. The COLMAP reconstruction works fine, but the training does not converge and gives results similar to the ones above. I am attaching some shots; I can also send the images if you want me to. Any help would be appreciated.

[attached screenshots: snapshot00, Clipboard02]

Thank you,
Barnabas

qhdqhd commented 2 years ago

You are using a multi-view dataset in which many different cameras are used, so the intrinsic parameters of the cameras differ. NeRF doesn't seem to support multiple sets of intrinsic parameters at present, does it? @BarnabasTakacs
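If that is the suspicion, a rough way to verify it (same assumed poses_bounds.npy layout as in the sketch above) is to compare the per-image (H, W, focal) values; datasets/llff.py in this repo reads a single set from the first row, so all rows should agree:

```python
# Rough check for per-camera intrinsics (assumed LLFF-style poses_bounds.npy layout).
import numpy as np

poses_bounds = np.load('poses_bounds.npy')
hwf = poses_bounds[:, :15].reshape(-1, 3, 5)[:, :, 4]   # per-image [H, W, focal]
print('distinct (H, W, focal) triples:')
print(np.unique(np.round(hwf, 1), axis=0))
# More than one row printed means COLMAP estimated different intrinsics per
# image; re-running it with a single shared camera model, or resizing the
# images to a common resolution beforehand, may avoid the mismatch.
```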

imadgohar commented 2 years ago

@Borailuce99 Can you please check this error and guide me on what I am missing here? While executing the "360 inward facing scene" step I get this error:

The aspect ratio needs to be set, but when I keep the original input size as H and W it just stops working after 2 to 3 minutes.

```
/content/nerf_pl
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
  File "train.py", line 180, in <module>
    trainer.fit(system)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 730, in fit
    model.prepare_data()
  File "train.py", line 80, in prepare_data
    self.train_dataset = dataset(split='train', **kwargs)
  File "/content/nerf_pl/datasets/llff.py", line 173, in __init__
    self.read_meta()
  File "/content/nerf_pl/datasets/llff.py", line 191, in read_meta
    f'You must set @img_wh to have the same aspect ratio as ({W}, {H}) !'
AssertionError: You must set @img_wh to have the same aspect ratio as (5280.0, 2970.0) !
```

@BarnabasTakacs @qhdqhd

cocoshe commented 2 years ago

> @Borailuce99 Can you please check this error and guide me on what I am missing here? [...] AssertionError: You must set @img_wh to have the same aspect ratio as (5280.0, 2970.0) !

When you run "python train.py ......", set the "--img_wh" param to have the same aspect ratio as (5280.0, 2970.0), for example, "--img_wh 2640 1485"(half of 5280 and 2790), you need to do this cus your train images are this width-height ratio. Have a try : )

GabrielePaolini commented 2 years ago

> Describe the bug: I'm trying to prepare and train a model with my own dataset of images, but I'm having some trouble with 360°. [...] I'm currently working with the dev branch.

Were you able to solve the problem? I have the same issue with 360 images...