Borailuce99 opened 3 years ago
Hello,
Great work, congratulations! I could not get the inward-facing 360 running either, although I am sure it is some minor issue. I have 360 cameras facing towards the center in a circle. If I run the COLMAP reconstruction it works fine, but the training does not converge and gives similar errors as above. I am attaching some shots, and can also send the images if you want me to. Any help would be appreciated. Thank you, Barnabas
You are using a multi-view dataset. Many different cameras are used, so the intrinsic parameters differ between cameras. NeRF doesn't seem to support multiple sets of intrinsic parameters at present, does it? @BarnabasTakacs
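This matches how the code works: nerf_pl precomputes the per-pixel ray directions once from a single (H, W, focal) and reuses them for every image, so per-camera intrinsics are not supported out of the box. Below is a minimal sketch of that assumption (my own simplification, not the repo's exact function):

```python
import torch

def get_ray_directions(H, W, focal):
    # Pinhole model: one viewing direction per pixel, in camera coordinates.
    j, i = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32),
                          indexing='ij')
    return torch.stack([(i - W / 2) / focal,    # x: right
                        -(j - H / 2) / focal,   # y: up (image rows grow downward)
                        -torch.ones_like(i)],   # z: camera looks down -z
                       dim=-1)                  # shape (H, W, 3)

# Computed ONCE for the whole dataset: every image reuses these directions,
# rotated by its own camera-to-world pose. Per-camera focal lengths would
# require one directions grid per camera.
directions = get_ray_directions(378, 504, 407.0)  # example numbers
```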
@Borailuce99 Can you please check this error and tell me what I am missing here? While executing the "360 inward facing scene" I get the error below.
The aspect ratio needs to be set, but when I keep the original input size as H and W, training just stops working after 2 to 3 minutes.
/content/nerf_pl
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
  File "train.py", line 180, in <module>
    trainer.fit(system)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 730, in fit
    model.prepare_data()
  File "train.py", line 80, in prepare_data
    self.train_dataset = dataset(split='train', **kwargs)
  File "/content/nerf_pl/datasets/llff.py", line 173, in __init__
    self.read_meta()
  File "/content/nerf_pl/datasets/llff.py", line 191, in read_meta
    f'You must set @img_wh to have the same aspect ratio as ({W}, {H}) !'
AssertionError: You must set @img_wh to have the same aspect ratio as (5280.0, 2970.0) !
@BarnabasTakacs @qhdqhd
When you run "python train.py ......", set the "--img_wh" param to have the same aspect ratio as (5280.0, 2970.0), for example "--img_wh 2640 1485" (half of 5280 and 2970). You need to do this because your training images have that width-to-height ratio. Have a try : )
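To enumerate valid sizes, you can reduce the original resolution to its lowest-terms ratio and scale it back up. A quick sketch (the (5280, 2970) numbers come from the AssertionError above; the multipliers are arbitrary examples):

```python
from fractions import Fraction

W, H = 5280, 2970                     # size reported by the AssertionError
r = Fraction(W, H)                    # reduces to 16/9
print(f'reduced ratio: {r.numerator}:{r.denominator}')

# Any integer multiple of the reduced ratio is a valid --img_wh:
for k in (60, 80, 120, 165):          # arbitrary multipliers for illustration
    print(f'--img_wh {k * r.numerator} {k * r.denominator}')
# k=165 gives --img_wh 2640 1485, the half-resolution size suggested above
```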
Describe the bug I'm trying to prepare and train a model with my own dataset of images, but I'm having some trouble with 360º scenes. On the one hand, in some cases the Colab COLMAP code fails with an error about not finding the camera poses. Is this because the images are wrong?
On the other hand, with the dataset prepared and COLMAP executed correctly, I train the model on those files, but when I run "eval.py" to check it, the renders do not show the object, just black or white images with some noise. Here I share some images used for the training.
And the resulting images are all like these:
Finally, I don't know where the problem is, so I would like to know whether it's just that --spheric is not working well or whether there is some problem with the original images.
Which branch you use I'm currently working with the dev branch.
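Before suspecting --spheric, it can help to sanity-check what COLMAP actually recovered. Here is a small sketch, assuming the standard LLFF layout of poses_bounds.npy (17 values per image: a flattened 3x5 pose matrix plus near/far depth bounds); the red flags listed are my own suggestions:

```python
import numpy as np

data = np.load('poses_bounds.npy')       # shape (N_images, 17)
poses = data[:, :15].reshape(-1, 3, 5)   # 3x4 camera-to-world + [H, W, focal] column
bounds = data[:, 15:]                    # per-image near/far scene depths

print('registered images:', len(poses))
print('H, W, focal of first image:', poses[0, :, 4])
print('near/far range:', bounds.min(), bounds.max())

# Red flags:
# - fewer rows than input images -> COLMAP failed to register some views
# - H/W/focal differing across rows -> mixed intrinsics (see discussion above)
# - near <= 0 or an extreme far/near ratio -> sampling range is off; renders
#   then tend to come out as black/white noise like the ones described here
```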
Were you able to solve the problem? I have the same issue with 360 images...