Closed Bingrong89 closed 2 years ago
Can you attach the epipolar line visualization, as well as camera and point cloud visualization as in NeRF++ here for your data?
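For anyone unfamiliar with what an epipolar-line visualization checks: for a pixel in image 1, the fundamental matrix F maps it to a line in image 2 on which the true correspondence must lie. A minimal NumPy sketch of that computation is below (the F used is a toy fundamental matrix for a pure sideways translation, not one estimated from any real dataset, and the function names are my own):

```python
import numpy as np

def epipolar_line(F, pt):
    """Epipolar line (a, b, c) in image 2, with a*u + b*v + c = 0,
    for a pixel pt = (u, v) in image 1."""
    x = np.array([pt[0], pt[1], 1.0])  # homogeneous pixel
    l = F @ x
    # normalize so that a^2 + b^2 = 1 (point-line distance becomes metric)
    return l / np.hypot(l[0], l[1])

def point_line_distance(line, pt):
    """Distance of a pixel from a normalized line; ~0 for a good match."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c)

# Toy F for a pure sideways camera translation -> horizontal epipolar lines
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
line = epipolar_line(F, (100.0, 50.0))
# a consistent correspondence keeps the same v coordinate here
print(point_line_distance(line, (240.0, 50.0)))  # ~0 for a correct match
```

If matched feature points sit far from their epipolar lines in such a plot, the estimated poses (or intrinsics) are suspect, which is exactly what this visualization is meant to reveal.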
Nothing special was done to the images; they were captured with an iPhone and then randomly split into train/test sets.
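A random split like the one described can be sketched as follows. The `image` subfolder layout and the output paths are assumptions for illustration, not IRON's required structure:

```python
import random
import shutil
from pathlib import Path

def split_images(src_dir, out_dir, test_frac=0.1, seed=0):
    """Randomly copy images into train/test folders.
    The out_dir/<split>/image layout is a hypothetical example."""
    images = sorted(Path(src_dir).glob("*.png"))  # adjust glob for .jpg/.HEIC
    random.Random(seed).shuffle(images)          # seeded for reproducibility
    n_test = max(1, int(len(images) * test_frac))
    for split, files in (("test", images[:n_test]), ("train", images[n_test:])):
        dst = Path(out_dir) / split / "image"
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)
```

Seeding the shuffle keeps the split stable across reruns, so the train/test camera JSON files stay consistent with the image folders.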
These are the epipolar line visualization and camera visualization from my own dataset. This is the output I got from IRON.
I also ran the same for the buddha head, which I trained on successfully. Note that for the camera visualization I used the mesh file and camera path file from NeRF++; I only changed the camera JSON files for train and test.
Hi, sorry, I think I figured out the problem: a colored background appears in a number of the images used. After removing them I got a somewhat decent output, although I am still observing NaN values in stage 2 training.
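For tracking down where the remaining NaNs first appear, a simple fail-fast guard on the loss (or any intermediate array) each iteration can help. This is a generic debugging sketch, not part of IRON; the function name is my own:

```python
import numpy as np

def assert_finite(name, arr):
    """Raise as soon as a value goes NaN/Inf, so training stops at the
    first bad iteration instead of silently continuing."""
    arr = np.asarray(arr)
    if not np.isfinite(arr).all():
        raise FloatingPointError(
            f"{name} contains non-finite values "
            f"({int(np.isnan(arr).sum())} NaNs)"
        )

# example usage inside a training loop:
# assert_finite("stage2_loss", loss.detach().cpu().numpy())
```

In PyTorch specifically, `torch.autograd.set_detect_anomaly(True)` is also useful here: it reports which backward operation produced the NaN, at the cost of slower training.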
Hello! I tried to run IRON on my own dataset, but it is not producing any viable results. I assume you used NeRF++ to produce cam_dict_norm.json; however, there is very little detail on exactly how you processed the images obtained from the camera. Could you provide more details on the steps you took to process the images and create the train/test folders?
Hi, I used the script "run_colmap.py" from NeRF++ to produce cam_dict_norm.json, but the result was always bad, even for the provided dataset. For example, for the given data, the steps I took were:
But the result in normal.png after 15000 iterations was still bad, while for the original "Xmen" data the normal.png result is good even after 2500 iterations.
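One quick sanity check when normals come out bad is to verify the camera file itself. The sketch below assumes cam_dict_norm.json follows the usual NeRF++ layout (one entry per image with flattened 4x4 "K" and "W2C" matrices) -- that layout is an assumption here, so adjust the keys if your file differs:

```python
import json
import numpy as np

def check_cam_dict(path):
    """Check that each W2C is a rigid transform and report how far the
    camera centers sit from the origin (NeRF++ normalizes the scene so
    the object of interest fits roughly inside the unit sphere)."""
    with open(path) as f:
        cam_dict = json.load(f)
    centers = []
    for name, cam in cam_dict.items():
        W2C = np.array(cam["W2C"], dtype=float).reshape(4, 4)
        R, t = W2C[:3, :3], W2C[:3, 3]
        # rotation part must be orthonormal with determinant +1
        assert np.allclose(R @ R.T, np.eye(3), atol=1e-4), name
        assert np.isclose(np.linalg.det(R), 1.0, atol=1e-4), name
        centers.append(-R.T @ t)  # camera center in world coordinates
    d = np.linalg.norm(np.stack(centers), axis=1)
    print(f"camera-center distances: min={d.min():.3f} max={d.max():.3f}")
```

If the distances are wildly different between images, or far from ~1, the normalization step of run_colmap.py likely failed for that capture, which would explain bad normals regardless of the training settings.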
I'm a beginner in this field and have been troubled by this problem for a long time. Could you help me with it? Do you know which part could be wrong? @Kai-46 @Bingrong89
You can find more details in #15. That would be a great help for me!!