Kai-46 / IRON

Inverse rendering by optimizing neural SDF and materials from photometric images
BSD 2-Clause "Simplified" License
299 stars 24 forks

About preprocessing pipeline #9

Closed Bingrong89 closed 2 years ago

Bingrong89 commented 2 years ago

Hello! I tried to run IRON on my own dataset, but it is not producing any viable results. I assumed you were using NeRF++ to produce the cam_dict_norm.json, but there is very little detail on exactly how you processed the images captured by the camera. Could you provide more details on the steps you took to process the images and create the train/test folders?

Kai-46 commented 2 years ago

Can you attach the epipolar line visualization, as well as camera and point cloud visualization as in NeRF++ here for your data?

Nothing special was done to the images; they were captured with an iPhone and then randomly split into train/test sets.
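The random split described above can be sketched as follows. This is a hypothetical helper, not part of IRON; the `test_fraction` of 0.1, the seeded RNG, and the filename pattern are all assumptions for illustration.

```python
import random

def split_train_test(image_names, test_fraction=0.1, seed=0):
    """Randomly partition image filenames into (train, test) lists.

    Sorting before the seeded shuffle makes the split reproducible
    regardless of the order the filenames were listed in.
    """
    rng = random.Random(seed)
    names = sorted(image_names)
    rng.shuffle(names)
    n_test = max(1, int(round(len(names) * test_fraction)))
    return names[n_test:], names[:n_test]

# Example with 50 hypothetical image names: 45 go to train, 5 to test.
train, test = split_train_test([f"img_{i:03d}.png" for i in range(50)])
print(len(train), len(test))  # 45 5
```

Fixing the seed matters if you later regenerate the split: train and test must not mix, or the test views leak into optimization.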

Bingrong89 commented 2 years ago

cam_vis_doraemon epipolar_doraemon These are the epipolar line visualization and camera visualization from my own dataset. snapshot00 This is the output I got from IRON.

cam_vis_buddhahead epipolar_buddhahead I also ran the same for the buddha head dataset, which I trained on successfully. Note that for the camera visualization I used the mesh file and camera path file from NeRF++; I only changed the camera json files for train and test.

Bingrong89 commented 2 years ago

Hi, sorry, I think I figured out the problem: a colored background appears in a number of the images I used. After removing those images I got a somewhat decent output, although I am still observing NaN values in stage 2 training.
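If removing whole images is too wasteful, one alternative is to white out background pixels by color distance. This is only a rough sketch of that idea, not anything IRON ships: the background color, the distance threshold, and the choice of white as fill are all guesses, and a real pipeline would use proper segmentation masks instead.

```python
import numpy as np

def whiteout_background(img, bg_color=(0, 128, 255), threshold=60.0):
    """Replace pixels close to bg_color with white.

    img: uint8 HxWx3 array. Pixels whose Euclidean RGB distance to
    bg_color falls below `threshold` are set to 255 on all channels.
    """
    dist = np.linalg.norm(
        img.astype(np.float32) - np.asarray(bg_color, np.float32), axis=-1
    )
    out = img.copy()
    out[dist < threshold] = 255
    return out

# Tiny demo: one background-colored pixel gets whited out, a black
# foreground pixel is left untouched.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (0, 128, 255)
cleaned = whiteout_background(img)
```

A flat RGB threshold like this fails on shadows and color bleed, which is why per-image masks (or simply discarding the offending views, as above) are usually the safer option.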

changfali commented 2 years ago

> Hello! I tried to run IRON on my own dataset, but it is not producing any viable results. I assumed you were using NeRF++ to produce the cam_dict_norm.json, but there is very little detail on exactly how you processed the images captured by the camera. Could you provide more details on the steps you took to process the images and create the train/test folders?

Hi, I used the script "run_colmap.py" in NeRF++ to produce the cam_dict_norm.json, but the result was always bad, even for the provided dataset. For example, for the provided data, the steps I took were:

  1. Ran "run_colmap.py" on all the images in train/test;
  2. Split the images in "xmen/mvs/image" (the output of "run_colmap.py") into train and test myself;
  3. Used the json file xmen/posed_images/kai_cameras_normalized.json to train IRON.
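Step 2, splitting the COLMAP output by hand, can be done directly on the camera dictionary so that images and poses stay in sync. The sketch below assumes the json maps image filenames to per-camera entries, which matches NeRF++-style camera files only by assumption; verify the keys against your actual kai_cameras_normalized.json before using it.

```python
import random

def split_cam_dict(cam_dict, test_fraction=0.1, seed=0):
    """Split a {image_name: camera_params} dict into (train, test) dicts."""
    names = sorted(cam_dict)
    rng = random.Random(seed)
    rng.shuffle(names)
    n_test = max(1, int(round(len(names) * test_fraction)))
    test_names = set(names[:n_test])
    train = {k: v for k, v in cam_dict.items() if k not in test_names}
    test = {k: cam_dict[k] for k in test_names}
    return train, test

# Demo with 20 dummy entries standing in for real camera parameters.
demo = {f"{i:03d}.png": {"K": [], "W2C": []} for i in range(20)}
train, test = split_cam_dict(demo)

# Usage against real files would look roughly like (paths assumed):
# with open("xmen/posed_images/kai_cameras_normalized.json") as f:
#     train, test = split_cam_dict(json.load(f))
# then dump each dict to the train/ and test/ cam_dict_norm.json.
```

Splitting at the dictionary level avoids the failure mode where an image lands in one folder but its pose entry stays in the other split's json.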

But the result in normal.png after 15000 iters was still bad (see attached image), while for the original "Xmen" data the normal.png result is good even after 2500 iters.

I'm a beginner in this field and have been troubled by this problem for a long time. Could you help me with it? Do you know which part could be wrong? @Kai-46 @Bingrong89

changfali commented 2 years ago

You can find more details in #15. Any help would be greatly appreciated!