google-research / multinerf

A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
Apache License 2.0

Poor reconstruction quality when using custom data #97

Open apple3285 opened 1 year ago

apple3285 commented 1 year ago

Hello, thank you for the awesome work!!:)

My Mip-NeRF 360 training results are not good.

Could you suggest what to do to get good results?

Current experiment setup

Data used

Method

Followed the "Using your own data" section of the multinerf README, using 360.gin (no code or config files were modified).

Result


Attempts to solve

  1. Visualized the camera poses extracted by COLMAP and confirmed they were close to my actual camera positions.
  2. Changed the camera model when running COLMAP and tried "sequential" feature matching, but got similar results.
  3. Changing near/far/forward_facing/batch size/render_camtype/render_dist_percentile in 360.gin also did not improve the results.

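For attempt 1, a numerical cross-check alongside the visualization can help: the spread of the COLMAP camera centers gives a rough sense of the scene scale, which the near/far values in 360.gin should be consistent with. Below is a minimal sketch; the near/far heuristic is an assumption for illustration only, and `pose_sanity_check` is a hypothetical helper, not part of multinerf:

```python
import numpy as np

def pose_sanity_check(c2w_poses):
    """Heuristic sanity check on camera-to-world poses from COLMAP.

    c2w_poses: (N, 3, 4) or (N, 4, 4) camera-to-world matrices.
    Returns the camera centers plus a rough (near, far) suggestion derived
    from the spread of the cameras (an illustrative heuristic, not
    multinerf's actual near/far logic).
    """
    poses = np.asarray(c2w_poses, dtype=np.float64)
    centers = poses[:, :3, 3]                       # camera positions in world space
    scene_center = centers.mean(axis=0)
    radius = np.linalg.norm(centers - scene_center, axis=1).max()
    # Rough guess: near well inside the camera ring, far well beyond it.
    near = 0.1 * radius
    far = 10.0 * radius
    return centers, near, far
```

If the suggested values are orders of magnitude away from what the config uses, the poses (or their scale) are worth a closer look before retraining.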
ouyangjiacs commented 1 year ago

I trained on my own raw data set, and the rendered image is not only blurred but also has very strange colors. Is the "ColorMatrix2" parameter in the JSON file arranged the wrong way? Or is the pose computed by COLMAP (`bash scripts/local_colmap_and_resize.sh ${DATA_DIR}`) wrong? What is the problem? Rendered image: color_000. Original image (RGB, for reference): 1003-0001-4757501
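One cheap way to test the ColorMatrix2 arrangement hypothesis is to apply the 3x3 matrix to a test image both ways (as stored and transposed) and see which produces sensible colors. A rough sketch, assuming the JSON stores the matrix as a flat list of 9 numbers; `apply_color_matrix` is a hypothetical helper, not multinerf's actual raw-processing code:

```python
import numpy as np

def apply_color_matrix(image, flat_matrix, row_major=True):
    """Apply a 3x3 color matrix (e.g. DNG ColorMatrix2) to an HxWx3 image.

    flat_matrix: 9 numbers in the order they appear in the metadata JSON.
    row_major: whether that flat list is row-major (C order). If rendered
    colors look wrong, comparing against the transpose is a quick check.
    """
    m = np.asarray(flat_matrix, dtype=np.float64).reshape(3, 3)
    if not row_major:
        m = m.T
    # Per-pixel matrix multiply: out[h, w, c] = sum_k m[c, k] * image[h, w, k]
    return np.einsum('ck,hwk->hwc', m, np.asarray(image, dtype=np.float64))
```

If one orientation gives a plausible image and the other gives a strong color cast, that points at the matrix layout rather than the COLMAP poses.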

SamiAouad commented 1 year ago

I have a similar issue. Did you find any solutions?