google / stereo-magnification

Code accompanying the SIGGRAPH 2018 paper "Stereo Magnification: Learning View Synthesis using Multiplane Images"
https://people.eecs.berkeley.edu/~tinghuiz/projects/mpi/
Apache License 2.0

How to set parameters for light field data? #26

Open jingjin25 opened 4 years ago

jingjin25 commented 4 years ago

I used the following command to test the trained model on the HCI old dataset:

python ./mpi_from_images.py \
   --image1=examples/HCI_old/buddha/view_3_4.png \
   --image2=examples/HCI_old/buddha/view_5_4.png \
   --output_dir=examples/HCI_old/buddha/results \
   --yshift=45 \
   --fx=9.3750 \
   --fy=9.3750 \
   --yoffset=0.125 \
   --render_multiples=-1,-0.5,0,0.5,1,1.5,2 \
   --render

where yshift = 2 * shift, fx = fy = focalLength, and yoffset = 2 * b; here shift, focalLength, and b are the values provided by the lf.h5 file.
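For reference, this is roughly how I computed those values from the light field metadata (just a sketch; the attribute names 'shift', 'focalLength' and 'dH' are what my lf.h5 exposes, so check yours with list(f.attrs) first):

import h5py

# Read the calibration stored alongside the HCI old benchmark light field.
with h5py.File('buddha/lf.h5', 'r') as f:
    shift = float(f.attrs['shift'])                 # rectification shift per view step
    focal_length = float(f.attrs['focalLength'])
    baseline = float(f.attrs['dH'])                 # "b" above: camera spacing on the grid

# view_3_4 and view_5_4 are two rows apart, hence the factor of 2.
yshift = 2 * shift
fx = fy = focal_length
yoffset = 2 * baseline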

The results are quite poor. Could anyone help me?

reyet commented 4 years ago

Hi – I think the issue is with the MPI plane depths. The range of depths in this data is narrow and very different from the RealEstate10K data, so it's better to choose min_depth and max_depth to fit the scene.

Looking back, I see that I ran it like this:

mpi_from_images \
   --image1=$HCI/buddha/image_04_04.png \
   --image2=$HCI/buddha/image_05_04.png \
   --output_dir=$OUTPUT/buddha \
   --xshift=23 --fx=9.375 --fy=9.375 --xoffset=48 \
   --min_depth=14000 --max_depth=16000   --render

N.B. this is using a different input pair (two images horizontally adjacent) and my files were named differently, but you could make the appropriate adjustments.

It might seem odd that my xoffset is so big compared to your yoffset. But really what's important is the ratio of the plane depths to the offsets. So I think you can scale xoffset and yoffset up or down as much as you like, but you have to apply the same scaling to min_depth and max_depth.
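Here's a tiny numerical illustration of that scaling argument (a sketch only; the 768-pixel width and the uniform inverse-depth plane spacing are assumptions for the example, not necessarily exactly what the code does):

import numpy as np

fx, width = 9.375, 768   # assumed image width for the HCI data

def plane_disparities(offset, min_depth, max_depth, num_planes=32):
    # Pixel disparity of each MPI plane: proportional to fx * width * offset / depth.
    depths = 1.0 / np.linspace(1.0 / max_depth, 1.0 / min_depth, num_planes)
    return fx * width * offset / depths

d1 = plane_disparities(offset=48.0, min_depth=14000.0, max_depth=16000.0)
d2 = plane_disparities(offset=4.8,  min_depth=1400.0,  max_depth=1600.0)  # everything scaled by 0.1
print(np.allclose(d1, d2))   # True: same per-plane disparities, hence the same geometry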

I hope this helps!

jingjin25 commented 4 years ago

@reyet Many thanks! It really helps!

jingjin25 commented 3 years ago

@reyet Hi, after re-reading the code, I think that to test on a light field image with a disparity range of [-d, d], the parameters only need to satisfy the following relations:

fx * width * xoffset / min_depth = xshift + d
fx * width * xoffset / max_depth = xshift - d
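For example, plugging in the numbers from your command above (a rough check only; d = 1.5 is just an assumed disparity half-range for buddha, and width = 768 is the assumed image width):

fx, width = 9.375, 768
xoffset, xshift, d = 48.0, 23.0, 1.5

min_depth = fx * width * xoffset / (xshift + d)   # nearest plane
max_depth = fx * width * xoffset / (xshift - d)   # farthest plane
print(min_depth, max_depth)   # ~14106 and ~16074, close to the 14000/16000 you used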

Is that correct? And are there any other conditions that influence the performance?

Thanks!