isl-org / StableViewSynthesis

MIT License

some questions about test process #5

Closed: visonpon closed this issue 3 years ago

visonpon commented 3 years ago

Hi @griegler, thanks for your great work, I have some questions about the test process, hope you can help.

First, I use interpolate_waypoints in create_custom_track.py to generate a new, continuous camera path.
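The densification step above can be sketched as interpolating camera centers linearly per segment and rotations with quaternion slerp. This is a minimal stand-alone illustration, not the repo's actual interpolate_waypoints implementation; all function names here are illustrative:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1 (w,x,y,z)."""
    dot = np.dot(q0, q1)
    if dot < 0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: linear interpolation is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_track_sketch(positions, quats, n_steps=60):
    """Densify sparse keyframe poses into a smooth camera track.

    positions: (K,3) camera centers; quats: (K,4) unit quaternions.
    Returns n_steps interpolated positions and rotations.
    """
    key_t = np.linspace(0.0, 1.0, len(positions))
    out_p, out_q = [], []
    for t in np.linspace(0.0, 1.0, n_steps):
        # index of the keyframe segment containing t
        i = min(np.searchsorted(key_t, t, side="right") - 1, len(key_t) - 2)
        u = (t - key_t[i]) / (key_t[i + 1] - key_t[i])
        out_p.append((1 - u) * positions[i] + u * positions[i + 1])
        out_q.append(slerp(quats[i], quats[i + 1], u))
    return np.array(out_p), np.array(out_q)
```

In practice one would also want to smooth the keyframes (e.g. with a spline) rather than interpolating piecewise, but the segment-wise version keeps the idea visible.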

Second, I use this newly generated camera path together with the mesh reconstructed by COLMAP to render the corresponding depth maps.

Third, based on these new depth maps, I use count_nbs to compute counts.npy for each new depth map (the tgt parameters are the newly generated camera path and depth maps; the src parameters are the original camera path and depth maps). [I notice that although your tat_eval_sets are not used for training, every mesh (e.g., Truck) is reconstructed from that scene's images and the test views are chosen from among those images, so the evaluation does not generate a truly novel view; it is more like re-rendering a known image. I have tested on the provided datasets, and each generated image has a matching image in the original image folder. I wonder if I have misunderstood something?]
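The neighbour-count idea in this step can be sketched as follows: unproject every valid target depth pixel to 3D, reproject the points into each source camera, and count how many land inside that source image. This is a simplified stand-in, not the repo's count_nbs (which may apply additional checks such as depth consistency); Rt denotes a 3x4 world-to-camera matrix:

```python
import numpy as np

def count_src_views(tgt_depth, tgt_K, tgt_Rt, src_Ks, src_Rts, src_shapes):
    """For one target view, count per source view how many target pixels
    reproject inside that source image. High counts indicate good
    warping neighbours for the target view."""
    h, w = tgt_depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = tgt_depth > 0
    # unproject valid target pixels to world space
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())], axis=0)
    cam = np.linalg.inv(tgt_K) @ pix * tgt_depth[valid]
    R, t = tgt_Rt[:, :3], tgt_Rt[:, 3:]
    world = R.T @ (cam - t)
    counts = np.zeros(len(src_Ks), dtype=np.int64)
    for i, (K, Rt, (sh, sw)) in enumerate(zip(src_Ks, src_Rts, src_shapes)):
        # project world points into this source camera
        p = K @ (Rt[:, :3] @ world + Rt[:, 3:])
        z = p[2]
        infront = z > 1e-6
        x = p[0] / np.where(infront, z, 1.0)
        y = p[1] / np.where(infront, z, 1.0)
        inside = infront & (x >= 0) & (x < sw) & (y >= 0) & (y < sh)
        counts[i] = inside.sum()
    return counts
```

Given such counts for every (target, source) pair, selecting the top-k source views per target is then a simple argsort.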

Last, I use the original images, the newly generated depth maps, and the newly generated counts.npy to form a new test dataset, modify tat_tracks to contain this data, and then run exp.py.
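Assembling the new test set amounts to writing the interpolated poses, rendered depth maps, and computed counts side by side in one directory. A hedged sketch follows; the file names and layout are illustrative and may differ from what the dataset loader used by exp.py actually expects, so they should be checked against the repo's loading code:

```python
from pathlib import Path
import numpy as np

def assemble_track_dir(out_dir, Ks, Rs, ts, depth_maps, counts):
    """Write per-track camera parameters, depth maps, and neighbour counts
    to one directory. File names are illustrative, not the repo's schema."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    np.save(out / "Ks.npy", np.asarray(Ks))  # (N,3,3) intrinsics
    np.save(out / "Rs.npy", np.asarray(Rs))  # (N,3,3) rotations
    np.save(out / "ts.npy", np.asarray(ts))  # (N,3) translations
    for i, (dm, cnt) in enumerate(zip(depth_maps, counts)):
        np.save(out / f"dm_{i:08d}.npy", dm)     # rendered depth map
        np.save(out / f"count_{i:08d}.npy", cnt) # neighbour counts
    return out
```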

I have visualized the generated camera path and inspected the newly rendered depth maps, and everything looks normal, but the rendered new-view images look bad. I can't figure out where I made a mistake, hope you can give some advice, thanks~

Btw, I also tried the above process with the original images, depth maps, and counts.npy, and the generated images look normal. But since those images are part of the original image set, it seems that testing on an image that was used to reconstruct the mesh works fine, while testing on views from a newly generated camera path and depth maps gives bad results.

griegler commented 3 years ago

For the T&T test sequences I only used the poses of the test views to generate the images. See here and here. But what you describe in the beginning is rendering a completely novel trajectory. The preprocessing steps sound reasonable. Did you then use get_eval_set_trk, as it is called for example here?

visonpon commented 3 years ago

Yes, I want to render a completely novel camera path, and I used get_eval_set_tat instead of get_eval_set_trk.

griegler commented 3 years ago

To render novel trajectories you should use get_eval_set_trk; that is what I used for the visualizations.

visonpon commented 3 years ago

Oh, I got it. After using get_eval_set_trk, the newly rendered images look normal, thanks~