puresky123 closed this issue 2 years ago
The 30_test_vis folder contains the testing poses we used to create the video. You can render it yourself using the provided test_llff_downX.sh and the checkpoints (namely 30_net_Coarse.pth and 30_net_Fine.pth), so there is no need to retrain the model. You can also change 30_test_vis to 30_val_vis to see how refinement works on the validation poses.
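To make the steps above concrete, here is a minimal shell sketch. The checkpoint directory (`logs/llff`) and the way the script is invoked are assumptions based on this thread, not the verified repo layout, so adjust them to your setup:

```shell
# Sketch only: rendering with the released checkpoints, no retraining.
# CKPT_DIR and the script invocation below are assumptions from this thread.
CKPT_DIR="${CKPT_DIR:-logs/llff}"   # hypothetical location of the .pth files
VIS="${VIS:-30_test_vis}"           # switch to 30_val_vis for validation poses

# Sanity-check that both pretrained weights are present before rendering.
for f in 30_net_Coarse.pth 30_net_Fine.pth; do
  [ -f "$CKPT_DIR/$f" ] || echo "missing checkpoint: $CKPT_DIR/$f"
done

echo "rendering poses from: $VIS"
# bash scripts/test_llff_downX.sh   # the provided script does the rendering
```

Setting `VIS=30_val_vis` before running reproduces the validation-pose variant mentioned above.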
Thank you! I'll close the issue.
Hi, I have another question. If I want to test the llff_refinement model, how can I get the i_locs.npz files contained in the folder named 30_test_vis? Thank you!
Use the provided warp.py.
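As a hedged sketch of that step: neither warp.py's options nor the exact i_locs.npz naming scheme are spelled out in this thread, so the invocation and file pattern below are assumptions to be checked against the script itself:

```shell
# Assumption: warp.py writes one <i>_locs.npz file per test pose into the
# vis folder; check `python warp.py --help` for the real options.
# python warp.py ...   # hypothetical invocation, flags not documented here

VIS=30_test_vis
expected="$VIS/0_locs.npz"   # example name for pose index i=0 (assumption)
echo "after warp.py, expect files like: $expected"
```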
Thank you so much!
Hi, I'm very interested in your work! But when I ran bash scripts/test_llff_refine.sh to see the effect of the refinement model, it told me that it needs a folder named 30_test_vis, which is not provided with your pretrained model. It seems that I need to retrain the model before refinement. Could you tell me how I can see the result using only the pretrained model you provide? Thanks a lot!