I am validating your pretrained model on the GSO dataset, but I cannot match the PSNR reported in your paper because of a camera distance and object scale mismatch with my rendered data. Could you share the parameters you used to render GSO and Omni3D so that I can evaluate your method more fairly?

Btw, I have tried to align the views using `--scale`, but a subtle difference remains. I have also noticed that your output rendering scale does not match the zero123++ v1.2 prediction (as shown in the image below; left: zero123++ v1.2, right: yours).

Since you also mentioned in #66 that you train with a mix of fov=30 and fov=50, will this result in a random output scale for the object?
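For context, here is a minimal sketch of the geometry I am assuming (the `camera_distance` helper and the unit bounding-sphere radius are my own illustration, not details from your rendering pipeline). If the camera distance is adjusted per FOV so that `distance * tan(fov/2)` stays constant, the object's apparent scale in the image is the same for fov=30 and fov=50; with a single fixed distance it would not be:

```python
import math

def camera_distance(fov_deg: float, object_radius: float = 1.0) -> float:
    """Camera distance at which an object with the given bounding-sphere
    radius fills the same fraction of the image for any FOV."""
    # The apparent half-size in the image is roughly
    # object_radius / (distance * tan(fov/2)), so holding
    # distance * tan(fov/2) constant keeps the rendered scale fixed.
    return object_radius / math.tan(math.radians(fov_deg) / 2.0)

print(camera_distance(30))  # ~3.73 for a unit-radius object
print(camera_distance(50))  # ~2.14
```

If the training renders instead used one shared distance for both FOVs, the object scale would differ between samples, which might explain the variation I am seeing.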
Great work!