NIRVANALAN opened 4 years ago
Hello, the predicted depth map is a quarter of the input image in each dimension, while the camera intrinsics correspond to the full-resolution input image.
🤩 thanks for your elaboration!
Hi, during training, why don't you apply intrinsics[:2, :] /= 4
in the Dataset class, while you do so in eval.py?
@UestcJay If you take a closer look at the camera .txt files of the training and evaluation sets, you'll find that for the training dataset the camera intrinsics were already transformed with intrinsics[:2, :] /= 4
before being stored in the .txt files. For the evaluation set this is not the case, so the correction must be applied in the data loader only during evaluation. As a takeaway, you'll most likely have to add intrinsics[:2, :] /= 4
yourself if you're loading your own data.
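For reference, the scaling described above might look like the following minimal NumPy sketch. Note that `scale_intrinsics` and the sample matrix values are illustrative assumptions, not code or numbers from the MVSNet_pytorch repository:

```python
import numpy as np

def scale_intrinsics(intrinsics, factor=4):
    """Scale a 3x3 pinhole intrinsic matrix to match a depth map that is
    1/factor of the input image in each dimension.

    Hypothetical helper for illustration; not part of MVSNet_pytorch.
    """
    K = intrinsics.copy().astype(np.float64)
    # fx, fy, cx, cy all live in the first two rows of K, so dividing
    # those rows rescales the projection to the smaller resolution.
    K[:2, :] /= factor
    return K

# Example with made-up intrinsics for a full-resolution input image:
K = np.array([[2892.3,    0.0, 823.2],
              [   0.0, 2883.2, 619.1],
              [   0.0,    0.0,   1.0]])
K_quarter = scale_intrinsics(K)  # matches the quarter-resolution depth map
```

The third row is left untouched because it only carries the homogeneous coordinate; only the focal lengths and principal point need rescaling.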
Hi! I am trying to run inference on my own dataset. I wonder why you apply
intrinsics[:2, :] /= 4
in the Dataset class?