dontLoveBugs / FCRN_pytorch

Pytorch Implementation of Deeper Depth Prediction with Fully Convolutional Residual Networks

Training Dataset Format #11

Open stevenlee168 opened 2 years ago

stevenlee168 commented 2 years ago

Hello, I would like to ask about the training data format. During training, the input to the training file is two images (an RGB image and an RGBD depth map), and the output is an RGBD depth map. For the NYU Dataset, do I need to first convert the .mat file into RGB images and depth maps before training, or can I just use the .mat file directly? Thanks!
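
For reference, below is a minimal sketch (not this repo's official preprocessing) of how the labeled NYU Depth V2 .mat file (MATLAB v7.3 / HDF5) could be split into per-frame RGB and depth images. The `images` and `depths` keys and the array layout follow the official labeled dataset; the file paths, output folders, and 16-bit millimeter encoding are my own assumptions.

```python
# Sketch: split the labeled NYU Depth V2 .mat file into RGB and depth PNGs.
# Paths and output layout are hypothetical; adjust to your setup.
import os
import h5py
import numpy as np
from PIL import Image

MAT_PATH = "nyu_depth_v2_labeled.mat"   # hypothetical path to the labeled .mat file
OUT_DIR = "nyu_split"                   # hypothetical output folder

os.makedirs(os.path.join(OUT_DIR, "rgb"), exist_ok=True)
os.makedirs(os.path.join(OUT_DIR, "depth"), exist_ok=True)

with h5py.File(MAT_PATH, "r") as f:
    images = f["images"]   # (N, 3, W, H) when the MATLAB file is read through h5py
    depths = f["depths"]   # (N, W, H), depth in meters
    for i in range(images.shape[0]):
        rgb = np.ascontiguousarray(np.transpose(images[i], (2, 1, 0)))  # -> (H, W, 3) uint8
        depth = np.transpose(depths[i], (1, 0))                         # -> (H, W) meters
        Image.fromarray(rgb).save(os.path.join(OUT_DIR, "rgb", f"{i:05d}.png"))
        # store depth as 16-bit PNG in millimeters so it survives as an image file
        depth_mm = (depth * 1000.0).astype(np.uint16)
        Image.fromarray(depth_mm).save(os.path.join(OUT_DIR, "depth", f"{i:05d}.png"))
```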

zlifd commented 10 months ago

Hi, may I know whether you have solved the problem yet? I downloaded the KITTI dataset via torchvision.datasets.Kitti(); is that a correct way to prepare data for depth map estimation training? Thanks.
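
If it helps, a quick sanity check like the sketch below (my own, not from this repo) shows what torchvision.datasets.Kitti actually yields; as far as I can tell it wraps the KITTI object detection split, so inspecting one sample should make clear whether dense depth ground truth is included. The root path is a placeholder.

```python
# Sketch: inspect one sample from torchvision's Kitti dataset.
from torchvision.datasets import Kitti

# Hypothetical local root; with download=True torchvision fetches the
# KITTI object detection archives (several GB).
dataset = Kitti(root="./kitti_data", train=True, download=True)

image, target = dataset[0]
print(image.size)    # size of the left color camera frame (PIL image)
print(target[:1])    # first annotation entry; for this split these are label dicts, not a dense depth map
```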

zlifd commented 2 months ago


Updated: maybe we can take this script as a reference: https://github.com/liviniuk/DORN_depth_estimation_Pytorch/blob/master/create_nyu_h5.py
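
Building on that, here is a minimal sketch of a paired RGB/depth PyTorch Dataset, assuming the rgb/*.png and depth/*.png layout (depth stored as 16-bit millimeters) produced by the splitting sketch earlier in this thread. The class name, folder names, and scaling are assumptions, not this repo's actual dataloader.

```python
# Sketch: a minimal paired RGB/depth Dataset for the split PNGs above.
import os
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class NYUPairs(Dataset):
    def __init__(self, root):
        self.rgb_dir = os.path.join(root, "rgb")
        self.depth_dir = os.path.join(root, "depth")
        self.names = sorted(os.listdir(self.rgb_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        rgb = np.asarray(Image.open(os.path.join(self.rgb_dir, name)), dtype=np.float32) / 255.0
        depth_mm = np.asarray(Image.open(os.path.join(self.depth_dir, name)), dtype=np.float32)
        depth_m = depth_mm / 1000.0                       # back to meters
        rgb_t = torch.from_numpy(rgb).permute(2, 0, 1)    # (3, H, W) input image
        depth_t = torch.from_numpy(depth_m).unsqueeze(0)  # (1, H, W) target depth
        return rgb_t, depth_t
```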