Closed · ahccom closed this 5 years ago
Thanks for your interest in this work! The input image size can be changed as long as the camera intrinsic parameters are changed accordingly. You can take a look at the example code. The core part is how to construct the cost function according to the paper. If I have time, I will provide an OpenCV-based version of the code that is easier to understand.
Training the network is quite straightforward, with no fancy tricks; all details are provided in the paper. In my opinion, the main difficulty is collecting the dataset. You can download it from DeMoN. Also, note that training takes days.
Hello. Thanks for the code. Did you undistort the images of the RGB-D SLAM dataset? I am working on a learning-based mapping project now, and one question is how to handle image distortion: should I pre-undistort the images before training, or account for the distortion parameters during warping? Also, since the image sizes differ across datasets, did you resize the images and intrinsics to a canonical size, or manually switch between datasets during training? Thanks!
Hi, thanks for your interest :) I assume the images from the TUM RGB-D dataset are already undistorted. During training and testing, images are resized to the same resolution (320×256), and the intrinsic parameters are changed accordingly. Regards, Kaixuan
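For anyone wondering how the intrinsics should change with the resolution (this also answers the (372, 240) question below): the focal lengths and principal point simply scale with the resize factors along each axis. A minimal sketch, assuming a standard 3×3 pinhole intrinsic matrix; the helper name and the example TUM-like values are illustrative, not from the repository code:

```python
import numpy as np

def resize_intrinsics(K, old_size, new_size):
    """Rescale a 3x3 pinhole intrinsic matrix for a resized image.

    old_size, new_size: (width, height) tuples.
    Hypothetical helper for illustration; fx, cx scale with width,
    fy, cy scale with height.
    """
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    K = K.astype(float).copy()
    K[0, 0] *= sx  # fx
    K[0, 2] *= sx  # cx
    K[1, 1] *= sy  # fy
    K[1, 2] *= sy  # cy
    return K

# Example: resizing 640x480 frames to the 320x256 training resolution.
# The intrinsic values here are placeholders, not the dataset's calibration.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])
K_new = resize_intrinsics(K, (640, 480), (320, 256))
```

The same scaling applies to your own dataset: resize the images to 320×256 and feed the network the rescaled intrinsics instead of retraining for a new resolution.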
Good job! But my image size is (372, 240); how can I use the network with it? Or how can I train the network on my own dataset? Looking forward to your reply, thanks.