Yes, we filled the depth before training.
Would you please share the code you used for filling the sparse KITTI depth? I used the NYU `fill_depth_colorization`, but I think the filled depth could be better. Did you use the default parameters? And how can we get a high-quality (16-bit) depth map? This is the code I used for colorization: https://gist.github.com/ialhashim/be6235489a9c43c6d240e8331836586a#file-fill_depth_colorization-py
and I used it this way:

```python
import numpy as np
from PIL import Image

rgb = Image.open(rgb_path).convert('RGB')
depth_png = np.array(Image.open(depth_path), dtype=int)
depth = depth_png.astype(np.float32) / 256.  # KITTI uint16 -> meters
depth = depth_png.astype(np.float32)         # note: this overwrites the scaled depth above

data = {'img': rgb, 'depth': depth}
image_data = rgb.convert('L')
image_gray_arr = np.array(image_data)
data['depth_interp'] = fill_depth_colorization(image_gray_arr, depth)
data['depth_interp'] = data['depth_interp'].astype(np.float32)
```
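On the 16-bit question: KITTI depth PNGs store depth in meters multiplied by 256 as uint16, with 0 marking pixels that have no measurement. A minimal sketch for writing the filled depth back out in that format (the helper name `save_depth_16bit` and `out_path` are just illustrative):

```python
import numpy as np
from PIL import Image

def save_depth_16bit(depth_m, out_path):
    # KITTI convention: uint16 = depth in meters * 256; 0 means "no data".
    depth_png = (depth_m * 256.).astype(np.uint16)
    # Pillow maps a uint16 array to mode 'I;16' and writes it as a 16-bit PNG.
    Image.fromarray(depth_png).save(out_path)
```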
If I comment out `depth = depth_png.astype(np.float32) / 256.`, does it make any improvement to the result?
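For context, the KITTI devkit convention is that the raw uint16 values divided by 256 give depth in meters, and zeros mean no LiDAR return, so dropping the `/ 256.` changes the depth scale rather than fixing anything. A minimal reading sketch along those lines:

```python
import numpy as np
from PIL import Image

depth_png = np.array(Image.open(depth_path), dtype=int)
# Sanity check from the KITTI devkit: a proper depth map is 16-bit.
assert depth_png.max() > 255, "expected a 16-bit KITTI depth PNG"
depth = depth_png.astype(np.float32) / 256.  # raw values -> meters
valid = depth_png > 0                        # 0 marks missing measurements
```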
Dear Junjie, if it is possible for you, please share your work on the KITTI dataset as well, especially the way you complete the sparse depth and save it. My results on the KITTI dataset are not interpretable, and it generates an inappropriate mask.
Best wishes
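One thing worth checking for the mask problem: with KITTI ground truth, zero pixels mean "no measurement", so any loss or evaluation mask should exclude them. A minimal sketch of a masked L1 over NumPy arrays (the names are illustrative, not from this repo):

```python
import numpy as np

def masked_l1(pred, gt):
    # Compare only where the sparse ground truth actually has a value;
    # zeros in KITTI depth maps mean "no LiDAR return", not zero depth.
    valid = gt > 0
    return np.abs(pred[valid] - gt[valid]).mean()
```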
In your paper, you mention interpolating the pixels with missing depth in the NYU-v2 dataset. If that is done during training, it takes a massive amount of time! Please guide me on how to solve this: did you fill the depth before training?
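Since `fill_depth_colorization` solves a large sparse linear system per image, running it inside the training loop is very slow; precomputing the filled maps once and saving them to disk is the usual fix. A minimal offline sketch, assuming the gist is saved as `fill_depth_colorization.py` and that `rgb_paths`/`depth_paths`/`out_dir` are yours to define:

```python
import os
import numpy as np
from PIL import Image
from fill_depth_colorization import fill_depth_colorization  # the gist above

def precompute_filled_depth(rgb_paths, depth_paths, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for rgb_path, depth_path in zip(rgb_paths, depth_paths):
        gray = np.array(Image.open(rgb_path).convert('L'))
        depth_png = np.array(Image.open(depth_path), dtype=int)
        depth = depth_png.astype(np.float32) / 256.  # uint16 -> meters
        filled = fill_depth_colorization(gray, depth)
        # Write the result once as a 16-bit PNG; the training dataloader
        # then just reads these files instead of re-filling every epoch.
        out_path = os.path.join(out_dir, os.path.basename(depth_path))
        Image.fromarray((filled * 256.).astype(np.uint16)).save(out_path)
```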