Closed: MisEty closed this issue 1 year ago
Hi,
regarding 1: Do I understand correctly that you have mapped the RGB image onto the depth image, which creates holes? I would recommend doing it the other way around: project the depth image onto the RGB image (unproject the depth map with the depth sensor intrinsics, then re-project it with the RGB camera intrinsics). In case you cannot do this: there is no technical reason why your approach would not work, but your RGB image has severe artefacts that the neural net might overfit to.
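The depth-to-RGB reprojection suggested above could be sketched roughly as follows. This is a minimal illustration, not code from the DSAC* repository; the function name, the 4x4 extrinsic matrix `T_rgb_from_depth`, and the intrinsic matrices are all hypothetical placeholders you would replace with your Kinect v2 calibration.

```python
import numpy as np

def reproject_depth_to_rgb(depth, K_depth, K_rgb, T_rgb_from_depth, rgb_shape):
    """Reproject a depth map into the RGB camera's image plane.
    Invalid (zero) depth pixels are skipped; unmapped RGB pixels stay zero."""
    h, w = depth.shape
    # Pixel grid of the depth image
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    u, v, z = u.ravel()[valid], v.ravel()[valid], z[valid]

    # Unproject with the depth intrinsics: pixel + depth -> 3D point (depth frame)
    fx, fy, cx, cy = K_depth[0, 0], K_depth[1, 1], K_depth[0, 2], K_depth[1, 2]
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z, np.ones_like(z)])

    # Transform into the RGB camera frame, then project with the RGB intrinsics
    pts_rgb = (T_rgb_from_depth @ pts)[:3]
    proj = K_rgb @ pts_rgb
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)

    out = np.zeros(rgb_shape, dtype=depth.dtype)
    inside = (u2 >= 0) & (u2 < rgb_shape[1]) & (v2 >= 0) & (v2 < rgb_shape[0])
    # Where several points land on the same RGB pixel, keep the nearest one
    # (sort far-to-near so the nearest depth is written last)
    order = np.argsort(-pts_rgb[2][inside])
    out[v2[inside][order], u2[inside][order]] = pts_rgb[2][inside][order]
    return out
```

Note that the result can still contain holes where no depth point lands, but they are now holes in the depth channel rather than artefacts baked into the RGB image the network trains on.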
regarding 2: I am not sure what is happening, but I think it has to do with the depth map having holes. Invalid depth pixels should be set to zero. Could it be that there are images in your dataset with only invalid depth pixels? I assume some edge case is not handled in the DSAC* code. It looks like some frames are working and others are not.
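A quick sanity check along these lines could flag problematic frames before training. The function name and the threshold are hypothetical, and `depth_maps` is assumed to be any iterable of NumPy depth arrays from your dataset.

```python
import numpy as np

def find_empty_depth_frames(depth_maps, min_valid=1):
    """Return indices of depth maps that have fewer than `min_valid`
    valid (non-zero) pixels. Such frames are candidates for the edge
    case described above and could be filtered out of the dataset."""
    return [i for i, d in enumerate(depth_maps)
            if np.count_nonzero(d > 0) < min_valid]
```

Running this over the dataset and removing (or inspecting) the flagged frames would tell you whether the failures correlate with all-invalid depth maps.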
Best, Eric
Hello, I am trying to train DSAC* on my own data captured with a Kinect v2, and I have some questions: