huahangc opened this issue 1 year ago
Strictly speaking, there is no ground truth of depth. As mentioned in the paper, the depths in the dataset are estimated via STTR-light.
Thanks for your reply.
Can I compare the rendered depth maps with those in the dataset? How can I obtain the rendered depth map?
@darthandvader https://github.com/med-air/EndoNeRF/blob/2d4546f58970b7cb3bb2465daee6c36c4f68f3cb/run_endonerf.py#L421 This variable holds 1/depth (i.e., disparity). I recommend comparing in 1/disp space rather than using the depth directly.
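For illustration, a minimal sketch of that comparison might look like the following (the file names are hypothetical, and `compare_in_disparity_space` is a helper written for this example, not part of the repo):

```python
import numpy as np

def compare_in_disparity_space(rendered_disp, dataset_depth, eps=1e-8):
    """RMSE between a rendered disparity map and a dataset depth map,
    computed in 1/depth (disparity) space as recommended above."""
    # Convert the dataset depth to disparity, masking out invalid (zero) depths.
    valid = dataset_depth > eps
    dataset_disp = np.zeros_like(dataset_depth)
    dataset_disp[valid] = 1.0 / dataset_depth[valid]

    # Error over valid pixels only.
    diff = rendered_disp[valid] - dataset_disp[valid]
    return np.sqrt(np.mean(diff ** 2))

# Hypothetical file names -- adapt to however you export the maps.
rendered_disp = np.load("rendered_disp_frame000.npy")
dataset_depth = np.load("dataset_depth_frame000.npy")
print("disparity RMSE:", compare_in_disparity_space(rendered_disp, dataset_depth))
```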
Hi, I have a simple question about the GT depth map. Why can't you get the GT depth map directly from the binocular video? Is it because the error is too large, or something else? Or is it feasible to obtain a GT depth map this way in this task? Thank you.
I don't fully understand your question, but are you asking whether there is a way to obtain the real "GT depth" from binocular videos? I mean the real depths, not the estimated ones. IMO, the only way is to use depth sensors, which most endoscopes are not equipped with.
Thank you for your reply. Yes, the best way is to use depth sensors, but that is impossible in such scenarios. So, for the binocular video, I mean we can obtain the real depth by stereo matching if we have the camera parameters, right?
Stereo matching still only estimates the depth, and it requires correspondence information across the image pairs, which is not available in our case.
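To make that concrete, a minimal stereo-matching sketch with OpenCV might look like the following (file names and calibration values are placeholders; the disparity still comes out of a matching step, so the resulting depth is an estimate, not ground truth):

```python
import cv2
import numpy as np

# Load a rectified stereo pair (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classical stereo matcher; its output is still an *estimate*: it depends
# on finding pixel correspondences, which is unreliable on textureless,
# specular tissue surfaces.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disp = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

# Triangulate: depth = focal_length * baseline / disparity.
# focal_length (pixels) and baseline (meters) come from calibration;
# the values below are placeholders.
focal_length, baseline = 800.0, 0.005
valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = focal_length * baseline / disp[valid]
```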
I see, thank you very much for your reply.
How did you get the ground truth of depth in the dataset?