Closed. ghost closed this issue 7 years ago.
Hi, Dang,
Yes, it is the raw depth map provided by the dataset. I didn't check carefully what the values mean, but I assume the depth map from the Kinect camera records distance in millimeters.
I've uploaded three mat files (https://github.com/aimerykong/Recurrent-Scene-Parsing-with-Perspective-Understanding-in-the-loop/tree/master/depthSegRNN_NYUv2/sampleMatFiles) and modified the demo script. You can run demo_NYUv2.m (https://github.com/aimerykong/Recurrent-Scene-Parsing-with-Perspective-Understanding-in-the-loop/blob/master/depthSegRNN_NYUv2/demo_NYUv2.m) again to see what these maps look like.
Regards, Shu
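For reference, the meters-to-millimeters conversion discussed in this thread can be sketched as follows. This assumes the "depths" field of nyu_depth_v2_labeled.mat stores metric depth in meters (so the millimeter map is just a scale by 1000); the array values below are made up for illustration:

```python
import numpy as np

def depths_to_mm(depths_m):
    """Convert an NYUv2 depth map from meters to millimeters.

    Assumes `depths_m` is the H x W float array taken from the
    `depths` field of nyu_depth_v2_labeled.mat, with values in meters.
    """
    return depths_m * 1000.0

# Synthetic stand-in for one NYUv2 depth map (values in meters).
depth_m = np.array([[0.7, 1.25],
                    [3.0, 4.5]])
depth_mm = depths_to_mm(depth_m)
# Each value is now in millimeters, e.g. 0.7 m -> 700.0 mm.
print(depth_mm)
```

A quick sanity check on real data: Kinect-style indoor depth typically spans roughly 0.5-10 m, so the converted map should fall in the hundreds-to-thousands range if the meters assumption holds.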
Hi, Shu:
Thanks very much for the help; I really appreciate it. I will check it soon.
BTW, I attended the ICRA 2017 conference this year and noticed a paper somewhat similar to your work. However, that paper does not report experiments on semantic segmentation tasks. I have attached it to this email in case you are interested.
Thanks and Regards,
Yours, Dang.
Hi, Dang,
Thank you for the information that there is a similar paper. But I am afraid you didn't attach the paper :)
Regards, Shu
Hi Shu:
I guess GitHub stripped the email attachment. I have uploaded it here instead; please check:
https://github.com/aimerykong/Recurrent-Scene-Parsing-with-Perspective-Understanding-in-the-loop/files/1068466/5104-3223.pdf
Best Regards,
Yours, Dang Kang.
Hi, Dang,
Thank you for the link!
Yes, it looks very similar to the depth-aware gating module. It is good to know that people in the robotics field are also paying attention to using depth information more effectively :)
Regards, Shu
Hi Shu Kong:
I am reading the NYUv2 training script and wondering what currMat.datamat.depth means. Is it the dense depth value in millimeters, i.e., the "depths" field of "nyu_depth_v2_labeled.mat" multiplied by 1000?
Thanks and Regards,
Yours, Dang Kang