dsikar / cleargrasp

Cloned from git@github.com:Shreeyak/cleargrasp.git
Apache License 2.0

Boundary detection question #1

Closed kontramind closed 3 years ago

kontramind commented 3 years ago

Dear,

First of all, thank you for sharing this amazing work. I find it valuable on so many levels.

I'm trying to reproduce training on a custom dataset, but I'm still struggling with how to perform boundary detection.

The dataset is a synthetic one (rendered with PBRT instead of Blender's Cycles engine). Could you please share more detail on how the boundaries are obtained? I do have ground-truth depth images. Are the occlusion and contact edges derived from the depth data only, e.g. by treating the depth map as an image and computing gradients, or is there more to the story?

Many thanks for your time.
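(Editor's note: the gradient-based idea the question describes might look like the sketch below: treat the ground-truth depth map as an image, take Sobel gradients, and threshold the magnitude. This only illustrates that idea, not necessarily how ClearGrasp generates its labels; the file name and the threshold value are placeholders.)

```python
# Illustrative sketch of the gradient-based idea described in the
# question above: treat the ground-truth depth map as an image and mark
# pixels with a large depth gradient as candidate occlusion boundaries.
# This is one plausible approach, not necessarily ClearGrasp's pipeline.
import cv2
import numpy as np

# "depth.png" is a placeholder for a 16-bit depth image from the renderer.
depth = cv2.imread("depth.png", cv2.IMREAD_ANYDEPTH).astype(np.float32)

# Depth gradients via Sobel filters.
gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# The threshold is a made-up example value; tune it to your depth scale.
boundary = (magnitude > 0.01).astype(np.uint8) * 255
cv2.imwrite("boundary.png", boundary)
```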

dsikar commented 3 years ago

Hi Aleksandar,

No problem, I'm glad the work I put into adapting ClearGrasp for the Zivid One+ can be useful to others.

The boundaries are generated together with the synthetic dataset and serve as the labels. Keep in mind that the output of the convolutional autoencoder is 2D, so each label is a 2D image as well.

The path to the training dataset for the occlusion boundary detection convolutional autoencoder is set in:

./pytorch_networks/occlusion_boundaries/config/config.yaml
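For orientation, loading that config and inspecting the dataset paths might look like the sketch below; the `train`/`datasetsTrain`/`images`/`labels` key names are assumptions about the YAML layout, not verified against the repo, so adjust them to match your copy of config.yaml.

```python
# Minimal sketch: load the boundary-detection training config and print
# the dataset paths. The key names ("train", "datasetsTrain", "images",
# "labels") are assumptions about the YAML layout, not verified against
# the repo -- adjust them to match the actual config.yaml.
import yaml

with open("pytorch_networks/occlusion_boundaries/config/config.yaml") as f:
    config = yaml.safe_load(f)

for dataset in config["train"]["datasetsTrain"]:
    print("RGB images:", dataset["images"])
    print("Outline labels:", dataset["labels"])
```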

An example boundary label (image) can be downloaded from here:

https://github.com/dsikar/cleargrasp/blob/master/data/sample_dataset/synthetic-val/square-plastic-bottle-val/outlines/000000000-outlineSegmentation.png

Note that you will have to remap the pixel values to visualise the boundary in this example, since the raw class values are too close together to see directly (as in the sketch below). If you plot a histogram of the pixel values, you should see the outline and the background fall into different bins, depending on your binning parameters. To recap: the label (boundary image) is generated together with the synthetic dataset, and that is what is used to train the occlusion boundary detection convolutional autoencoder.
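Concretely, the remapping and histogram check described above might look like this sketch; the exact class ids are not confirmed here, so verify them against your copy of the file (the question above suggests the labels distinguish occlusion boundaries from contact edges).

```python
# Minimal sketch: inspect and visualize an outline label PNG. The raw
# class ids (e.g. 0, 1, 2) are too dark to see when the PNG is opened
# directly; verify the actual ids against your copy of the file.
import cv2
import numpy as np

label = cv2.imread("000000000-outlineSegmentation.png", cv2.IMREAD_GRAYSCALE)

# Histogram of pixel values: each class should land in its own bin.
values, counts = np.unique(label, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))

# Stretch the class ids across 0..255 so the outline becomes visible.
visible = (label.astype(np.float32) / max(int(label.max()), 1) * 255.0).astype(np.uint8)
cv2.imwrite("outline-visible.png", visible)
```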

Cheers

Daniel
