Hi,

First of all, thank you for sharing this amazing work. I find it valuable on so many levels.

I'm trying to reproduce training on a custom dataset, but I'm still struggling with how to perform boundary detection. The dataset is a synthetic one (rendered using PBRT instead of Blender's Cycles engine). Could you please share more info on how the boundaries are obtained? I do have ground-truth depth images. Are the occlusion and contact edges derived from the depth data only, e.g. by treating the depth data as an image and computing gradients, or is there more to the story?

Many thanks for your time.
Hi! Thank you, I'm glad you find it useful!
> rendered using PBRT instead of Blender's Cycles engine
Nice! I had been considering PBRT too. How was your experience?
> Are the occlusion and contact edges derived from the depth data only, e.g. by treating the depth data as an image and computing gradients, or is there more to the story?
Yes, the edges are derived by computing the gradient of the depth image. It took some trial and error to find the right parameters for our dataset. Unfortunately, I do not have the pre-processing code at this time. For contact edges, I believe I used a simple height-based heuristic: anything less than X m above the ground was labelled as a contact edge. Not an ideal solution, but it was sufficient for the ClearGrasp dataset.
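For anyone trying to reproduce this, here is a minimal sketch of that approach (not the original pre-processing script; see the update below for that). The function name, the `grad_thresh`/`contact_height` values, and the assumption that a per-pixel `height` map above the ground plane is already available (e.g. computed from the ground-truth depth and camera pose) are all mine:

```python
import cv2
import numpy as np

def depth_to_edges(depth, height, grad_thresh=0.02, contact_height=0.01):
    """Label occlusion and contact edges from a ground-truth depth map.

    Args:
        depth: (H, W) float32 depth map in meters.
        height: (H, W) float32 height of each pixel above the ground plane,
            in meters (assumed precomputed from depth and camera pose).
        grad_thresh: depth-gradient magnitude above which a pixel is an edge.
        contact_height: edge pixels closer than this to the ground become
            contact edges (the "less than X m" heuristic above).

    Returns:
        (H, W) uint8 label map: 0 = background, 1 = occlusion edge,
        2 = contact edge.
    """
    # Gradient of the depth image (Sobel filters in x and y).
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)

    # Depth discontinuities are edge candidates.
    edges = grad_mag > grad_thresh

    labels = np.zeros(depth.shape, dtype=np.uint8)
    labels[edges] = 1  # occlusion edges by default
    # Height heuristic: edge pixels near the ground plane are contact edges.
    labels[edges & (height < contact_height)] = 2
    return labels
```

The right thresholds depend on your depth units and scene scale, which is presumably where the trial and error mentioned above comes in.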
@kontramind As I mentioned in https://github.com/Shreeyak/cleargrasp/issues/21, you could also take a gradient on the instance masks of the objects to get boundaries. This will give you perfect outlines for the objects, and you can separate opaque from transparent objects in this manner.
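A quick sketch of that mask-based approach, assuming an instance-id mask in which each object carries a unique integer label; the morphological gradient (dilation minus erosion) used here is one concrete way to take a gradient on a binary mask:

```python
import cv2
import numpy as np

def mask_to_boundaries(instance_mask):
    """Extract per-object outlines from an instance-id mask.

    instance_mask: (H, W) integer array, 0 = background, k > 0 = object k.
    Returns a boolean (H, W) array that is True on object boundaries.
    """
    kernel = np.ones((3, 3), np.uint8)
    boundaries = np.zeros(instance_mask.shape, dtype=bool)
    for obj_id in np.unique(instance_mask):
        if obj_id == 0:
            continue  # skip background
        obj = (instance_mask == obj_id).astype(np.uint8)
        # Morphological gradient (dilation - erosion) leaves only the outline.
        outline = cv2.morphologyEx(obj, cv2.MORPH_GRADIENT, kernel)
        boundaries |= outline.astype(bool)
    return boundaries
```

Running this separately on the masks of transparent and opaque objects yields two distinct boundary sets, which is one way to do the isolation mentioned above.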
Update: Found the script used for creating the outlines. Have added it to the repo here: https://github.com/Shreeyak/cleargrasp/blob/master/z-ignore-scripts-helper/data_processing_script.py