We apologize for the delay in delivering the dataset.
We hope to have a dataset for one of the environments up within 7 days.
If your question is about obtaining a semantic mask dataset for an arbitrary environment in Meta-World, you can obtain one with the following method:
First, we accessed the XML within the Meta-World environment and made all objects except one transparent. We then rendered a depth image of the scene and derived a mask from it (set the mask to 0 wherever the depth value is infinity, i.e. background, and 1 otherwise).
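The depth-to-mask step described above can be sketched as follows. This is a minimal sketch assuming you already have the per-object depth image as a NumPy array; note that depending on the renderer, "infinity" background may instead appear as the far-plane depth value, so both cases are handled below (the helper name `depth_to_mask` and the tolerance are my own choices, not from the original code).

```python
import numpy as np

def depth_to_mask(depth, background_value):
    """Binary semantic mask: 1 where the object was rendered, 0 for background.

    `background_value` is whatever the renderer reports for empty space:
    literal np.inf, or the far-plane depth if the depth map is clipped there.
    """
    if np.isinf(background_value):
        mask = ~np.isinf(depth)
    else:
        # Treat anything at (or numerically near) the background depth as empty.
        mask = depth < background_value - 1e-6
    return mask.astype(np.uint8)

# Toy example: a 4x4 depth image with a 2x2 object in front of the background.
depth = np.full((4, 4), np.inf)
depth[1:3, 1:3] = 0.5
mask = depth_to_mask(depth, np.inf)
print(mask.sum())  # 4 pixels belong to the object
```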
@jayLEE0301 Thanks for sharing the code. You mentioned that you obtained depth images of these scenes. Could you please share the method used to generate the depth images, or release the depth image data? I want to generate a similar dataset. Thank you!
Semantic images of Drawer and Soccer are included in the released dataset.
We don't have depth images in the dataset, but it's true that the depth image is an intermediate byproduct of our process for obtaining the semantic mask.
To get the depth image, you can pass depth=True when you call env.render, and you will get both the RGB and depth images (e.g. env.render( ... , depth=True)).
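The call pattern described here looks roughly like the sketch below. The exact env.render signature varies across Meta-World versions, so a hypothetical stand-in renderer is used to show the two-value return and unpacking; only the commented line reflects the actual call named in this thread.

```python
import numpy as np

# Hypothetical stand-in for env.render(..., depth=True), which (per the
# comment above) returns both an RGB image and a depth image.
def render_stub(resolution=(64, 64), depth=False):
    rgb = np.zeros((*resolution, 3), dtype=np.uint8)  # dummy RGB frame
    if depth:
        depth_img = np.full(resolution, np.inf)  # dummy depth map
        return rgb, depth_img
    return rgb

# With a real Meta-World env the call would look something like:
#   rgb, depth_img = env.render(offscreen=True, camera_name="cam_7_4", depth=True)
rgb, depth_img = render_stub(depth=True)
print(rgb.shape, depth_img.shape)  # (64, 64, 3) (64, 64)
```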
Thanks for your reply. Could you share the code for obtaining the semantic mask? Thank you.
Thank you for your fabulous code. I have captured the PNG from the camera with
image7_4 = env.render(offscreen=True, camera_name="cam_7_4")
but then how can I get the semantic segmentation?