Hey, @xinranliang
I believe this is a bug because semantic labels should be generated only from what the color sensor can observe; occluded objects shouldn't receive any semantic annotations, right?
Unfortunately, the RGB and semantic meshes are separate asset files and may not have the same geometry, so this assumption does not necessarily hold. I can't comment on this exact scene, but you may want to try loading both the mesh.obj and semantic.obj into another tool (maybe Blender) to validate that the geometry is equivalent in this region.
You could also try loading this scene in the viewer application (viewer.cpp; see Interactive Testing) and toggling between RGB and semantic visualization modes as described in the UI help text. Please do let us know if it appears that there is an issue with the engine rather than with the source assets.
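For example, from a source build the viewer can typically be launched directly on a scene file (the binary location below assumes a standard build; adjust the path to your setup):

```
./build/viewer /path/to/your/scene.glb
```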
Edit: Upon further investigation, we've noticed that some changes to scene loading since the previous stable release may have removed the automated coordinate-system handling for Gibson assets, which may be causing your issue. You should either use an older, stable version of habitat-sim or specify the coordinate-system settings for the assets in a StageConfig or SceneDatasetConfig as described in the docs, specifically by assigning a stage frame. We'll work on providing this config file for Gibson assets in the near future and will update this thread when a recommended approach is available.
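As a rough illustration of the second option, a per-stage config can assign the stage frame explicitly. The sketch below is only a guess at what such a file could look like for a Gibson scene; the axis values (and possibly some field names) are assumptions, so please verify them against the StageAttributes documentation and your own assets before relying on it.

```python
import json

# Hypothetical stage config for a Gibson scene. Field names follow the
# habitat-sim stage config JSON schema as I recall it; the "up"/"front"
# values below are placeholders and must be adapted to your assets.
stage_config = {
    "render_asset": "Allensville.glb",
    "semantic_asset": "Allensville_semantic.ply",
    # Stage frame: which source-asset axes correspond to up and front.
    "up": [0.0, 0.0, 1.0],     # assumption: Z-up source asset
    "front": [0.0, 1.0, 0.0],  # assumption
}

with open("Allensville.stage_config.json", "w") as f:
    json.dump(stage_config, f, indent=2)
```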
Hi @aclegg3, thanks for the response and clarification! I wonder what the version number of the habitat-sim stable release is? My current package is installed from the main branch but it still says v0.2.1. Also, is there any instruction for installing the stable version only?
The current version is 0.2.1. However, you may be using the nightly build, which contains additional bleeding-edge features and possible breaking changes such as this one.
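If it helps, the stable package is published on the aihabitat conda channel (as opposed to aihabitat-nightly), so an install along the lines of the following should pick up the latest stable release; please check the installation docs for the exact command and variant for your platform:

```
conda install habitat-sim -c conda-forge -c aihabitat
```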
Thanks for the clarification, and yes, I believe the -c aihabitat-nightly channel is my issue here. For now I will stick with the existing semantic labels; please keep me updated in this thread once another recommended approach is available, thanks!
Another issue related to this: I'm trying to follow the scene semantic annotation tutorial in order to look at the different objects present in each scene. However, for every single scene in the Gibson dataset, when I run the following code
for obj in sim.semantic_scene.objects:
obj.aabb
obj.aabb.sizes
obj.aabb.center
obj.id
obj.obb.rotation
obj.category.name()
obj.category.index()
it always returns None for each object, and I get AttributeError: 'NoneType' object has no attribute 'aabb'. I wonder whether this error is only encountered by me and whether there's any way to fix it, as this could be a big challenge for object detection and instance segmentation.
Also, is there any further instance segmentation or object detection task setup in the Habitat simulator using semantic labels? I looked at the examples/instance_segmentation folder, but there are limited tutorials about training detectron2 models for instance segmentation on data collected from the Habitat simulator. Thanks!
The first object will be None, as objects is a direct map from instance ID to object and ID 0 is reserved for hole/void/background/etc. All other objects should be non-None.
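In other words, a minimal sketch like the one below (assuming sim is an already-constructed habitat_sim.Simulator with a semantically annotated scene loaded, as in the tutorial) should iterate the annotations without hitting the NoneType error:

```python
# Skip instance ID 0 (hole/void/background), which has no object entry.
for obj in sim.semantic_scene.objects:
    if obj is None:
        continue
    print(obj.id, obj.category.name(), obj.category.index())
    print("  aabb center:", obj.aabb.center, "sizes:", obj.aabb.sizes)
```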
No, we don't have more examples.
Hi @xinranliang ,
I tried to run the command you mentioned above:
python examples/example.py --scene Allensville --width 256 --height 256 --save_png --semantic_sensor --depth_sensor
I am running into this error: AssertionError: ESP_CHECK failed: No Stage Attributes exists for requested scene 'Allensville' in currently specified Scene Dataset default. Likely the Scene Dataset Configuration requested was not found and so a new, empty Scene Dataset was created. Verify the Scene Dataset Configuration file name used.
I created the dataset using tools/gen_gibson_semantics.sh and the 3D scene graph data from iGibson.
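(I invoked the script roughly as below, with the 3D Scene Graph annotations, the Gibson scenes, and an output directory as positional arguments; the paths are placeholders and I'm writing the argument order from memory, so please check it against the habitat-sim dataset docs.)

```
tools/gen_gibson_semantics.sh /path/to/3DSceneGraph_medium/automated_graph /path/to/gibson /path/to/output
```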
Is there anything else that needs to be done from my side to enable and get the semantic information?
Hi,
According to my logs, if you have Allensville_semantic.ply, Allensville.glb, Allensville.ids, Allensville.navmesh, and Allensville.scn for each scene with semantic annotations, and you correctly modify the path directories in both the scene dataset config and the task config, it should replicate without reporting errors. I'm running based on v1.5.0; I remember it does not work with later 2.x versions.
Hope this helps!
Habitat-Sim version
v0.2.1
Habitat is under active development, and we advise users to restrict themselves to stable releases. Are you using the latest release version of Habitat-Sim? Your question may already be addressed in the latest version. We may also not be able to help with problems in earlier versions because they sometimes lack the more verbose logging needed for debugging.
Main branch contains 'bleeding edge' code and should be used at your own risk.
Docs and Tutorials
Did you read the docs? https://aihabitat.org/docs/habitat-sim/
Did you check out the tutorials? https://aihabitat.org/tutorial/2020/
Perhaps your question is answered there. If not, carry on!
❓ Questions and Help
The change I made in demo_runner.py is saving the semantic observation as a greyscale image instead of an RGB image. Exact command I run:
RGBA observation:
Semantic observation:
Expected behavior:
Looking at the RGB observation and the ground-truth semantic labels above, we should expect the bottom-left corner to have no semantic label, because there is a wall captured by the camera there. However, the semantic sensor gives me object labels in that region, which I guess belong to the bed that is hidden behind the wall but present in the 3D scene. I believe this is a bug because semantic labels should be generated only from what the color sensor can observe; occluded objects shouldn't receive any semantic annotations, right?