Eku127 opened 1 month ago
Hey @Eku127
Glad you are finding the platform useful. I'll tackle these questions one at a time; feel free to ask further questions.
- sampling placement locations for an object on the furniture of a scanned mesh
I currently have access to the bounding box of an object, such as a bed from the MP3D or HM3D datasets. My goal is to place an object on the surface of these objects.
I suggest that you use the snap_down util from habitat-lab for this. I would:
- sample a point from the top of the bounding box
- use snap_down to test if that point projects vertically to a valid placement location on the furniture (see the sketch below)
- repeat until a valid point is found
Note that the "support_id" for scanned scenes is stage_id because there are no objects. If you did have ManagedObjects in the scene, then you would need to know the object_id of the object you are snapping onto. For that, see the Receptacle logic for ReplicaCAD and HSSD.
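A minimal sketch of that loop, assuming `sim` is an initialized Simulator with physics enabled, `obj` is a ManagedRigidObject you already added via the rigid object manager, and the bounding-box arguments come from your own bookkeeping (the helper name is mine, not a habitat API):

```python
import random

import magnum as mn
import habitat_sim
import habitat.sims.habitat_simulator.sim_utilities as sutils


def sample_placement_on_top(sim, obj, bb_center, bb_half_extents, max_tries=50):
    """Sample points on the top face of the furniture's AABB and snap_down onto the stage."""
    top_y = bb_center[1] + bb_half_extents[1]
    for _ in range(max_tries):
        # sample a point on the top face of the axis-aligned bounding box
        x = bb_center[0] + random.uniform(-bb_half_extents[0], bb_half_extents[0])
        z = bb_center[2] + random.uniform(-bb_half_extents[2], bb_half_extents[2])
        obj.translation = mn.Vector3(x, top_y + 0.1, z)  # start slightly above the surface
        # for scanned scenes the support is the stage itself, not an object_id
        if sutils.snap_down(sim, obj, [habitat_sim.stage_id]):
            return True  # obj.translation now rests on the support surface
    return False
```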
what is the best way to retrieve the floor height?
Multi-level handling is not very concrete in the current implementation. The best approaches I've seen involve sampling the navmesh and then clustering the points to get estimates of the floor heights.
We are releasing a semantic region annotation format which includes floor and ceiling height extrusions. That may be useful to make this more concrete.
In any case, clever use of the navmesh points is the best bet currently.
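For example, a rough sketch of that sampling-and-clustering idea (the helper and thresholds are my own, not a habitat API; assumes `sim` has a loaded navmesh):

```python
import numpy as np


def estimate_floor_heights(sim, num_samples=2000, bin_size=0.2, min_count=50):
    """Sample navigable points and cluster their y-values into candidate floor heights."""
    ys = np.array(
        [sim.pathfinder.get_random_navigable_point()[1] for _ in range(num_samples)]
    )
    # histogram the heights; well-populated bins correspond to floors
    bins = np.arange(ys.min() - bin_size, ys.max() + bin_size, bin_size)
    counts, edges = np.histogram(ys, bins=bins)
    # note: adjacent populated bins likely belong to the same floor and should be merged
    return [
        float(0.5 * (edges[i] + edges[i + 1]))
        for i, c in enumerate(counts)
        if c >= min_count
    ]
```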
how to get the navigational topdown map of the second floor?
The mapping code takes a vertical slice with some margin as input and uses navmesh sampling to create an occupancy grid. Once you solve the floor identification problem you can pass the desired vertical chunks into the mapping code.
A naive approach to all of the above may be to take many vertical slices of the navmesh between the scene's vertical bounds and check which one(s) have the maximum number of valid snap points. For example, check is_navigable for each point in an xz grid at that height and count the total (see the sketch below).
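A sketch of that slice-and-count idea (the helper name is mine; assumes `sim` has a loaded navmesh):

```python
import numpy as np


def topdown_occupancy(sim, floor_height, meters_per_pixel=0.1, max_y_delta=0.5):
    """Build an occupancy grid for one floor by testing is_navigable over an xz grid."""
    lower, upper = sim.pathfinder.get_bounds()
    xs = np.arange(lower[0], upper[0], meters_per_pixel)
    zs = np.arange(lower[2], upper[2], meters_per_pixel)
    grid = np.zeros((len(zs), len(xs)), dtype=bool)
    for i, z in enumerate(zs):
        for j, x in enumerate(xs):
            # a point belongs to this floor if it snaps to the navmesh within max_y_delta
            grid[i, j] = sim.pathfinder.is_navigable(
                np.array([x, floor_height, z], dtype=np.float32), max_y_delta
            )
    return grid  # compare grid.sum() across candidate heights to pick the floors
```

If it is available in your version, sim.pathfinder.get_topdown_view(meters_per_pixel, height) should give you a similar boolean map in a single call.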
Thank you for the quick response! I'm aiming to place rigid objects on chairs and beds within the scanned scenes. As you pointed out, since all mesh nodes share the same ID (stage_id), the snap_down function currently can't distinguish between surfaces like the scanned floor and a scanned bed.
However, I think I can try using both the bounding box of the object and the top-down navigation map to define the xz plane for sampling, then use the snap_down function for further positioning (roughly as in the sketch below). I will give it a try :p
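Roughly the filtering I have in mind on top of the bounding-box sampling (an untested sketch; `topdown_grid`, `grid_origin`, and `meters_per_pixel` refer to the occupancy-grid idea above, and the helper name is my own):

```python
def keep_candidate(x, z, topdown_grid, grid_origin, meters_per_pixel):
    """Reject sampled (x, z) cells that the top-down map marks as open floor."""
    i = int((z - grid_origin[2]) / meters_per_pixel)
    j = int((x - grid_origin[0]) / meters_per_pixel)
    in_bounds = 0 <= i < topdown_grid.shape[0] and 0 <= j < topdown_grid.shape[1]
    # a navigable cell is free floor, i.e. not over the furniture, so skip it
    return not (in_bounds and topdown_grid[i, j])
```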
Here I want to place the cheezit box on the chair, but it fails.
It works now with the snap_down function, using the top-down map to ensure valid placement!
Another question: is the Replica dataset also limited to scanned data only, or is it possible to obtain detailed semantic information and categories for each object, similar to the HSSD dataset?
The Replica dataset does contain semantic meshes. Once loaded, these will be used for the SemanticSensor and can be queried from the SemanticScene within the Simulator.
See the flat-shaded multi-color renderings on the main page to get a feeling for the annotations there: https://github.com/facebookresearch/Replica-Dataset
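For example, something along these lines (a sketch; assumes `sim` is a habitat_sim.Simulator with the semantic scene loaded):

```python
# iterate the annotated objects and print category and bounding-box info
for obj in sim.semantic_scene.objects:
    if obj is None or obj.category is None:
        continue  # some entries can be None and should be skipped
    print(obj.id, obj.category.name(), obj.aabb.center, obj.aabb.sizes)
```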
You are right! When I skip the NoneType objects, I can get the semantic and category info of all the objects in Replica. However, I am encountering several issues when working with the Replica dataset.
When I load the Replica room_0 scene using the following paths (set up roughly as in the snippet below):
- scene_path: Replica-Dataset/Replica_original/room_0/habitat/mesh_semantic.ply (for sim_cfg.scene_id)
- scene_config_path: Replica_original/replica.scene_dataset_config.json (for sim_cfg.scene_dataset_config_file)
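For reference, this is roughly how I set up the simulator (a simplified sketch of my config):

```python
import habitat_sim

backend_cfg = habitat_sim.SimulatorConfiguration()
backend_cfg.scene_id = "Replica-Dataset/Replica_original/room_0/habitat/mesh_semantic.ply"
backend_cfg.scene_dataset_config_file = "Replica_original/replica.scene_dataset_config.json"
backend_cfg.enable_physics = True

rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "color_sensor"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR

agent_cfg = habitat_sim.agent.AgentConfiguration(sensor_specifications=[rgb_spec])
sim = habitat_sim.Simulator(habitat_sim.Configuration(backend_cfg, [agent_cfg]))
```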
I modified the configuration in replica.scene_dataset_config.json to set up: [0, 1, 0] and front: [0, 0, -1]. With these adjustments, the scene renders perfectly. However, when I attempt to add an object, it penetrates the scene, and sutils.snap_down(sim, object, [habitat_sim.stage_id]) returns False, so I cannot place the object correctly. I have confirmed that physics is enabled in the simulation. Is there any specific adaptation required for this Replica dataset?
Further, following this issue https://github.com/facebookresearch/habitat-sim/issues/2042#issuecomment-1479887692, I changed the scene_id loading path to Replica-Dataset/Replica_original/room_0. I noticed that I am unable to modify the coordinates, even after adjusting the up and front configuration in replica.scene_dataset_config.json. Furthermore, the scene appears metallic in color, as described in https://github.com/facebookresearch/habitat-sim/issues/2335.
All the test code works well with the MP3D and HM3D datasets. How should I load the Replica dataset correctly and place objects on its surfaces? If this follow-up goes beyond the scope of the original issue, I am more than happy to open a new issue for further discussion.
Habitat-Sim version
v0.3.1
Docs and Tutorials
Did you read the docs? https://aihabitat.org/docs/habitat-sim/
Yes
Did you check out the tutorials? https://aihabitat.org/tutorial/2020/
Yes
❓ Questions and Help
Hi, thank you for your great work on the Habitat simulation platform! I have a couple of questions regarding object placement:
I currently have access to the bounding box of an object, such as a bed from the MP3D or HM3D datasets. My goal is to place an object on the surface of these objects. Following the bounding box sampling tutorial, I need to identify the plane or surface of the object. Is there an API to obtain the height information of the mesh points within the specified bounding box? If this is possible, I could use a histogram to determine the primary height interval (as one would with a point cloud), which should correspond to the object's surface.
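Roughly what I have in mind, assuming I can obtain the mesh vertices or a sampled point cloud as an (N, 3) array (the helper is my own, not a habitat API):

```python
import numpy as np


def dominant_surface_height(points, bb_min, bb_max, bin_size=0.05):
    """Histogram the y-values of points inside the bbox and return the most common height."""
    inside = np.all((points >= bb_min) & (points <= bb_max), axis=1)
    ys = points[inside, 1]
    bins = np.arange(ys.min() - bin_size, ys.max() + bin_size, bin_size)
    counts, edges = np.histogram(ys, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])  # center of the most populated height bin
```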
Second, what is the best way to retrieve the floor height? Is it possible to query this by level information? I am currently using height = sim.pathfinder.get_bounds()[0][1]. Also, how can I get the navigable top-down map of the second floor? For some scenes the level information is missing even though they contain multiple floors. Thank you for your guidance on these questions!