facebookresearch / habitat-lab

A modular high-level library to train embodied AI agents across a variety of tasks and environments.
https://aihabitat.org/
MIT License

Regarding viewpoints in ObjectNav task #345

Closed saimwani closed 4 years ago

saimwani commented 4 years ago

❓ Questions and Help

Hi,

Could you please briefly describe how the viewpoints were chosen for the object goals in the ObjectNav task? Specifically, I have these questions in mind:

  1. How is proximity quantified with respect to object size?
  2. How was the number of viewpoints for an object decided?

I could only find this in the documentation. If it is documented elsewhere, please point me to it.

Thanks!

joel99 commented 4 years ago

To add to this question: I have the following top-down map, which suggests that viewpoints (light red) can be far from the object. The code suggests that success is quantified by the distance to the closest of these viewpoints. Does this mean the agent can be quite far from the target and still succeed?
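For context, the success check being described (distance from the agent to the nearest goal viewpoint, compared against a threshold) amounts to something like the sketch below. The function and parameter names are illustrative, not the actual habitat-lab implementation, and plain Euclidean distance stands in for the geodesic distance the task actually uses:

```python
import math

def objectnav_success(agent_pos, viewpoints, called_stop, success_dist=0.1):
    """Sketch of an ObjectNav-style success check: the episode succeeds
    if the agent called STOP and its distance to the *closest* goal
    viewpoint is within `success_dist`. Illustrative only; habitat-lab
    uses geodesic distance, not the Euclidean distance used here."""
    if not called_stop:
        return False
    closest = min(math.dist(agent_pos, vp) for vp in viewpoints)
    return closest <= success_dist
```

So yes, under this formulation the agent only needs to be close to *some* viewpoint, which is why viewpoints placed far from the object would loosen the success criterion.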

dhruvbatra commented 4 years ago

CC: @Skylion007 and @mathfac

mathfac commented 4 years ago
  1. For the ObjectNav task, viewpoints are generated within a 1-meter distance of the goal object's bounding box.
  2. The number of viewpoints is proportional to the 1 m "success area" (light red) around the object; the viewpoints form a grid.
  3. Yes, the agent succeeds if it calls STOP within a 1 m radius of the object's bounding box.
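The generation procedure described above (a grid of candidate points around the goal's bounding box, kept if they are navigable and within 1 m of the box) could be sketched roughly as follows. This is a simplified illustration, not the actual habitat-lab code: `is_navigable` is a placeholder callback (in habitat-sim this would be a pathfinder query), and the bounding box is reduced to its 2D ground-plane footprint:

```python
import math

def generate_viewpoints(bbox_min, bbox_max, is_navigable,
                        max_dist=1.0, grid_step=0.25):
    """Sample a grid of candidate viewpoints around an axis-aligned
    bounding-box footprint (x/z ground plane) and keep the navigable
    ones within `max_dist` of the box. Illustrative sketch only."""
    def dist_to_bbox(x, z):
        # Distance from point (x, z) to the box footprint (0 if inside).
        dx = max(bbox_min[0] - x, 0.0, x - bbox_max[0])
        dz = max(bbox_min[1] - z, 0.0, z - bbox_max[1])
        return math.hypot(dx, dz)

    viewpoints = []
    x = bbox_min[0] - max_dist
    while x <= bbox_max[0] + max_dist:
        z = bbox_min[1] - max_dist
        while z <= bbox_max[1] + max_dist:
            if dist_to_bbox(x, z) <= max_dist and is_navigable(x, z):
                viewpoints.append((x, z))
            z += grid_step
        x += grid_step
    return viewpoints
```

Note how the number of surviving grid points naturally scales with the size of the 1 m band around the object, matching point 2 above.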

finnBsch commented 1 month ago

Hi, as a follow-up on this: I am trying to build my own ObjectNav dataset, for which I need the geodesic distance for each generated episode. However, given that most objects are not navigable, I need to generate viewpoints to compute the distance to. Could you provide more details on how these viewpoints are generated, or on how the ObjectNav episodes are generated in general?
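Once viewpoints exist, the per-episode geodesic distance is typically the shortest geodesic path from the episode start to any of the goal's viewpoints. A minimal sketch, assuming a `geodesic_distance(a, b)` callback is available (in habitat-sim this would be a pathfinder path-length query; the helper name here is hypothetical):

```python
def episode_geodesic_distance(start, viewpoints, geodesic_distance):
    """Shortest geodesic path length from the episode start position to
    any goal viewpoint. `geodesic_distance(a, b)` is a placeholder for
    the simulator's path query; any callable returning a float works.
    Sketch only, not the habitat-lab episode generator."""
    if not viewpoints:
        raise ValueError("goal has no navigable viewpoints")
    return min(geodesic_distance(start, vp) for vp in viewpoints)
```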