facebookresearch / habitat-lab

A modular high-level library to train embodied AI agents across a variety of tasks and environments.
https://aihabitat.org/
MIT License

ZeroDivisionError: SPL measure always raises this error when using Gibson-v2 #1139

Open GuoPingPan opened 1 year ago

GuoPingPan commented 1 year ago

I got the error below when using the Gibson-v2 dataset in habitat to generate some data:

Traceback (most recent call last):
  File "test.py", line 78, in <module>
    env.run()
  File "/home/hello/pgp_eai/ANM/dataset/utils/dataset_env.py", line 149, in run
    obs = self.reset()
  File "/home/hello/pgp_eai/ANM/dataset/utils/dataset_env.py", line 63, in reset
    obs = super().reset()
  File "/home/hello/anaconda3/envs/eai/lib/python3.8/contextlib.py", line 75, in inner
    return func(*args, **kwds)
  File "/home/hello/pgp_eai/habitat-lab/habitat-lab/habitat/core/env.py", line 450, in reset
    return self._env.reset()
  File "/home/hello/pgp_eai/habitat-lab/habitat-lab/habitat/core/env.py", line 288, in reset
    self._task.measurements.reset_measures(
  File "/home/hello/pgp_eai/habitat-lab/habitat-lab/habitat/core/embodied_task.py", line 165, in reset_measures
    measure.reset_metric(*args, **kwargs)
  File "/home/hello/pgp_eai/habitat-lab/habitat-lab/habitat/tasks/nav/nav.py", line 598, in reset_metric
    self.update_metric(  # type:ignore
  File "/home/hello/pgp_eai/habitat-lab/habitat-lab/habitat/tasks/nav/nav.py", line 628, in update_metric
    self._start_end_episode_distance
ZeroDivisionError: float division by zero

I found that this error comes from the division in update_metric below, where distance_to_target is read from the DistanceToGoal measure:

    def update_metric(self, episode, task, *args: Any, **kwargs: Any):
        current_position = self._sim.get_agent_state().position
        distance_to_target = task.measurements.measures[
            DistanceToGoal.cls_uuid
        ].get_metric()

        ep_soft_success = max(
            0, (1 - distance_to_target / self._start_end_episode_distance)
        )

        self._agent_episode_distance += self._euclidean_distance(
            current_position, self._previous_position
        )

        self._previous_position = current_position

        self._metric = ep_soft_success * (
            self._start_end_episode_distance
            / max(
                self._start_end_episode_distance, self._agent_episode_distance
            )
        )
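The division that actually fails is `distance_to_target / self._start_end_episode_distance`: when an episode's start position coincides with its goal, the start-to-goal geodesic distance is zero. A minimal sketch of the same arithmetic with a guard added (plain Python restating the quoted code, not habitat's actual implementation; the `eps` threshold and the "treat a degenerate episode as trivially solved" choice are my own assumptions):

```python
def soft_spl(distance_to_target: float,
             start_end_distance: float,
             agent_path_length: float,
             eps: float = 1e-6) -> float:
    """Guarded re-statement of the update_metric arithmetic quoted above.

    If start_end_distance is ~0 (start == goal), return 1.0 or 0.0
    directly instead of dividing by zero.
    """
    if start_end_distance < eps:
        # Degenerate episode: the agent starts at the goal.
        return 1.0 if distance_to_target < eps else 0.0
    soft_success = max(0.0, 1.0 - distance_to_target / start_end_distance)
    return soft_success * (
        start_end_distance / max(start_end_distance, agent_path_length)
    )
```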

So I tried to prevent it before reset by adding the code below, but I still hit the same ZeroDivisionError:

num_episode_count = 0
count = 0
while num_episode_count < self.episodes_per_scene:
    rgb_seq, depth_seq, pose_seq, action_seq = [], [], [], []

    current_episode = self.habitat_env._episode_iterator.episodes[count]
    count += 1

    # avoid distance_to_goal = 0
    self.habitat_env._task.measurements.measures['distance_to_goal'].reset_metric(current_episode)
    self.habitat_env._task.measurements.measures['success'].reset_metric(current_episode, self.habitat_env._task)
    distance = self.habitat_env._task.measurements.measures['distance_to_goal'].get_metric()
    if distance == 0.0:
        _ = next(self.habitat_env.episode_iterator)
        self.habitat_env.reconfigure(self.habitat_env._config)
        continue
    self.habitat_env._task.measurements.measures['spl'].reset_metric(current_episode, self.habitat_env._task)

    obs = self.reset()
    rgb_seq.append(obs['rgb'])
    depth_seq.append(obs['depth'])

    goal_position = self.habitat_env.current_episode.goals[0].position
    start_position = self.habitat_env.current_episode.start_position

    while not self.habitat_env.episode_over:
        best_action = self.shortest_path_follower.get_next_action(goal_position)

        if best_action is None or best_action == 0:
            break

        obs, rew, done, info = self.step(best_action)

        rgb_seq.append(obs['rgb'])
        depth_seq.append(obs['depth'])
        action_seq.append(best_action)
        pose_seq.append(self.get_gt_pose_change())

    # step threshold for episode
    if self.habitat_env._elapsed_steps < self.min_steps_per_episode:
        continue

    ...

Even when current_episode is the same (I checked the episode id), the ZeroDivisionError could in principle happen in

self.habitat_env._task.measurements.measures['spl'].reset_metric(current_episode, self.habitat_env._task)

which runs before obs = self.reset(). But in fact that SPL check passes, and the ZeroDivisionError then occurs later, in self._task.measurements.update_measures(). What's more, printing distance_to_goal at the beginning of each episode shows that many episodes start with distance_to_goal = inf.
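Those `inf` values are consistent with the goal being unreachable on the scene's navmesh: habitat's geodesic-distance query returns infinity when no path exists, so checking only for `distance == 0.0` is not enough. A small predicate sketch for screening an episode's start-to-goal distance before resetting (the `min_dist` threshold is my own assumption, not a habitat default):

```python
import math


def episode_distance_ok(geodesic_distance: float,
                        min_dist: float = 0.05) -> bool:
    """True only if the start-to-goal distance is finite (goal reachable
    on the navmesh) and not so small that the SPL denominator is zero."""
    return math.isfinite(geodesic_distance) and geodesic_distance > min_dist
```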

So I am quite confused; please give me some advice. For now I can only ignore the error by popping the SPL measure out:

config.habitat.task.measurements.pop('spl')
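Rather than dropping the SPL measure entirely, another workaround sketch is to filter degenerate episodes out of the episode list up front, so reset() never loads one. This assumes each episode object carries its precomputed start-to-goal distance in `episode.info["geodesic_distance"]`, as PointNav episode files typically do; adjust the accessor if your dataset stores it elsewhere:

```python
import math


def filter_episodes(episodes, min_dist=0.05):
    """Drop episodes whose start-to-goal geodesic distance is
    non-finite (goal unreachable) or near zero (start == goal),
    both of which break SPL. Returns the surviving episodes."""
    kept = []
    for ep in episodes:
        d = ep.info.get("geodesic_distance", float("inf"))
        if math.isfinite(d) and d > min_dist:
            kept.append(ep)
    return kept
```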
kozhukovv commented 6 months ago

I have the same problem when vectorizing the PointNav environment.