yimengli46 closed this issue 2 years ago.
Hi @yimengli46, in order to generate top-down maps you can add `TOP_DOWN_MAP` to the `MEASUREMENTS` key in the task config. You can do the same for the pick-and-place task as well. And you can use the same `examples/objectnav_replay.py` to generate videos with a top-down view for human demonstrations. Let me know if you run into any issues.
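To show what "a video with a top-down view" amounts to, here is a minimal sketch (pure NumPy, assumed shapes, not the habitat API) of stitching the egocentric RGB frame and a rendered top-down map side by side, the way a replay script typically composes each video frame. habitat-lab itself ships helpers for colorizing the `TOP_DOWN_MAP` measure (`habitat.utils.visualizations.maps`), which you would use instead of drawing the map yourself.

```python
import numpy as np

def compose_frame(egocentric_rgb: np.ndarray, top_down_rgb: np.ndarray) -> np.ndarray:
    """Resize the map to the frame height (nearest-neighbor) and concatenate."""
    h = egocentric_rgb.shape[0]
    mh, mw = top_down_rgb.shape[:2]
    scale = h / mh
    new_w = max(1, int(mw * scale))
    # nearest-neighbor resize implemented with plain numpy indexing
    rows = (np.arange(h) / scale).astype(int).clip(0, mh - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, mw - 1)
    resized = top_down_rgb[rows][:, cols]
    # side-by-side: egocentric view on the left, top-down map on the right
    return np.concatenate([egocentric_rgb, resized], axis=1)

# 256x256 egocentric frame next to a 128x96 map upscaled to the same height
frame = compose_frame(np.zeros((256, 256, 3), np.uint8),
                      np.ones((128, 96, 3), np.uint8))
print(frame.shape)  # (256, 448, 3)
```

Appending one such composed frame per step and writing them out with any video writer reproduces the split-view videos shown on the project webpage.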
Closing this issue. Please reopen the issue if you are still facing the problem.
Hi, I cannot reopen the issue so I will put my problem here.
I added `TOP_DOWN_MAP` to the `MEASUREMENTS`, and I ran into the following error:
```
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/common/environments.py", line 53, in __init__
    super().__init__(self._core_env_config, dataset)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/env.py", line 384, in __init__
    self._env = Env(config, dataset)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/env.py", line 117, in __init__
    dataset=self._dataset,
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/tasks/registration.py", line 22, in make_task
    return _task(**kwargs)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/tasks/nav/object_nav_task.py", line 277, in __init__
    super().__init__(**kwargs)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/tasks/nav/nav.py", line 1153, in __init__
    super().__init__(config=config, sim=sim, dataset=dataset)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/embodied_task.py", line 249, in __init__
    entities_config=config,
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/embodied_task.py", line 272, in _init_entities
    ), f"invalid {entity_name} type {entity_cfg.TYPE}"
AssertionError: invalid TOP_DOWN_MAP type TopDownMap

Traceback (most recent call last):
  File "/home/yimeng/miniconda3/envs/habitat_web/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/yimeng/miniconda3/envs/habitat_web/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/run.py", line 79, in <module>
    main()
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/run.py", line 40, in main
    run_exp(**vars(args))
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/run.py", line 75, in run_exp
    execute_exp(config, run_type)
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/run.py", line 60, in execute_exp
    trainer.eval()
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/common/base_trainer.py", line 111, in eval
    checkpoint_index=ckpt_idx,
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/il/env_based/il_trainer.py", line 517, in _eval_checkpoint
    self.envs = construct_envs(config, get_env_class(config.ENV_NAME))
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat_baselines/utils/env_utils.py", line 102, in construct_envs
    workers_ignore_signals=workers_ignore_signals,
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/vector_env.py", line 157, in __init__
    read_fn() for read_fn in self._connection_read_fns
  File "/home/yimeng/Datasets/habitat-web-baselines/habitat/core/vector_env.py", line 157, in <listcomp>
    read_fn() for read_fn in self._connection_read_fns
  File "/home/yimeng/miniconda3/envs/habitat_web/lib/python3.6/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/home/yimeng/miniconda3/envs/habitat_web/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
    buf = self._recv(4)
  File "/home/yimeng/miniconda3/envs/habitat_web/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
```
Basically, the error is `AssertionError: invalid TOP_DOWN_MAP type TopDownMap`. Do you have any idea how to solve it? And can I run the `il_trainer.py` in the `disk_based` folder?
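For intuition about why this assertion fires: habitat resolves each entry under `MEASUREMENTS` by looking its `TYPE` up in a name-based registry, and the lookup asserts when no class is registered under that name (for example because the entry sits under the wrong config key). The snippet below is an illustrative sketch of that pattern, not the habitat code; the class and function names are made up.

```python
# Toy name-based registry mimicking how habitat instantiates measures.
registry = {}

def register_measure(cls):
    """Register a measure class under its class name."""
    registry[cls.__name__] = cls
    return cls

@register_measure
class DistanceToGoal:  # hypothetical registered measure
    pass

def init_measure(entity_name, type_name):
    """Look the TYPE up in the registry; assert like habitat does on a miss."""
    measure_cls = registry.get(type_name)
    assert measure_cls is not None, f"invalid {entity_name} type {type_name}"
    return measure_cls()

init_measure("DISTANCE_TO_GOAL", "DistanceToGoal")  # registered: resolves fine
# init_measure("TOP_DOWN_MAP", "TopDownMap")  # unregistered: AssertionError
```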
The solution is to add `TOP_DOWN_MAP` to the `TASK.MEASUREMENTS`.
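For reference, the relevant part of the task config would look roughly like this. This is a hedged sketch following habitat-lab's YAML convention; the other measurement names shown (`DISTANCE_TO_GOAL`, `SUCCESS`, `SPL`) are placeholders for whatever your config already lists, and the exact file depends on your setup.

```yaml
TASK:
  MEASUREMENTS: ["DISTANCE_TO_GOAL", "SUCCESS", "SPL", "TOP_DOWN_MAP"]
  TOP_DOWN_MAP:
    TYPE: TopDownMap
```

The key point is that `TOP_DOWN_MAP` goes inside `TASK.MEASUREMENTS`, not at the top level of the config.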
Hi @yimengli46, sorry about that. The `SENSORS` comment in my earlier reply was a mistake. You are right; we have to add `TOP_DOWN_MAP` to the measurements.
Hi Ram, thanks for open-sourcing the code. I was trying to run the trained imitation learning model, and it seems the generated video contains only egocentric views. I want to generate a top-down map alongside the egocentric observations in the video, like the ones on the project webpage. Is there an easy way to do that? It would be great if I only needed to change a few parameters in the config file.