bdaiinstitute / vlfm

The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024)
http://naoki.io/portfolio/vlfm.html
MIT License

How to visualize the exploration process including the environment, robot, target and planned path? #26

Closed. DrDengDi closed this issue 5 months ago

DrDengDi commented 6 months ago

After running

python -m vlfm.run 

The terminal shows this error:

AttributesManagerBase.h(357)::buildAttrSrcPathsFromJSONAndLoad : No Glob path result for ./data/scene_datasets/hm3d_v0.2/val/00891-cvZr5TUy5C5/.basis.scene_instance.json
[16:11:54:313870]:[Metadata] AttributesManagerBase.h(357)::buildAttrSrcPathsFromJSONAndLoad : No Glob path result for ./data/scene_datasets/hm3d_v0.2/val/00894-HY1NcmCgn3n/.basis.scene_instance.json

along with the attached screenshot.

How can I solve this error? Also, could you tell me how to get the visualization of the environment, robot, and path shown in the README's figure, i.e., what commands should I run?

Minor comments in the README

Inconsistent file names

HM3D_OBJECTNAV is defined with objectnav_hm3d_v2.zip, but the subsequent commands unzip objectnav_hm3d_v1.zip and move the result into $DATA_DIR/datasets/objectnav/hm3d/v1:

HM3D_OBJECTNAV=https://dl.fbaipublicfiles.com/habitat/data/datasets/objectnav/hm3d/v2/objectnav_hm3d_v2.zip
wget $HM3D_OBJECTNAV &&
unzip objectnav_hm3d_v1.zip &&
mkdir -p $DATA_DIR/datasets/objectnav/hm3d  &&
mv objectnav_hm3d_v1 $DATA_DIR/datasets/objectnav/hm3d/v1 &&
rm objectnav_hm3d_v1.zip

Solution: either change HM3D_OBJECTNAV from v2 to v1, or change the unzipped file and folder names to v2, update the predefined data file to v2, and update the hardcoded data/datasets/objectnav/hm3d/v1/{split}/{split}.json.gz path to v2.
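For example, the first option might look like the following (a sketch only; the v1 download URL is an assumption based on the naming pattern of the v2 URL above):

HM3D_OBJECTNAV=https://dl.fbaipublicfiles.com/habitat/data/datasets/objectnav/hm3d/v1/objectnav_hm3d_v1.zip
wget $HM3D_OBJECTNAV &&
unzip objectnav_hm3d_v1.zip &&
mkdir -p $DATA_DIR/datasets/objectnav/hm3d &&
mv objectnav_hm3d_v1 $DATA_DIR/datasets/objectnav/hm3d/v1 &&
rm objectnav_hm3d_v1.zip

This keeps the zip name, the extracted folder, and the hardcoded v1 dataset path consistent without touching the code.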

No instruction to run the VLM servers

Connection error after running

python -m vlfm.run

HTTPConnectionPool(host='localhost', port=12182): Max retries exceeded with url: /blip2itm (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x75083089c8e0>: Failed to establish a new connection: [Errno 111] Connection refused'))

Solution: run

./scripts/launch_vlm_servers.sh

before

python -m vlfm.run
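Putting the two steps together (a sketch; it assumes the launch script starts the servers in the background and that nc is available, and takes port 12182 from the error above):

./scripts/launch_vlm_servers.sh
# wait until the BLIP-2 ITM server is accepting connections on port 12182
until nc -z localhost 12182; do sleep 2; done
python -m vlfm.run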
Neal2020GitHub commented 5 months ago

Hi @DrDengDi,

When I run python -m vlfm.run, it raises "FileNotFoundError: Could not find dataset file data/datasets/objectnav/hm3d/v1/val".

Have you encountered such an issue? It seems that the data is stored in the correct location.

Cheers!

DrDengDi commented 5 months ago

Hi @Neal2020GitHub, to solve this issue, I ran the debugger from vlfm/run.py and added a breakpoint in ~/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat/datasets/pointnav/pointnav_dataset.py (as shown in the attached screenshot) to check what the difference is between data_path and the stored file path.
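One way to reproduce this kind of inspection from the command line (a sketch; it assumes the FileNotFoundError propagates out of Hydra so that pdb can enter post-mortem mode, which may require HYDRA_FULL_ERROR=1, and that the config fields are named data_path and split as discussed above):

HYDRA_FULL_ERROR=1 python -m pdb -m vlfm.run
# after the FileNotFoundError is raised, pdb enters post-mortem debugging;
# move to the get_scenes_to_load frame (use "up"/"down" if needed) and compare the paths:
(Pdb) p config.data_path
(Pdb) p config.data_path.format(split=config.split)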

DrDengDi commented 5 months ago

@naokiyokoyama Could you tell me how to get the visualization of the environment, robot, and path shown in the README's figure, i.e., what commands should I run?

Neal2020GitHub commented 5 months ago

> Hi @Neal2020GitHub, to solve this issue, I ran the debugger from vlfm/run.py and added a breakpoint in ~/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat/datasets/pointnav/pointnav_dataset.py (as shown in the attached screenshot) to check what the difference is between data_path and the stored file path.

Thank you! It works now.

naokiyokoyama commented 5 months ago

@DrDengDi to generate the videos, you will need to add "disk" to habitat_baselines.eval.video_option ("['disk']")

You will also need to set habitat_baselines.video_dir to a directory of your choosing. It will be created if it does not exist.

What will be visualized is the ground truth shortest path (green), the agent's trajectory so far (blue), and locations of goal object instances (red).

More config options for habitat_baselines can be found here: https://github.com/facebookresearch/habitat-lab/blob/main/habitat-baselines/habitat_baselines/config/default_structured_configs.py
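For example, both options can be set as Hydra command-line overrides (a sketch; the video_dir value here is just a placeholder directory name):

python -m vlfm.run \
  habitat_baselines.eval.video_option='["disk"]' \
  habitat_baselines.video_dir=debug/videos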

naokiyokoyama commented 5 months ago

@DrDengDi Regarding the 'error' that you posted, I believe that you can ignore that. I have also updated the README per your suggestions, thank you.

Neal2020GitHub commented 5 months ago

When I tried to generate videos, it raised TypeError: unsupported format string passed to numpy.ndarray.__format__.

(screenshot of the error attached)

Any solutions for this? Thanks!

naokiyokoyama commented 5 months ago

I have pushed a fix in d79d7f1d7e047f34e048f61ce34ad85379dbaa7c, thanks for pointing it out.

PEACHTTT commented 2 months ago

@DrDengDi Hi, when I run python -m vlfm.run, it raises the errors below. Could you tell me how to fix this? I have tried the debugger, but my path seems to be correct. I stored the dataset at /home/name/vlfm-main/data/datasets/objectnav/hm3d/v1/val, and I also tried copying the dataset to the root directory, but it made no difference.

2024-07-05 16:58:53,937 Initializing dataset ObjectNav-v1
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/tcy/vlfm-main/vlfm/run.py", line 59, in <module>
    main()
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/tcy/vlfm-main/vlfm/run.py", line 55, in main
    execute_exp(cfg, "eval" if cfg.habitat_baselines.evaluate else "train")
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat_baselines/run.py", line 62, in execute_exp
    trainer.eval()
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat_baselines/common/base_trainer.py", line 132, in eval
    self._eval_checkpoint(
  File "/home/tcy/vlfm-main/vlfm/utils/vlfm_trainer.py", line 99, in _eval_checkpoint
    self._init_envs(config, is_eval=True)
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat_baselines/rl/ppo/ppo_trainer.py", line 142, in _init_envs
    self.envs = construct_envs(
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat_baselines/common/construct_vector_env.py", line 40, in construct_envs
    scenes = dataset.get_scenes_to_load(config.habitat.dataset)
  File "/home/tcy/anaconda3/envs/vlfm/lib/python3.9/site-packages/habitat/datasets/pointnav/pointnav_dataset.py", line 51, in get_scenes_to_load
    raise FileNotFoundError(
FileNotFoundError: Could not find dataset file data/datasets/objectnav/hm3d/v1/val
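For reference, the missing path in that error is the hardcoded relative path mentioned earlier in this issue, so one quick sanity check (a sketch; it assumes the data/datasets/objectnav/hm3d/v1/{split}/{split}.json.gz pattern and that the check is run from the same directory where python -m vlfm.run is launched, e.g., the repository root) is:

# the path is resolved relative to the current working directory
ls data/datasets/objectnav/hm3d/v1/val/val.json.gz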