Closed by notFoundThisPerson 1 year ago
Could you share the commands you ran and their output?
My evaluation command is
python hulc2/evaluation/evaluate_policy.py --train_folder checkpoints/HULC2_D_D/real_world_checkpoints/lang_lfp_single --checkpoint 17 --aff_train_folder checkpoints/HULC2_D_D/real_world_checkpoints/aff_model_single --aff_checkpoint last --dataset_path /home/zlm/Projects/datasets/taco-robot/500k_all_tasks_dataset_15hz --debug
Before evaluation I edited checkpoints/HULC2_D_D/real_world_checkpoints/lang_lfp_single/.hydra/config.yaml
and changed the datamodule from hulc2.datasets.play_data_module.PlayDataModule
to hulc2.datasets.hulc2_sim_data_module.Hulc2RealWorldDataModule,
since there are no "training" and "validation" folders in 500k_all_tasks_dataset_15hz. Maybe some preprocessing should be applied to the raw dataset?
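To see what was missing, I used a small sanity-check helper (my own sketch, not HULC2 code; the folder names and the auto_lang_ann.npy path are assumptions based on the loader's log messages) to verify the dataset layout before launching the evaluation:

```python
from pathlib import Path

# Hypothetical pre-flight check: verify the training/validation split and the
# precomputed language annotations that the data module appears to expect.
# The lang_folder name is an assumption taken from the loader's log output.
def check_dataset_layout(dataset_path, lang_folder="lang_paraphrase-MiniLM-L3-v2"):
    root = Path(dataset_path)
    missing = []
    for split in ("training", "validation"):
        split_dir = root / split
        if not split_dir.is_dir():
            missing.append(str(split_dir))
        ann = split_dir / lang_folder / "auto_lang_ann.npy"
        if not ann.is_file():
            missing.append(str(ann))
    return missing  # empty list means the expected layout is present
```

Running this on 500k_all_tasks_dataset_15hz reports both split folders as missing, which is what prompted the config edit above.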
The output is as follows:
python hulc2/evaluation/evaluate_policy.py --train_folder checkpoints/HULC2_D_D/real_world_checkpoints/lang_lfp_single --checkpoint 17 --aff_train_folder checkpoints/HULC2_D_D/real_world_checkpoints/aff_model_single --aff_checkpoint last --dataset_path /home/zlm/Projects/datasets/taco-robot/500k_all_tasks_dataset_15hz --debug
Global seed set to 0
Warning: Your Kaggle API key is readable by other users on this system! To fix this, you can run 'chmod 600 /home/zlm/.kaggle/kaggle.json'
trying to load lang data from: /home/zlm/Projects/datasets/taco-robot/500k_all_tasks_dataset_15hz/lang_paraphrase-MiniLM-L3-v2_singleTasks/auto_lang_ann.npy
trying to load lang data from: /home/zlm/Projects/datasets/taco-robot/500k_all_tasks_dataset_15hz/lang_paraphrase-MiniLM-L3-v2_singleTasks/auto_lang_ann.npy
Traceback (most recent call last):
File "/home/zlm/hulc2/hulc2/evaluation/evaluate_policy.py", line 94, in <module>
main()
File "/home/zlm/hulc2/hulc2/evaluation/evaluate_policy.py", line 87, in main
eval = Evaluation(args, checkpoint, env)
File "/home/zlm/hulc2/hulc2/evaluation/evaluation.py", line 54, in __init__
model, env, _, lang_embeddings = self.policy_manager.get_default_model_and_env(
File "/home/zlm/hulc2/hulc2/evaluation/manager_aff_lmp.py", line 122, in get_default_model_and_env
env = get_env(
File "/home/zlm/hulc2/hulc2/evaluation/utils.py", line 221, in get_env
render_conf = OmegaConf.load(Path(dataset_path) / ".hydra" / "merged_config.yaml")
File "/home/zlm/anaconda3/envs/hulc2/lib/python3.9/site-packages/omegaconf/omegaconf.py", line 183, in load
with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/zlm/Projects/datasets/taco-robot/500k_all_tasks_dataset_15hz/.hydra/merged_config.yaml'
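Based on the traceback, the failure comes from utils.get_env loading a render config from the dataset folder. A minimal reproduction of that load (a sketch inferred from the traceback, not the actual HULC2 source) makes the cause explicit: the sim evaluation expects a CALVIN-style dataset that ships a .hydra/merged_config.yaml, which the raw real-world dataset does not contain.

```python
from pathlib import Path

def load_render_conf(dataset_path):
    """Sketch of the load that fails in utils.get_env, per the traceback above."""
    cfg_file = Path(dataset_path) / ".hydra" / "merged_config.yaml"
    if not cfg_file.is_file():
        raise FileNotFoundError(
            f"{cfg_file} not found; the sim evaluation expects a CALVIN-style "
            "dataset directory that includes .hydra/merged_config.yaml"
        )
    # Deferred import so the existence check runs even without omegaconf installed.
    from omegaconf import OmegaConf
    return OmegaConf.load(cfg_file)
```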
When I try to run the evaluation with the CALVIN task_D_D dataset instead, the output is:
python hulc2/evaluation/evaluate_policy.py --train_folder checkpoints/HULC2_D_D/real_world_checkpoints/lang_lfp_single --checkpoint 17 --aff_train_folder checkpoints/HULC2_D_D/real_world_checkpoints/aff_model_single --aff_checkpoint last --dataset_path /home/zlm/Projects/datasets/calvin/task_D_D --debug
Global seed set to 0
trying to load lang data from: /home/zlm/Projects/datasets/calvin/task_D_D/training/lang_paraphrase-MiniLM-L3-v2/auto_lang_ann.npy
trying to load lang data from: /home/zlm/Projects/datasets/calvin/task_D_D/validation/lang_paraphrase-MiniLM-L3-v2/auto_lang_ann.npy
pybullet build time: May 20 2022 19:45:31
argv[0]=--width=200
argv[1]=--height=200
EGL device choice: -1 of 1.
Loaded EGL 1.5 after reload.
GL_VENDOR=NVIDIA Corporation
GL_RENDERER=NVIDIA GeForce RTX 3070/PCIe/SSE2
GL_VERSION=4.6.0 NVIDIA 470.199.02
GL_SHADING_LANGUAGE_VERSION=4.60 NVIDIA
Version = 4.6.0 NVIDIA 470.199.02
Vendor = NVIDIA Corporation
Renderer = NVIDIA GeForce RTX 3070/PCIe/SSE2
ven = NVIDIA Corporation
Traceback (most recent call last):
File "/home/zlm/hulc2/hulc2/evaluation/evaluate_policy.py", line 94, in <module>
main()
File "/home/zlm/hulc2/hulc2/evaluation/evaluate_policy.py", line 87, in main
eval = Evaluation(args, checkpoint, env)
File "/home/zlm/hulc2/hulc2/evaluation/evaluation.py", line 54, in __init__
model, env, _, lang_embeddings = self.policy_manager.get_default_model_and_env(
File "/home/zlm/hulc2/hulc2/evaluation/manager_aff_lmp.py", line 129, in get_default_model_and_env
rollout_cfg = OmegaConf.load(Path(__file__).parents[2] / "config/hulc2/rollout/aff_hulc2.yaml")
File "/home/zlm/anaconda3/envs/hulc2/lib/python3.9/site-packages/omegaconf/omegaconf.py", line 183, in load
with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/zlm/hulc2/config/hulc2/rollout/aff_hulc2.yaml'
disconnecting id 0 from server
Destroy EGL OpenGL window.
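For context, this second traceback shows the rollout config being resolved relative to the script's own location. A tiny sketch of that resolution (the helper name is mine; the path logic follows the traceback):

```python
from pathlib import Path

def rollout_cfg_path(manager_file):
    # Path(__file__).parents[2] walks two levels up from
    # .../hulc2/hulc2/evaluation/manager_aff_lmp.py to the repo root,
    # then appends the rollout config path that the traceback reports missing.
    repo_root = Path(manager_file).parents[2]
    return repo_root / "config/hulc2/rollout/aff_hulc2.yaml"
```

So the script is looking for config/hulc2/rollout/aff_hulc2.yaml under the repo root, and that file does not exist in my checkout.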
Thank you very much!
You are trying to evaluate the real-world checkpoints with the CALVIN sim benchmark evaluation script, which won't work. I have updated the instructions for doing real-world inference with a Panda robot and the provided checkpoints; you need to use hulc2/rollout/real_world_eval_combined.py.
Thank you very much, I'll try it.
Thank you very much for sharing your great work! I found some errors when running evaluate_policy.py on your pretrained checkpoints as described in the README.md, and I'm not sure whether this is a bug or whether some extra configuration is needed before running the code: