niiceMing / CMTA

(NeurIPS 2023) Contrastive Modules with Temporal Attention for Multi-Task Reinforcement Learning

The code won't run even when following Getting Started; please provide a detailed, step-by-step guide to running it #12

Open yyds-xtt opened 1 month ago

yyds-xtt commented 1 month ago

Getting Started

We should install the local mtenv lib first:

```shell
cd src/mtenv
pip install -e .
```

Then you can use the following instructions to run CMTA:

```shell
cd scripts
bash CMTA.sh $seed
```

`$seed` can be 1, 10, 42, ...
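Before launching training, it may help to confirm that the editable install is visible to the interpreter. A minimal sketch (hypothetical check, not part of the repo; assumes the training script imports `mtenv` and `mtrl`):

```python
# Hypothetical sanity check: confirm the packages the training script
# imports can be found after `pip install -e .` in the active environment.
import importlib.util

for pkg in ("mtenv", "mtrl"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'found' if found else 'MISSING'}")
```

If `mtenv` prints MISSING, the editable install did not land in the environment you are running from.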

niiceMing commented 3 weeks ago

How am I supposed to diagnose this without the error message?

yyds-xtt commented 3 weeks ago

(1) I changed config/config.yaml to the following:

```yaml
defaults:

experiment:
  name: metaworld-mt10
  num_eval_episodes: 1
  num_train_steps: 2500000
  eval_only: False
  random_pos: False
  save_dir: '/data/zzm/CMTA-main/save_dir'
  builder: Experiment

replay_buffer:
  batch_size: 1280

agent:
  multitask:
    num_envs: 10
    should_use_disjoint_policy: True
    should_use_disentangled_alpha: True
    should_use_task_encoder: True
    should_use_multi_head_policy: False
    actor_cfg:
      should_condition_model_on_task_info: False
      should_condition_encoder_on_task_info: True
      should_concatenate_task_info_with_encoder: True
    task_encoder_cfg:
      model_cfg:
        pretrained_embedding_cfg:
          should_use: False
  encoder:
    type_to_select: moe
    moe:  # added the moe section
      task_id_to_encoder_id_cfg:
        mode: rnn_attention  # configure the specific parameters
      num_experts: 6
```

(2) Running the command produces the following error:

```
(torch1.8) zzm@amax:~/CMTA-main/scripts$ bash CMTA.sh 1
../main.py:11: UserWarning: The version_base parameter is not specified. Please specify a compatability version level, or None. Will assume defaults for version 1.1
  @hydra.main(config_path="config", config_name="config")
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'config': Defaults list is missing _self_. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
  warnings.warn(msg, UserWarning)
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/core/default_element.py:128: UserWarning: In 'logbook/mtrl': Usage of deprecated keyword in package header '# @package group'. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
  See {url} for more information"""
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/core/default_element.py:128: UserWarning: In 'agent/state_sac': Usage of deprecated keyword in package header '# @package group'. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
  See {url} for more information"""
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/core/default_element.py:128: UserWarning: In 'env/metaworld-mt10': Usage of deprecated keyword in package header '# @package group'. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
  See {url} for more information"""
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/core/default_element.py:128: UserWarning: In 'setup/metaworld': Usage of deprecated keyword in package header '# @package group'. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/changes_to_package_header for more information
  See {url} for more information"""
/data/zzm/anaconda3/envs/torch1.8/lib/python3.6/site-packages/hydra/_internal/hydra.py:127: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default. See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
  configure_logging=with_log_configuration,
setup:
  seed: 1
  setup: metaworld
  algo: CMTA_info2500_mt10
  base_path: /data/zzm/CMTA-main/scripts
  dir_name: logs_fix
  save_dir: ${setup.base_path}/${setup.dir_name}/${setup.id}
  device: cuda:0
  id: CMTA_info2500_mt10_seed_1
  description: Sample Task
  tags: null
  git:
    commit_id: null
    has_uncommitted_changes: null
    issue_id: null
  date: '2024-10-31 21:17:26'
  slurm_id: '-1'
  debug:
    should_enable: false
env:
  name: metaworld-mt10
  num_envs: 10
  benchmark:
    target: metaworld.MT10
  builder:
    make_kwargs:
      should_perform_reward_normalization: true
  dummy:
    target: metaworld.MT1
    env_name: pick-place-v1
  description:
    reach-v1: Reach a goal position. Randomize the goal positions.
    push-v1: Push the puck to a goal. Randomize puck and goal positions.
    pick-place-v1: Pick and place a puck to a goal. Randomize puck and goal positions.
    door-open-v1: Open a door with a revolving joint. Randomize door positions.
    drawer-open-v1: Open a drawer. Randomize drawer positions.
    drawer-close-v1: Push and close a drawer. Randomize the drawer positions.
    button-press-topdown-v1: Press a button from the top. Randomize button positions.
    peg-insert-side-v1: Insert a peg sideways. Randomize peg and goal positions.
    window-open-v1: Push and open a window. Randomize window positions.
    window-close-v1: Push and close a window. Randomize window positions.
  ordered_task_list: null
agent:
  name: state_sac
  encoder_feature_dim: 64
  num_layers: 0
  num_filters: 0
  builder:
    target: mtrl.agent.sac.Agent
    actor_cfg: ${agent.actor}
    critic_cfg: ${agent.critic}
    multitask_cfg: ${agent.multitask}
    alpha_optimizer_cfg: ${agent.optimizers.alpha}
    actor_optimizer_cfg: ${agent.optimizers.actor}
    critic_optimizer_cfg: ${agent.optimizers.critic}
    discount: 0.99
    init_temperature: 1.0
    actor_update_freq: 1
    critic_tau: 0.005
    critic_target_update_freq: 1
    encoder_tau: 0.05
  multitask:
    num_envs: 10
    should_use_disentangled_alpha: true
    should_use_task_encoder: true
    should_use_multi_head_policy: false
    actor_cfg:
      should_condition_model_on_task_info: false
      should_condition_encoder_on_task_info: true
      should_concatenate_task_info_with_encoder: true
    task_encoder_cfg:
      model_cfg:
        pretrained_embedding_cfg:
          should_use: false
  encoder:
    type_to_select: moe
    moe:
      task_id_to_encoder_id_cfg:
        mode: rnn_attention
      num_experts: 6
logbook:
  target: ml_logger.logbook.make_config
  write_to_console: false
  logger_dir: ${setup.save_dir}
  create_multiple_log_files: false
experiment:
  num_eval_episodes: 1
  num_train_steps: 2500000
  eval_only: false
  random_pos: false
  save_dir: /data/zzm/CMTA-main/save_dir
  builder: mtrl.experiment.multitask.Experiment
replay_buffer:
  batch_size: 1280
```

```
[2024-10-31 21:17:26,428][default_logger][INFO] - {"setup": {"seed": 1, "setup": "metaworld", "algo": "CMTA_info2500_mt10", "base_path": "/data/zzm/CMTA-main/scripts", "dir_name": "logs_fix", "save_dir": "${setup.base_path}/${setup.dir_name}/${setup.id}", "device": "cuda:0", "id": "CMTA_info2500_mt10_seed_1", "description": "Sample Task", "tags": null, "git": {"commit_id": null, "has_uncommitted_changes": null, "issue_id": null}, "date": "2024-10-31 21:17:26", "slurm_id": "-1", "debug": {"should_enable": false}}, "env": {"name": "metaworld-mt10", "num_envs": 10, "benchmark": {"target": "metaworld.MT10"}, "builder": {"make_kwargs": {"should_perform_reward_normalization": true}}, "dummy": {"target": "metaworld.MT1", "env_name": "pick-place-v1"}, "description": {"reach-v1": "Reach a goal position. Randomize the goal positions.", "push-v1": "Push the puck to a goal. Randomize puck and goal positions.", "pick-place-v1": "Pick and place a puck to a goal. Randomize puck and goal positions.", "door-open-v1": "Open a door with a revolving joint. Randomize door positions.", "drawer-open-v1": "Open a drawer. Randomize drawer positions.", "drawer-close-v1": "Push and close a drawer. Randomize the drawer positions.", "button-press-topdown-v1": "Press a button from the top. Randomize button positions.", "peg-insert-side-v1": "Insert a peg sideways. Randomize peg and goal positions.", "window-open-v1": "Push and open a window. Randomize window positions.", "window-close-v1": "Push and close a window. Randomize window positions."}, "ordered_task_list": null}, "agent": {"name": "state_sac", "encoder_feature_dim": 64, "num_layers": 0, "num_filters": 0, "builder": {"target": "mtrl.agent.sac.Agent", "actor_cfg": "${agent.actor}", "critic_cfg": "${agent.critic}", "multitask_cfg": "${agent.multitask}", "alpha_optimizer_cfg": "${agent.optimizers.alpha}", "actor_optimizer_cfg": "${agent.optimizers.actor}", "critic_optimizer_cfg": "${agent.optimizers.critic}", "discount": 0.99, "init_temperature": 1.0, "actor_update_freq": 1, "critic_tau": 0.005, "critic_target_update_freq": 1, "encoder_tau": 0.05}, "multitask": {"num_envs": 10, "should_use_disentangled_alpha": true, "should_use_task_encoder": true, "should_use_multi_head_policy": false, "actor_cfg": {"should_condition_model_on_task_info": false, "should_condition_encoder_on_task_info": true, "should_concatenate_task_info_with_encoder": true}, "task_encoder_cfg": {"model_cfg": {"pretrained_embedding_cfg": {"should_use": false}}}}, "encoder": {"type_to_select": "moe", "moe": {"task_id_to_encoder_id_cfg": {"mode": "rnn_attention"}, "num_experts": 6}}}, "logbook": {"target": "ml_logger.logbook.make_config", "write_to_console": false, "logger_dir": "${setup.save_dir}", "create_multiple_log_files": false}, "experiment": {"num_eval_episodes": 1, "num_train_steps": 2500000, "eval_only": false, "random_pos": false, "save_dir": "/data/zzm/CMTA-main/save_dir", "builder": "mtrl.experiment.multitask.Experiment"}, "replay_buffer": {"batch_size": 1280}, "status": "RUNNING", "logbook_id": "0", "logbook_timestamp": "09:17:26PM CST Oct 31, 2024", "logbook_type": "metadata"}
Starting Experiment at Thu Oct 31 21:17:26 2024
torch version = 1.8.0
Error executing job with overrides: ['setup=metaworld', 'setup.algo=CMTA_info2500_mt10', 'env=metaworld-mt10', 'agent=state_sac', 'experiment.num_eval_episodes=1', 'experiment.num_train_steps=2500000', 'experiment.eval_only=False', 'experiment.random_pos=False', 'setup.seed=1', 'setup.dir_name=logs_fix', 'replay_buffer.batch_size=1280', 'agent.multitask.num_envs=10', 'agent.multitask.should_use_disentangled_alpha=True', 'agent.multitask.should_use_task_encoder=True', 'agent.encoder.type_to_select=moe', 'agent.multitask.should_use_multi_head_policy=False', 'agent.encoder.moe.task_id_to_encoder_id_cfg.mode=rnn_attention', 'agent.encoder.moe.num_experts=6', 'agent.multitask.actor_cfg.should_condition_model_on_task_info=False', 'agent.multitask.actor_cfg.should_condition_encoder_on_task_info=True', 'agent.multitask.actor_cfg.should_concatenate_task_info_with_encoder=True', 'agent.multitask.task_encoder_cfg.model_cfg.pretrained_embedding_cfg.should_use=False']
Cannot instantiate config of type str. Top level config must be an OmegaConf DictConfig/ListConfig object, a plain dict/list, or a Structured Config class or instance.

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
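The last line of the log ("Cannot instantiate config of type str") typically means a Hydra-style `instantiate()` call received a plain string where it expected a config node with a `_target_` key. A minimal sketch of that pattern (hypothetical code, not Hydra's actual implementation):

```python
# Hypothetical mini-version of Hydra-style instantiation: a dict-like
# config with a "_target_" dotted path is imported and constructed;
# a bare string is rejected, mirroring the error in the log above.
import importlib

def instantiate(cfg):
    if not isinstance(cfg, dict):
        raise TypeError(
            f"Cannot instantiate config of type {type(cfg).__name__}. "
            "Top level config must be a dict-like config object."
        )
    module_name, _, cls_name = cfg["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_name), cls_name)
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return cls(**kwargs)

# A fully qualified _target_ works; a bare string raises TypeError.
obj = instantiate({"_target_": "collections.OrderedDict"})
print(type(obj).__name__)  # OrderedDict
```

So a config value that reaches `instantiate()` as a raw string, rather than as a config block, would trip exactly this check.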

yyds-xtt commented 3 weeks ago

I ran it exactly as the instructions say, and it still won't start.

niiceMing commented 3 weeks ago

Try getting this codebase running first: https://github.com/facebookresearch/mtrl

yyds-xtt commented 3 weeks ago

What should the value of `builder` be in config/config.yaml? I suspect that's the part I got wrong. I'm new to multi-task reinforcement learning, so any guidance would be appreciated.

```yaml
experiment:
  name: metaworld-mt10
  num_eval_episodes: 1
  num_train_steps: 2500000
  eval_only: False
  random_pos: False
  save_dir: '/data/zzm/CMTA-main/save_dir'
  builder: Experiment
```
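For what it's worth, the resolved config in the log above shows `builder: mtrl.experiment.multitask.Experiment`, i.e. a fully qualified dotted path, because such builder strings are typically resolved via a dynamic import. A minimal sketch of that resolution (hypothetical helper, not the repo's code):

```python
# Hypothetical resolver: split "pkg.module.Class" into a module path and
# an attribute, import the module, and fetch the class. A bare name like
# "Experiment" has no module portion and cannot be resolved this way.
import importlib

def resolve_builder(path: str):
    module_name, _, attr = path.rpartition(".")
    if not module_name:
        raise ValueError(f"'{path}' is not a fully qualified dotted path")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# Works with any importable dotted path, e.g. a stdlib class:
cls = resolve_builder("collections.OrderedDict")
print(cls)  # <class 'collections.OrderedDict'>
```

Under this assumption, `builder: Experiment` would fail to resolve, while the full `mtrl.experiment.multitask.Experiment` path would import cleanly.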