oxwhirl / smac

SMAC: The StarCraft Multi-Agent Challenge
MIT License

Run QMIX on RLlib does not work #99

Open xiaoToby opened 1 year ago

xiaoToby commented 1 year ago

When I run the example code, I get the following error; the logs are below:

```
(RolloutWorker pid=44372) ray::RolloutWorker.__init__() (pid=44372, ip=127.0.0.1, repr=<ray.rllib.evaluation.rollout_worker.RolloutWorker object at 0x000002577D997BB0>)
(RolloutWorker pid=44372)   File "python\ray\_raylet.pyx", line 658, in ray._raylet.execute_task
(RolloutWorker pid=44372)   File "python\ray\_raylet.pyx", line 699, in ray._raylet.execute_task
(RolloutWorker pid=44372)   File "python\ray\_raylet.pyx", line 665, in ray._raylet.execute_task
(RolloutWorker pid=44372)   File "python\ray\_raylet.pyx", line 669, in ray._raylet.execute_task
(RolloutWorker pid=44372)   File "python\ray\_raylet.pyx", line 616, in ray._raylet.execute_task.function_executor
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\_private\function_manager.py", line 675, in actor_method_executor
(RolloutWorker pid=44372)     return method(__ray_actor, *args, **kwargs)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\util\tracing\tracing_helper.py", line 462, in _resume_span
(RolloutWorker pid=44372)     return method(self, *_args, **_kwargs)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\rllib\evaluation\rollout_worker.py", line 511, in __init__
(RolloutWorker pid=44372)     check_env(self.env)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\rllib\utils\pre_checks\env.py", line 78, in check_env
(RolloutWorker pid=44372)     raise ValueError(
(RolloutWorker pid=44372) ValueError: Traceback (most recent call last):
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\rllib\utils\pre_checks\env.py", line 65, in check_env
(RolloutWorker pid=44372)     check_multiagent_environments(env)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\rllib\utils\pre_checks\env.py", line 268, in check_multiagent_environments
(RolloutWorker pid=44372)     next_obs, reward, done, info = env.step(sampled_action)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\lib\site-packages\ray\rllib\env\wrappers\group_agents_wrapper.py", line 76, in step
(RolloutWorker pid=44372)     obs, rewards, dones, infos = self.env.step(action_dict)
(RolloutWorker pid=44372)   File "C:\conda\envs\smac\smac\smac\examples\rllib\env.py", line 82, in step
(RolloutWorker pid=44372)     raise ValueError(
(RolloutWorker pid=44372) ValueError: You must supply an action for agent: 0
```

How can I fix this and get it running?

xiaoToby commented 1 year ago

@richardliaw Hi, I saw that the `def step(self, action_dict)` code was written by you. I'm hitting an error there, mainly around the `action_dict` argument: it arrives empty. Could you help me fix it? Thanks.
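For context, the check that raises this error can be sketched roughly as follows. This is a hypothetical reconstruction from the traceback alone (`smac/examples/rllib/env.py`, line 82), not the actual source; the function and parameter names here are placeholders:

```python
def collect_actions(action_dict, n_agents):
    """Gather one action per agent id, raising the same error the
    grouped SMAC env raises when an agent's action is missing."""
    actions = []
    for agent_id in range(n_agents):
        if agent_id not in action_dict:
            # This is the exception seen in the log above: the env
            # checker sampled an action_dict missing some agent ids.
            raise ValueError(f"You must supply an action for agent: {agent_id}")
        actions.append(action_dict[agent_id])
    return actions
```

This illustrates why an empty `action_dict` fails immediately at agent 0: the env expects one action per agent on every step.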

xiaoToby commented 1 year ago

@samvelyan please help

MichaelXCChen commented 1 year ago

@xiaoToby A bit late, but in case anyone else hits this issue: RLlib runs an environment check before training actually starts, and some environments (SMAC among them) are not implemented to pass this check. Disable the check by setting `disable_env_checking` to `True` in the config dictionary. Also, as has likely been mentioned elsewhere, you need to set `simple_optimizer` to `True` as well.
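For reference, a minimal sketch of the two config entries in question. The key names follow the legacy RLlib config-dict API (roughly Ray 1.13 through early 2.x) and should be verified against your installed version; the env name `grouped_smac` is only a placeholder:

```python
# Sketch of the config entries the fix above refers to (assumptions:
# legacy RLlib config-dict API; "grouped_smac" is a placeholder env name).
qmix_config = {
    "env": "grouped_smac",          # placeholder: your registered grouped env
    "disable_env_checking": True,   # skip RLlib's pre-training environment check
    "simple_optimizer": True,       # also required for QMIX, per the comment above
}
```

In practice this dict would be merged into the full QMIX configuration and passed to the trainer, e.g. via `ray.tune.run("QMIX", config=qmix_config, ...)`.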