ermongroup / MA-AIRL

Multi-Agent Adversarial Inverse Reinforcement Learning, ICML 2019.

Error running run_mack_gail.py in irl #4

Open Cakhavan opened 3 years ago

Cakhavan commented 3 years ago

I ran run_simple.py on the simple environment. Then I created a pickle of the expert trajectories by running render.py with this modification at the very end:

```python
with open(osp.join('/Users/.../MA-AIRL/multi-agent-irl/sandbox/mack/data/exps/mack/simple/l-0.1-b-1000/seed-1/', 'final.pkl'), 'wb') as fh:
    fh.write(pkl.dumps(sample_trajs))
```
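For reference, `fh.write(pkl.dumps(...))` is equivalent to `pkl.dump(...)`, and a round-trip with dummy data (standing in here for the real `sample_trajs`) loads back cleanly, so the dump itself seems fine:

```python
import pickle as pkl

# Dummy trajectories standing in for sample_trajs (hypothetical structure)
sample_trajs = [{'ob': [0.0, 1.0], 'ac': [1]}]

# Equivalent to fh.write(pkl.dumps(sample_trajs))
with open('final.pkl', 'wb') as fh:
    pkl.dump(sample_trajs, fh)

# Load it back and verify the round trip
with open('final.pkl', 'rb') as fh:
    loaded = pkl.load(fh)

assert loaded == sample_trajs
```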

Lastly, I tried to run both run_mack_gail.py and run_mack_airl.py and got the error below. I'm not sure how to fix it. I noticed there was only one agent when printing self.num_agents, so I removed the for loop and defined the variable directly at gail.py line 252:

```python
a_v = multionehot(av[0], self.n_actions[0])
```

But I still keep getting a bunch of other errors. Any thoughts on this?

Here is the error I get when running run_mack_gail.py and run_mack_airl.py:

```
Traceback (most recent call last):
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/.../MA-AIRL/multi-agent-irl/irl/mack/run_mack_airl.py", line 77, in <module>
    main()
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/anaconda3/envs/rllab_test/lib/python3.5/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/Users/.../MA-AIRL/multi-agent-irl/irl/mack/run_mack_airl.py", line 73, in main
    rew_scale=rew_scale)
  File "/Users/.../MA-AIRL/multi-agent-irl/irl/mack/run_mack_airl.py", line 42, in train
    rew_scale=rew_scale)
  File "/Users/.../MA-AIRL/multi-agent-irl/irl/mack/airl.py", line 538, in learn
    mh_actions, mh_all_actions, mh_rewards, mh_true_rewards, mh_true_returns = runner.run()
  File "/Users/.../MA-AIRL/multi-agent-irl/irl/mack/airl.py", line 347, in run
    actions, values, states = self.model.step(self.obs, self.actions)
  File "/Users/.../EE556/FinalProj/MA-AIRL/multi-agent-irl/irl/mack/airl.py", line 276, in step
    for i in range(num_agents) if i != k], axis=1)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
```
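If it helps, the failure reproduces in isolation: when num_agents == 1, the comprehension in step() that gathers the *other* agents' arrays excludes the only agent there is, so np.concatenate receives an empty list. This is a minimal sketch of the pattern, not the repo's exact code:

```python
import numpy as np

num_agents = 1  # only one agent in the simple environment
k = 0           # current agent index
obs = [np.zeros((5, 3))]

# Gathers observations of all agents other than k -- empty when num_agents == 1
arrays = [obs[i] for i in range(num_agents) if i != k]

try:
    np.concatenate(arrays, axis=1)
    msg = None
except ValueError as e:
    msg = str(e)

print(msg)  # need at least one array to concatenate
```

So the concatenate call seems to assume at least two agents; a single-agent run would need a guard around it (or a multi-agent scenario).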