openai / maddpg

Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
https://arxiv.org/pdf/1706.02275.pdf
MIT License

When I run train.py, it shows "TypeError: Can't convert 'NoneType' object to str implicitly". #13

Closed seahawkk closed 6 years ago

seahawkk commented 6 years ago

I have configured MPE and MADDPG, but an error occurred while running train.py (`python train.py --scenario simple`). I can see the program's output:

steps: 1374975, episodes: 55000, mean episode reward: -6.977302725370943, time: 14.844
steps: 1399975, episodes: 56000, mean episode reward: -6.743868053210359, time: 14.563
steps: 1424975, episodes: 57000, mean episode reward: -6.622564807563306, time: 14.43
steps: 1449975, episodes: 58000, mean episode reward: -6.21897592491655, time: 14.479
steps: 1474975, episodes: 59000, mean episode reward: -6.874195291324205, time: 14.642
steps: 1499975, episodes: 60000, mean episode reward: -6.769229165363719, time: 14.58
Traceback (most recent call last):
  File "train.py", line 195, in <module>
    train(arglist)
  File "train.py", line 184, in train
    rew_file_name = arglist.plots_dir + arglist.exp_name + '_rewards.pkl'
TypeError: Can't convert 'NoneType' object to str implicitly

I don't know where the problem is. I look forward to your reply. Thanks!
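The traceback points at string concatenation: `arglist.exp_name` defaults to `None`, and Python 3 refuses to concatenate `None` with a string. A minimal sketch of the failure (the variable values here are assumptions mirroring the defaults discussed in this thread, not the repo's actual code):

```python
# Sketch: why train.py crashes when --exp-name is left at its default.
plots_dir = "./learning_curves/"  # assumed default of --plots-dir
exp_name = None                   # default of --exp-name when not passed

try:
    # This mirrors the failing line in train.py's train() function.
    rew_file_name = plots_dir + exp_name + "_rewards.pkl"
except TypeError as e:
    # Older Python 3 releases word this as
    # "Can't convert 'NoneType' object to str implicitly".
    print("TypeError:", e)
```

The training itself completes; the crash only happens at the end, when the reward history is about to be pickled to disk.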

JinTanda commented 6 years ago

I had the same problem. You have to specify where to save the results in the `parse_args` method, because the default setting is `None`.

SHYang1210 commented 5 years ago

How do you determine the location of the saved results? I always get it wrong.

LQHEstelle commented 5 years ago

How do you determine the location of the saved results?

zhinengshidai commented 5 years ago

I solved it by changing `parser.add_argument("--exp-name", type=str, default=None, help="name of the experiment")` to `parser.add_argument("--exp-name", type=str, default="XXX", help="name of the experiment")`.
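An alternative to hard-coding a default in `parse_args` is to guard the path construction itself with a fallback, so the script never crashes on a missing name. A sketch (the fallback name is arbitrary, not from the repo):

```python
import argparse

# Sketch: fall back to a placeholder experiment name instead of crashing.
parser = argparse.ArgumentParser()
parser.add_argument("--exp-name", type=str, default=None)
parser.add_argument("--plots-dir", type=str, default="./learning_curves/")
arglist = parser.parse_args([])  # simulate running with no flags

# `or` substitutes the fallback when exp_name is None (or empty).
exp_name = arglist.exp_name or "maddpg_experiment"  # hypothetical fallback
rew_file_name = arglist.plots_dir + exp_name + "_rewards.pkl"
print(rew_file_name)
```

Either approach works; the fallback has the advantage that forgetting the flag still produces a usable rewards file instead of losing the run's results at the last step.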