Closed ZhangNYG closed 5 years ago
"Multi-Agent Generative Adversarial Imitation Learning" (ICLR 2018 Workshop) and "Multi-Agent Adversarial Inverse Reinforcement Learning" (ICML 2019) — these two papers wouldn't count as a duplicate submission, would they?
These two papers address different problems; the latter is a follow-up to the former. Algorithmically, the latter goes beyond the former by also learning each agent's reward function. The multi-agent RL settings also differ: the former starts from a Nash equilibrium, while the latter starts from a form of quantal response equilibrium.