openai/maddpg
Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" (https://arxiv.org/pdf/1706.02275.pdf)
MIT License · 1.65k stars · 494 forks
Issues
#31 Error when setting display to true (njfdiem, opened 5 years ago, 3 comments)
#30 TypeError: set_color() got multiple values for argument 'alpha' in Simple-Crypto (marwanihab, opened 5 years ago, 6 comments)
#29 TypeError: must be str, not NoneType . run train.py (SHYang1210, closed 5 years ago, 2 comments)
#28 Nontype flaw in "train.py", line 182 (DailinH, opened 5 years ago, 3 comments)
#27 Can this algorithm be generalised to work with multiple (60) agents competing against eachother? (alexanderkell, closed 3 years ago, 2 comments)
#26 Cumulative rewards are not promoted when use MADDPG (jhcknzzm, opened 5 years ago, 0 comments)
#25 use tf.layers and add gpu_options.allow_growth=True (GoingMyWay, closed 5 years ago, 0 comments)
#24 update README with repo status (christopherhesse, closed 6 years ago, 0 comments)
#23 The result is not that ideal like the paper showed (Jarvis-K, opened 6 years ago, 3 comments)
#22 Having trouble with import maddpg (ishanivyas, opened 6 years ago, 1 comment)
#21 Please add a description to this repo (clintonyeb, opened 6 years ago, 0 comments)
#20 Calculating Success Rate for Physical Deception (ZishunYu, closed 6 years ago, 1 comment)
#19 Q divergence (rbrigden, opened 6 years ago, 0 comments)
#18 How or why the gaussian distribution contributes to the training? (Chen-Joe-ZY, opened 6 years ago, 4 comments)
#17 Import Errors (murtazarang, closed 3 years ago, 6 comments)
#16 How maddpg update actor? (newbieyxy, closed 6 years ago, 0 comments)
#15 Error in scenario simple_reference with gym.spaces.MultiDiscrete (hcch0912, closed 6 years ago, 1 comment)
#14 displaying agent behaviors on the screen (williamyuanv0, closed 6 years ago, 1 comment)
#13 When I run train.py,it shows "TypeError: Can't convert 'NoneType' object to str implicitly". (seahawkk, closed 6 years ago, 4 comments)
#12 Cannot reproduce experiment results (arbaazkhan2, closed 6 years ago, 3 comments)
#11 The reward and action is nan ? (xuemei-ye, closed 6 years ago, 3 comments)
#10 How can i use it for "simple_world_comm" in MPE? ---- "AssertionError: nvec should be a 1d array (or list) of ints" (zimoqingfeng, closed 6 years ago, 1 comment)
#9 action exploration & Gumbel-Softmax (djbitbyte, opened 6 years ago, 9 comments)
#8 It seems that you don't use "Policy ensembles" and "Inferring policies of other agent" in this code? (pengzhenghao, closed 6 years ago, 2 comments)
#7 It seems that the training is decentralized? (pengzhenghao, closed 6 years ago, 1 comment)
#6 Running train.py doesn't seem to work (suryabhupa, closed 6 years ago, 5 comments)
#5 when I run train.py,it shows "module 'tensorflow' has no attribute 'float32'" (williamyuanv0, closed 6 years ago, 8 comments)
#4 add multiagent-particle-envs to PYTHONPATH (djbitbyte, closed 6 years ago, 4 comments)
#3 fix some miss (wwxFromTju, closed 6 years ago, 0 comments)
#2 rnn_]cell=None is a Syntax Error in both Python 2 and 3 (cclauss, closed 6 years ago, 1 comment)
#1 Remove pycache and add gitignore (himanshub16, closed 6 years ago, 0 comments)