cyanrain7/TRPO-in-MARL
MIT License · 186 stars · 49 forks

Issues
| # | Title | Author | State | Age | Comments |
|-----|-------|--------|--------|------|----------|
| #20 | dependency issue | sheldon123z | open | 1 year ago | 1 |
| #19 | Questions about visualization | elvira-maverick | open | 1 year ago | 0 |
| #18 | Question about HAPPO performance in StarCraftII | zhang275 | open | 1 year ago | 0 |
| #17 | what to do with a dead agent | xialuanshi | open | 2 years ago | 1 |
| #16 | add rnn in hatrpo | JunjunjunHJ | closed | 2 years ago | 0 |
| #15 | Confused about the results of IPPO and MAPPO. | guojm14 | closed | 2 years ago | 5 |
| #14 | The script code runs wrong when applying the HATRPO algorithm with the rnn network. | junjunjun-learner | closed | 2 years ago | 1 |
| #13 | The | junjunjun-learner | closed | 2 years ago | 0 |
| #12 | Do you have PyMARL implementation? | GoingMyWay | open | 2 years ago | 1 |
| #11 | Question about observation and state in multi-agent mujoco tasks | maxiao94 | closed | 2 years ago | 1 |
| #10 | Some questions about HAPPO implementation | sachiel321 | closed | 2 years ago | 2 |
| #9 | conflicting dependencies and distribution of some packages not found | NessrineTrabelsi | closed | 2 years ago | 1 |
| #8 | I have some questions about the adjustment of experiment parameters. | chillybird | closed | 2 years ago | 3 |
| #7 | I found a bug in file 'utils/util.py'. If we use a discrete action space in 'runners\separated\mujoco_runner.py' and store its transition in the buffer, we get a bug, because act_shape is a constant value. | chillybird | closed | 2 years ago | 2 |
| #6 | The question about critic loss | rayLrayL | closed | 2 years ago | 0 |
| #5 | gym error | wanghui589 | closed | 2 years ago | 0 |
| #4 | muti_env_error | wanghui589 | closed | 2 years ago | 4 |
| #3 | I found that the action value exceeds the limit | ollehhello | closed | 2 years ago | 1 |
| #2 | How do you use global information and local information in multi-agent mujoco? | Weiyuhong-1998 | closed | 2 years ago | 1 |
| #1 | About the number of Critic Networks | Sunrisulfr | closed | 2 years ago | 3 |