jerrodparker20 / adaptive-transformers-in-rl
Adaptive Attention Span for Reinforcement Learning
130 stars, 14 forks
Issues
#19 Error reproducing results [doltonfernandes, opened 2 years ago, 0 comments]
#18 Isn't param "--use_gate" important for Pong? [weihongwei0586, opened 3 years ago, 0 comments]
#17 Regarding logic for first done indexes [victor-psiori, opened 3 years ago, 1 comment]
#16 Stable Transformer on Pong [furmans, opened 3 years ago, 2 comments]
#15 Is this algorithm suitable for off-policy policy? [dbsxdbsx, opened 3 years ago, 1 comment]
#14 Could you share a pre-trained model of 'Pong'? [alimai, opened 3 years ago, 0 comments]
#13 Get Steps 0 @ 0.0 SPS. Loss inf. Stats [ghost, closed 3 years ago, 8 comments]
#12 train.py on colab, using snippets [yhg8423, closed 4 years ago, 4 comments]
#11 Gcp debug [shaktikshri, closed 4 years ago, 0 comments]
#10 Gcp debug adaptive [shaktikshri, closed 4 years ago, 0 comments]
#9 Gcp debug adaptive [shaktikshri, closed 4 years ago, 0 comments]
#8 Gcp debug [shaktikshri, closed 4 years ago, 0 comments]
#7 Gcp debug adaptive [shaktikshri, closed 4 years ago, 0 comments]
#6 Gcp debug [shaktikshri, closed 4 years ago, 0 comments]
#5 Reordered initialization of last_n_episode returns (was defined after… [shaktikshri, closed 4 years ago, 0 comments]
#4 Gcp debug [shaktikshri, closed 4 years ago, 0 comments]
#3 Dmlab30 [shaktikshri, closed 4 years ago, 0 comments]
#2 Dynamic batching [shaktikshri, closed 4 years ago, 0 comments]
#1 Find bug [shaktikshri, closed 4 years ago, 0 comments]