MarcoMeter / recurrent-ppo-truncated-bptt · Issues
Baseline implementation of recurrent PPO using truncated BPTT
MIT License · 123 stars · 16 forks
#17  Question Regarding Sequence Length (Davide236) · closed 1 month ago · 3 comments
#16  update the numpy version to avoid errors (RobbenRibery) · opened 6 months ago · 0 comments
#15  Fixed sequence length in recurrent training does not add unnecessary … (MarcoMeter) · closed 1 year ago · 0 comments
#14  Pre-trained Models Do Not Work (WilliamYue37) · opened 1 year ago · 7 comments
#13  How to fix the problem with "Segmentation fault (core dumped)" (jiashuncheng) · closed 1 year ago · 0 comments
#12  Merge develop into main (MarcoMeter) · closed 1 year ago · 0 comments
#11  about sequence_length (xixiha5230) · closed 1 year ago · 2 comments
#10  Masked mean for advantage normalization? (finnBsch) · closed 1 year ago · 8 comments
#9   "RuntimeError: expected scalar type Double but found Float" when running enjoy.py with ./models/cartpole_masked.nn (jialuyu61) · closed 1 year ago · 4 comments
#8   Can this repo train continuous environments? (1900360) · closed 1 year ago · 7 comments
#7   Suggestions for training on multiple environments simultaneously? (fedshyvana) · closed 2 years ago · 3 comments
#6   Possibility to reference the implementation (RobvanGastel) · closed 2 years ago · 6 comments
#5   Calculation of the Generalized Advantage Estimation (RobvanGastel) · closed 3 years ago · 5 comments
#4   Code is more efficient now (MarcoMeter) · closed 3 years ago · 0 comments
#3   Adapting the repo to my specific problem (VVIERV00) · closed 3 years ago · 8 comments
#2   Merge Develop to initially publish this Repo (MarcoMeter) · closed 3 years ago · 0 comments
#1   Temp (MarcoMeter) · closed 3 years ago · 0 comments