ikostrikov / implicit_q_learning
MIT License · 226 stars · 38 forks
Issues
#11 Code for Behavior cloning policy · return-sleep · opened 11 months ago · 0 comments
#10 Why use a positive learning rate in finetuning? · QinwenLuo · opened 1 year ago · 0 comments
#9 The log_prob is not corrected · typoverflow · closed 1 year ago · 1 comment
#8 missing dextrous env · Div99 · opened 2 years ago · 0 comments
#7 A question about the toy umaze environment in Figure 2? · fuyw · closed 2 years ago · 2 comments
#6 A small problem · fuyw · closed 2 years ago · 0 comments
#5 A question about the `sample_actions()` · fuyw · closed 2 years ago · 3 comments
#4 Potential issue in scaling rewards in train_finetune.py. · ethanluoyc · closed 2 years ago · 2 comments
#3 conflicting dependencies between optax and jaxlib · enosair · closed 2 years ago · 6 comments
#2 Add finetuning experiment. · anair13 · closed 2 years ago · 0 comments
#1 Finetuning experiment · anair13 · closed 3 years ago · 0 comments