nicklashansen / tdmpc
Code for "Temporal Difference Learning for Model Predictive Control"
MIT License · 346 stars · 55 forks
Issues (all closed, newest first)
#19 · [Q]: Manipulator tasks and visualization · Ozzey · closed 6 months ago · 2 comments
#18 · Package for Plots · MianchuWang · closed 1 year ago · 1 comment
#17 · I encountered some problems during training · like2000522 · closed 1 year ago · 2 comments
#16 · Discrepancy between code and implementation regarding the planned action · odelalleau · closed 1 year ago · 2 comments
#15 · How to visualize the evaluation process? · zsn2021 · closed 1 year ago · 1 comment
#14 · [Q]: Termination Prevention Logic · ArashAhmadian · closed 9 months ago · 2 comments
#13 · Intuition behind using zero initialization in critic and reward model last layer · hdadong · closed 1 year ago · 1 comment
#12 · Implementation in Openai Atari gym · zrbak · closed 1 year ago · 1 comment
#11 · change obs_shp from numpy float to int · chamorajg · closed 1 year ago · 0 comments
#10 · Multimodal data as input to the model · SergioArnaud · closed 1 year ago · 1 comment
#9 · Why don't have Meta-World task in the task.txt? · 945716994 · closed 1 year ago · 1 comment
#8 · the setting of random seed · Arya87 · closed 1 year ago · 1 comment
#7 · Why I can't save the video? · Bailey-24 · closed 2 years ago · 1 comment
#6 · Question about base.device · pickxiguapi · closed 2 years ago · 1 comment
#5 · Intuition behind using LayerNorm in Q function? · mch5048 · closed 2 years ago · 2 comments
#4 · Typo in the arxived paper and some question on the notation. · mch5048 · closed 2 years ago · 1 comment
#3 · A Question about mpc and td-mpc · wenzhoulyu · closed 2 years ago · 4 comments
#2 · Other envs · wenzhoulyu · closed 2 years ago · 2 comments
#1 · how to adapt to discrete action space · Howuhh · closed 2 years ago · 1 comment