-
Dyna-Q is a conceptual algorithm that illustrates how real and simulated experience can be combined in building a policy. Planning in RL terminology refers to using simulated experience generated by a…
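The combination of real and simulated experience described above can be sketched in a few lines. This is a minimal tabular Dyna-Q illustration, not code from the post: the environment, function names, and hyperparameters are all assumptions. After each real step the agent updates a learned model and then replays `n_planning` simulated transitions from it, and that replay loop is the "planning" part.

```python
import random
from collections import defaultdict

def dyna_q(env_step, start_state, n_episodes=50, n_planning=10,
           alpha=0.1, gamma=0.95, eps=0.1, actions=(0, 1)):
    """Minimal tabular Dyna-Q sketch (assumed interface, not from the post)."""
    Q = defaultdict(float)   # Q[(state, action)]
    model = {}               # model[(s, a)] = (reward, next_state), deterministic

    for _ in range(n_episodes):
        s, done = start_state, False
        while not done:
            # eps-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])

            # real experience: one step in the actual environment
            r, s2, done = env_step(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            model[(s, a)] = (r, s2)   # learn the model from real experience

            # planning: n updates from simulated experience drawn from the model
            for _ in range(n_planning):
                ps, pa = random.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in actions)
                                        - Q[(ps, pa)])
            s = s2
    return Q

# Toy 3-state chain (an assumed example environment): action 1 moves right,
# reaching state 2 yields reward 1 and ends the episode.
def chain_step(s, a):
    s2 = min(s + 1, 2) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == 2 else 0.0), s2, s2 == 2

random.seed(0)  # seed only so the sketch is reproducible
Q = dyna_q(chain_step, start_state=0)
```

Because the planning loop reuses the same Q-learning update on stored transitions, increasing `n_planning` extracts more value from each real step without extra environment interaction.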
-
Hi,
I am struggling with a recurring error in the abundance model I am trying to run on my university's HPC, as part of a hurdle approach. The model runs fine for the presence-absence …
-
Hey, I found out that the default latent space shape for DMC Proprio is 1024, which is much bigger than the shape of the observations. Can you explain why?
-
# Describe the bug
Problem with Unit 3: Deep Q-Learning with Atari Games 👾 using RL Baselines3 Zoo
Hello, I have an issue with pushing the model to the Hub.
I execute the line:
`!python -m rl_zoo3.…
-
Hello, how long does this take to train? Why is the robot still spinning in the same place after I have trained it for about a day? I hope to get your reply.
-
Hello, I have a question about this code. Recently I have been working on a project about model-based RL, specifically learning-based MPC, and I want to know whether you have a paper or anything else about this …
-
Hi! I am using a custom model visualization/simulation framework based on C# WPF Viewport3D. I would like to ask how I can retrieve the state of each joint transformation so that I can …
-
The idea is two actions (L vs. R), where each action has a different probability of reward (p1 = 0.8 and p2 = 0.2), and those probabilities flip every 5 trials.
This is almost the same as Scha…
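The task described above can be sketched as follows. This is a hypothetical reconstruction, not the poster's code: the simple eps-greedy learner with an incremental value update, and all names and parameter values, are assumptions made for illustration.

```python
import random

def run_bandit(n_trials=100, flip_every=5, eps=0.1, alpha=0.2, seed=0):
    """Two-armed bandit (L vs. R) whose reward probabilities swap every
    `flip_every` trials, learned with eps-greedy action selection.
    (Assumed sketch; parameter choices are illustrative.)"""
    rng = random.Random(seed)
    p = {"L": 0.8, "R": 0.2}   # current reward probabilities
    q = {"L": 0.0, "R": 0.0}   # running value estimates per action
    rewards = []
    for t in range(n_trials):
        if t > 0 and t % flip_every == 0:
            p["L"], p["R"] = p["R"], p["L"]   # probabilities flip every 5 trials
        # eps-greedy choice over the two actions
        a = rng.choice(["L", "R"]) if rng.random() < eps else max(q, key=q.get)
        r = 1.0 if rng.random() < p[a] else 0.0
        q[a] += alpha * (r - q[a])   # incremental (constant step-size) update
        rewards.append(r)
    return rewards, q

rewards, q = run_bandit()
```

A constant step size `alpha` (rather than a sample average) keeps the estimates tracking the reversals, which is what makes this nonstationary variant interesting.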
-
**Is your feature request related to a problem? Please describe.**
No, the feature is about enabling ML models to work with Dojo.
**Describe the solution you'd like**
The idea is to integrate ver…
-
No issue. just incredible work. Amazing code too, kinda messy and hard to interpret but that's because it's extremely general and efficient.
Great work!!!!!!!