-
Dear Takuma Seno,
Thanks very much for your great work on d3rlpy.
Currently I'm working on an academic research project on medical offline deep reinforcement learning.
I want to reshape the original o…
-
### 📚 Documentation
I've noticed that your RL framework offers lots of nice tools; nothing is really missing. But the thing is, the project isn't particularly good at giving a newcomer-friendly quickstar…
-
thx.
-
### Question
Hi,
Does anyone know where we can access the ground-truth dynamics model for the MuJoCo environments? For cart-pole, for example, it seems the dynamics equations are not given in any fi…
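To make it concrete, this is the kind of thing I'm after (a rough sketch, assuming gymnasium with the mujoco extra installed; for the classic-control CartPole the equations actually live in the Python source, while the MuJoCo tasks define dynamics in their MJCF/XML models):

```python
import inspect
import gymnasium as gym

# CartPole-v1 is a classic-control env: its physical constants are plain
# attributes on the unwrapped env and the equations of motion are in step().
cartpole = gym.make("CartPole-v1").unwrapped
print(cartpole.gravity, cartpole.masscart, cartpole.masspole,
      cartpole.length, cartpole.force_mag, cartpole.tau)
print(inspect.getsource(type(cartpole).step))   # the analytic dynamics

# For true MuJoCo tasks the dynamics come from the compiled MJCF model,
# exposed via the underlying mujoco model object (requires gymnasium[mujoco]).
halfcheetah = gym.make("HalfCheetah-v4").unwrapped
print(halfcheetah.model.nq, halfcheetah.model.nv)  # generalized coords / velocities
```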
-
### What happened + What you expected to happen
Hello Ray RLlib Team,
We've recently concluded our [4-week Deep RL Bootcamp](https://github.com/stresearch/STR_DeepRL) and have been using Ray RLlib…
-
Hi,
I just found this library and it looks quite promising indeed. Since it's my current tinkering platform of choice, I was wondering whether support for **Raspberry Pi Pico / RP2040** (read: inc…
-
I want to implement a small feature: evaluating a formula or equation step by step. But I looked through the documentation and didn't see any code examples, so I didn't know whe…
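To illustrate what I mean, here is a rough stand-alone sketch using Python's `ast` module (it only shows the behaviour I'm after and is not tied to this library):

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow}
SYMBOLS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/", ast.Pow: "**"}

def evaluate_stepwise(formula):
    """Evaluate an arithmetic formula and record every intermediate operation."""
    steps = []

    def reduce_node(node):
        if isinstance(node, ast.Constant):      # a literal number
            return node.value
        if isinstance(node, ast.BinOp):         # e.g. a + b, a * b, a ** b
            left = reduce_node(node.left)
            right = reduce_node(node.right)
            value = OPS[type(node.op)](left, right)
            steps.append(f"{left} {SYMBOLS[type(node.op)]} {right} = {value}")
            return value
        raise ValueError(f"unsupported syntax: {ast.dump(node)}")

    result = reduce_node(ast.parse(formula, mode="eval").body)
    return result, steps

result, steps = evaluate_stepwise("(2 + 1)**2 + 3*2")
for line in steps:
    print(line)        # 2 + 1 = 3, then 3 ** 2 = 9, then 3 * 2 = 6, then 9 + 6 = 15
print("result:", result)
```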
-
Hi, I have made an opensim environment of type `gymnasium.Env` and I am trying to pickle it so that I can use multiprocessing with the stable-baselines3 RL algorithms.
Stable-baselines3 requires …
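For reference, here is the factory-based pattern I'm trying to move to (a sketch only; CartPole stands in for my opensim env, and `MyOpenSimEnv` in the comment is just a placeholder name). `SubprocVecEnv` pickles the env *constructors* via cloudpickle, so the heavy opensim handles are only ever created inside each worker:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.vec_env import SubprocVecEnv

def make_env(rank: int):
    def _init():
        # Heavy, unpicklable objects (the opensim model, visualizer, etc.) are
        # created *here*, inside the worker process, so nothing opensim-specific
        # has to be pickled. Replace the line below with your own constructor,
        # e.g. MyOpenSimEnv(model_path=..., seed=rank) -- a placeholder name.
        env = gym.make("CartPole-v1")
        return Monitor(env)
    return _init

if __name__ == "__main__":
    vec_env = SubprocVecEnv([make_env(i) for i in range(4)])
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=10_000)
```

If pickling still fails because of attributes stored on the env instance itself, I assume implementing `__getstate__`/`__setstate__` to drop and rebuild the opensim handles would be the other option.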
-
Hello, I would like to use these algorithms on a reinforcement learning environment I've built, but I haven't found the part about generating expert trajectories. If you have the time, please guide me…
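To be concrete, is something like the following the right idea? (A rough, generic sketch: CartPole and the random policy are placeholders for my own env and a real expert, and the `.npz` layout is just a guess, not necessarily the format these algorithms expect.)

```python
import numpy as np
import gymnasium as gym

env = gym.make("CartPole-v1")           # stand-in for my custom environment

def expert_policy(obs):
    return env.action_space.sample()    # placeholder: replace with a trained/scripted expert

observations, actions, rewards, dones = [], [], [], []
for episode in range(10):
    obs, _ = env.reset()
    done = False
    while not done:
        action = expert_policy(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        observations.append(obs)
        actions.append(action)
        rewards.append(reward)
        dones.append(done)
        obs = next_obs

np.savez("expert_trajectories.npz",
         observations=np.array(observations),
         actions=np.array(actions),
         rewards=np.array(rewards),
         dones=np.array(dones))
```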
-
Hello everyone,
When I request a long route in the Germany graph, for example, and then try to do a trace_attributes request on the returned polyline, I get the error that the edge_walk is not possib…
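For context, this is roughly the flow I'm testing (a sketch against a local Valhalla instance on port 8002, with shape_match switched from "edge_walk" to "map_snap" as an experiment; the coordinates are just an example Berlin-to-Munich route):

```python
import requests

VALHALLA = "http://localhost:8002"

# 1) Request a long route through the Germany graph.
route = requests.post(f"{VALHALLA}/route", json={
    "locations": [{"lat": 52.5200, "lon": 13.4050},   # Berlin
                  {"lat": 48.1351, "lon": 11.5820}],  # Munich
    "costing": "auto",
}).json()
encoded_shape = route["trip"]["legs"][0]["shape"]

# 2) Feed the returned encoded polyline to trace_attributes. "edge_walk"
#    requires the shape to follow graph edges exactly; "map_snap" map-matches.
attrs = requests.post(f"{VALHALLA}/trace_attributes", json={
    "encoded_polyline": encoded_shape,
    "costing": "auto",
    "shape_match": "map_snap",
}).json()
print(len(attrs.get("edges", [])), "matched edges")
```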