-
======DDPG Validation from: 2021-10-04 to 2022-01-03
======Best Model Retraining from: 2010-01-01 to 2022-01-03
======Trading from: 2022-01-03 to 2022-04-04
----------------------------------…
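The log above suggests a walk-forward scheme: validate DDPG on the most recent quarter, retrain the best model on all history up to the trade start, then trade the following quarter. A minimal sketch of generating such rolling windows (the helper names and the exact month arithmetic are my assumptions, not the project's code):

```python
from datetime import date

def shift_months(d, months):
    """Shift a date by a number of months (day clamped to 28 for simplicity)."""
    m = d.month - 1 + months
    return date(d.year + m // 12, m % 12 + 1, min(d.day, 28))

def walk_forward_windows(train_start, first_trade_start, n_windows, step_months=3):
    """Build (validation, retrain, trade) windows for a walk-forward backtest.

    Validation covers the quarter before trading starts; retraining always
    uses the full history from train_start up to the trade start.
    """
    windows = []
    trade_start = first_trade_start
    for _ in range(n_windows):
        val_start = shift_months(trade_start, -step_months)
        trade_end = shift_months(trade_start, step_months)
        windows.append({
            "validation": (val_start, trade_start),
            "retrain": (train_start, trade_start),
            "trade": (trade_start, trade_end),
        })
        trade_start = trade_end  # roll forward one step
    return windows

wins = walk_forward_windows(date(2010, 1, 1), date(2022, 1, 3), 2)
```

The first window then matches the log's periods up to a day of slack (the log's boundaries fall on trading days, which this naive date arithmetic ignores).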
-
**Describe the bug**
When I try to run the script below, I get a TypeError. Could you please advise?
Error Message
**Desktop (please complete the following information)…
-
Thank you so much for the nice work.
I want to implement a gradient-free algorithm in this project and compare the results with the algorithms already in this project, such as actor-critic, DQN, rei…
-
I managed to get most algorithms running on the Sparse regression example, but not the DRLS algorithm. I guess it should be applicable to sparse regression, but what needs to be done with:
drls = Pro…
-
> **Here are the improvements made to the code:**
> **1 - Imported `with_common_config`, `Trainer`, and `COMMON_CONFIG` to make the code cleaner and more concise.**
> **2 - Utilized individual a…
-
In the commit we still have
```python
self.act = act_class(net_dim, state_dim, action_dim).to(self.device)
self.cri = cri_class(net_dim, state_dim, action_dim).to(self.device) \
i…
```
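For context, a minimal sketch of the pattern those lines follow: an agent builds its actor and critic from class handles and moves both onto its device. The network classes and dimensions here are illustrative placeholders, not the project's actual `act_class`/`cri_class`:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Toy deterministic policy: state -> action in [-1, 1]."""
    def __init__(self, net_dim, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, net_dim), nn.ReLU(),
            nn.Linear(net_dim, action_dim),
        )

    def forward(self, state):
        return torch.tanh(self.net(state))

class Critic(nn.Module):
    """Toy Q-network: (state, action) -> scalar value."""
    def __init__(self, net_dim, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, net_dim), nn.ReLU(),
            nn.Linear(net_dim, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat((state, action), dim=1))

class AgentBase:
    def __init__(self, net_dim, state_dim, action_dim,
                 act_class=Actor, cri_class=Critic):
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # The pattern from the snippet: instantiate, then move to the device.
        self.act = act_class(net_dim, state_dim, action_dim).to(self.device)
        self.cri = cri_class(net_dim, state_dim, action_dim).to(self.device)

agent = AgentBase(net_dim=64, state_dim=8, action_dim=2)
state = torch.zeros(1, 8, device=agent.device)
action = agent.act(state)
q_value = agent.cri(state, action)
```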
-
Dear members of the Ray team,
I am working with DRL algorithms using RLlib. I am configuring and testing multiple experiments using the Tune API (`tune.run()`) as well as the different implemented DR…
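When comparing several algorithms this way, one common pattern is to keep a per-algorithm config dict and hand each one to `tune.run()`. A rough sketch under that assumption (the config keys follow the legacy RLlib config-dict style; nothing here is taken from the issue itself, and the dry-run flag is just so the sketch can run without a Ray cluster):

```python
# Hypothetical per-algorithm experiment configs (legacy RLlib config-dict style).
experiments = {
    "PPO": {"env": "CartPole-v1", "framework": "torch", "num_workers": 2},
    "DQN": {"env": "CartPole-v1", "framework": "torch", "num_workers": 2},
}

def launch(run_tune=False):
    """Launch each experiment via tune.run; pass run_tune=True in a real Ray session."""
    if run_tune:
        from ray import tune
        for algo, config in experiments.items():
            tune.run(algo, config=config, stop={"training_iteration": 5})
    return list(experiments)

launch()  # dry run: only returns the algorithm names
```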
-
It's a really nice project, and tactile robotics is a novel topic.
I am the developer of [DI-engine](https://github.com/opendilab/DI-engine), looking for some interesting environments in which to apply adv…
-
### What happened + What you expected to happen
After training Multi-Agent PPO with the new API stack under the guidance of [how-to-use-the-new-api-stack](https://docs.ray.io/en/latest/rllib/rllib-n…
-
I ran ./onpolicy/scripts/train_mpe_scripts/train_mpe_spread.sh after changing 'algo' to mappo and user_name to my wandb username in train_mpe_spread.sh. My train_mpe_spread.sh is as follows:
```text
…