-
Running FinRL_MultiCrypto_Trading.py raises an error; please help fix it.
```bash
binance successfully connected
tech_indicator_list: ['macd', 'rsi', 'cci', 'dx']
indicator: macd
indicator: rsi
indicator…
-
Traceback (most recent call last):
File "C:\***\marco\Desktop\Stock_invester\model_trainer.py", line 14, in
from finrl.agents.elegantrl.models import DRLAgent as DRLAgent_erl
File "C:\User…
-
I'm trying to use a DDPG agent with actor and critic networks, and a TFUniform replay buffer, training on my custom environment.
I've extracted a training experience from the buffer using:
```
da…
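# (The snippet above is cut off. Below is a minimal sketch of the usual
# TF-Agents pattern for pulling training experience out of a
# TFUniformReplayBuffer and feeding it to a DDPG agent; `replay_buffer`,
# `agent`, and `batch_size` are assumed to already exist in the script.)
dataset = replay_buffer.as_dataset(
    num_parallel_calls=3,
    sample_batch_size=batch_size,
    num_steps=2,  # DDPG trains on (t, t+1) transitions
).prefetch(3)
iterator = iter(dataset)

experience, unused_info = next(iterator)   # experience is a batched Trajectory
train_loss = agent.train(experience).loss  # agent.train consumes that Trajectory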
-
Hi, I have two questions about data generation in virtualhome:
1. I have generated the goals for a domain using `vh/data_gene/gen_data/vh_init.py`, and the resulting pickle consists of task_id, task…
-
In case anyone else is facing this same issue: I tried reinstalling MetaDrive to the latest version, but the latest release requires dependencies that are incompatible with older Python versions.
```
Failur…
-
# Why
#### As a
user of PyCMO
#### I want
to train an RL agent to solve the floridistan scenario using the PyCMO Gym environment provided in update 1.4.0
#### So that
I finally have RL agents in CMO…
-
I wanted to run FinRL_PortfolioOptimizationEnv_Demo while changing the data source to the CAC40 [here:](https://colab.research.google.com/drive/10VxfXrOHfmH2h9aP7nq9zTSvuriKgRkf#scrollTo=aSKpWFzV-DUB)
…
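A minimal sketch of what swapping the data source might look like, assuming the FinRL version used in the notebook still exposes `YahooDownloader` under `finrl.meta.preprocessor.yahoodownloader`; the ticker list is only an illustrative subset of the index and the date range is a placeholder:
```python
from finrl.meta.preprocessor.yahoodownloader import YahooDownloader

# Illustrative subset of CAC40 constituents (Euronext Paris suffix ".PA").
CAC40_SAMPLE = ["AIR.PA", "BNP.PA", "MC.PA", "OR.PA", "SAN.PA", "TTE.PA"]

raw_df = YahooDownloader(
    start_date="2015-01-01",  # placeholder dates
    end_date="2023-12-31",
    ticker_list=CAC40_SAMPLE,
).fetch_data()

# fetch_data() returns a long-format DataFrame (one row per date/tic),
# which is the shape the portfolio-optimization preprocessing expects.
print(raw_df.head())
```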
-
Since there has not been any update for newer ML-Agents and Python versions, I decided to open this issue and ask some questions.
When I tried to train my environment in ML-Agents using rays and a camera as well, I found…
-
Support an interactive mode in the grid world where the user can use the keyboard to move their agent around, battle other agents, and so on; a rough sketch of the keyboard-control loop is included after the gameplan below.
Gameplan:
- [ ] Create trainer that does not train sinc…
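A rough sketch of what that keyboard-control loop might look like, assuming a Gym/Gymnasium-style grid-world environment with discrete actions; the env id `GridWorld-v0` and the WASD-to-action mapping below are placeholders, not anything that exists in the repo yet:
```python
import gymnasium as gym

# Hypothetical key-to-action mapping; adjust to the grid world's action space.
KEY_TO_ACTION = {"w": 0, "s": 1, "a": 2, "d": 3}

def play(env_id: str = "GridWorld-v0") -> None:
    """Drive a single agent with keyboard input instead of a trained policy."""
    env = gym.make(env_id, render_mode="human")
    obs, info = env.reset()
    done = False
    while not done:
        key = input("move (w/a/s/d, q to quit): ").strip().lower()
        if key == "q":
            break
        if key not in KEY_TO_ACTION:
            continue  # ignore unmapped keys
        obs, reward, terminated, truncated, info = env.step(KEY_TO_ACTION[key])
        done = terminated or truncated
    env.close()

if __name__ == "__main__":
    play()
```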
-
Hello,
Thanks for the great package!
I'd like to do multi-GPU parallel sweeps. I have 4 GPUs and I'd like to run a sweep over, say, 16 configs. I have this code:
```python
wandb.require("core"…