-
C:\Users\hp\.conda\envs\py39\python.exe D:\DaiMa\DDPG-RIS-MADDPG-POWER-main\marl_test.py
------------- lanes are -------------
up_lanes : [200.875, 202.625, 400.875, 402.625]
down_lanes : [197.375…
-
Per https://github.com/lefnire/tforce_btc_trader/issues/6#issuecomment-364179764, I'd like to try the DDPG RL agent (compared to the PPO agent). DDPG hypers will need to be added to hypersearch, and likel…
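Not the project's actual search space, but as a sketch of what adding DDPG hyperparameters to a random hypersearch might look like (all names and ranges below are assumptions, not tforce_btc_trader's real config):

```python
import random

# Illustrative DDPG hyperparameter space for a random search.
# Every key and range here is a guess, not the project's real space.
DDPG_SPACE = {
    "actor_lr":    lambda: 10 ** random.uniform(-5, -3),   # log-uniform learning rates
    "critic_lr":   lambda: 10 ** random.uniform(-4, -2),
    "gamma":       lambda: random.uniform(0.95, 0.999),    # discount factor
    "tau":         lambda: 10 ** random.uniform(-3, -1),   # soft target-update rate
    "batch_size":  lambda: random.choice([64, 128, 256]),
    "noise_sigma": lambda: random.uniform(0.05, 0.3),      # exploration noise scale
}

def sample_hypers():
    """Draw one candidate DDPG configuration from the space."""
    return {name: draw() for name, draw in DDPG_SPACE.items()}
```

Each trial of the search would then call `sample_hypers()` and train a DDPG agent with the resulting dictionary.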
-
------------- lanes are -------------
up_lanes : [200.875, 202.625, 400.875, 402.625]
down_lanes : [197.375, 199.125, 397.375, 399.125]
left_lanes : [200.875, 202.625, 400.875, 402.625]
right_lane…
-
Can someone please tell me how to save and load a model in the DDPG implementation?
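Not an answer from the maintainers of that particular implementation, but in a typical PyTorch DDPG codebase checkpointing means saving the actor/critic (and optionally their optimizer) state dicts. A minimal sketch with illustrative network shapes:

```python
import torch
import torch.nn as nn

# Stand-in actor/critic networks; dimensions are illustrative only.
actor = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh())
critic = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def save_checkpoint(path):
    # Bundle all state dicts into one file.
    torch.save({
        "actor": actor.state_dict(),
        "critic": critic.state_dict(),
        "actor_opt": actor_opt.state_dict(),
        "critic_opt": critic_opt.state_dict(),
    }, path)

def load_checkpoint(path):
    # Restore networks and optimizers in place.
    ckpt = torch.load(path)
    actor.load_state_dict(ckpt["actor"])
    critic.load_state_dict(ckpt["critic"])
    actor_opt.load_state_dict(ckpt["actor_opt"])
    critic_opt.load_state_dict(ckpt["critic_opt"])

save_checkpoint("ddpg_ckpt.pt")
load_checkpoint("ddpg_ckpt.pt")
```

Saving the optimizer states as well lets training resume without resetting Adam's moment estimates.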
-
Hi, may I ask a question about pretraining fb_ddpg?
I installed the whole package.
Then I ran pretrain.py to train the fb_ddpg agent on the walker_walk task without a goal space:
python pretrain.py agent=…
-
Currently, there is a working multi-agent PPO implementation here:
https://github.com/matteobettini/rl/blob/mappo_ippo/examples/multiagent/mappo_ippo.py
and a working single-agent DDPG impl…
-
I tried to configure TensorFlow on macOS and Windows 11, but it ran with errors:
```
2024-05-21 11:36:56.169279: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly …
```
-
Hi all,
I want to apply reinforcement learning with multiple agents, specifically the PPO, TRPO, DDPG, and A2C algorithms. I don't understand how to write a Carla environment for these algorithms. Is any …
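Not Carla-specific, but most PPO/TRPO/DDPG/A2C implementations only require an environment exposing `reset()`/`step()`. A skeleton of that interface for multiple agents, with every Carla detail left as a placeholder (all class and parameter names here are made up):

```python
import numpy as np

class MultiAgentCarlaStub:
    """Gym-style multi-agent environment skeleton.

    A real wrapper would replace the placeholder bodies with Carla
    client calls (spawning vehicles, reading sensors, applying controls).
    """
    def __init__(self, n_agents=2, obs_dim=4, act_dim=2):
        self.n_agents, self.obs_dim, self.act_dim = n_agents, obs_dim, act_dim

    def reset(self):
        # One observation vector per agent at episode start.
        return [np.zeros(self.obs_dim, dtype=np.float32)
                for _ in range(self.n_agents)]

    def step(self, actions):
        # actions: one action vector per agent.
        assert len(actions) == self.n_agents
        obs = [np.random.randn(self.obs_dim).astype(np.float32)
               for _ in range(self.n_agents)]      # placeholder observations
        rewards = [0.0] * self.n_agents            # placeholder rewards
        done = False                               # placeholder termination
        info = {}
        return obs, rewards, done, info
```

With this interface in place, the same environment object can be driven by any of the four algorithms; the per-agent lists are where single-agent and multi-agent training loops differ.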
-
I want to train an agent using DDPG with low-dim input and use it as a teacher to train an imitation learning agent. However, after training DDPG, when I set vision=True, I found DDPG cannot perform w…
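For context, the teacher-student setup described above is essentially behavior cloning: roll out the trained DDPG actor to label observations with actions, then regress the student onto those labels. A toy sketch with a stand-in linear "teacher" (everything here is illustrative, not the issue's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)

def teacher_policy(obs):
    """Stand-in for a trained low-dim DDPG actor (tanh of a linear map)."""
    return np.tanh(obs @ np.array([0.5, -0.3]))

# 1. Roll out the teacher to collect (observation, action) pairs.
obs = rng.standard_normal((256, 2))
acts = teacher_policy(obs)

# 2. Fit a student on the teacher's actions (least-squares behavior cloning,
#    inverting the tanh so the regression target is linear in obs).
targets = np.arctanh(np.clip(acts, -0.999, 0.999))
W, *_ = np.linalg.lstsq(obs, targets, rcond=None)
student = lambda o: np.tanh(o @ W)
```

In the vision=True setting the student would instead take images as input while the teacher still labels from low-dim state, which is the usual way such a distillation is set up.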
-
Hello, I have a question I would like to ask. I'm not quite sure what the difference is between DDPG and Dec-DDPG. Can you guide me? Thank you very much!
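Terminology varies between papers, but the usual distinction is that plain DDPG trains a single actor-critic on one (possibly joint) observation and action, while decentralized variants such as Dec-DDPG give each agent its own actor trained on its local observation only. A structural sketch with made-up dimensions:

```python
import numpy as np

class TinyActor:
    """Linear 'actor' standing in for a neural network."""
    def __init__(self, obs_dim, act_dim, rng):
        self.W = rng.standard_normal((act_dim, obs_dim)) * 0.1

    def act(self, obs):
        return np.tanh(self.W @ obs)

rng = np.random.default_rng(0)

# DDPG: one actor maps the whole joint observation to the whole joint action.
central = TinyActor(obs_dim=6, act_dim=4, rng=rng)

# Dec-DDPG-style: each agent owns an actor over its local observation slice.
agents = [TinyActor(obs_dim=3, act_dim=2, rng=rng) for _ in range(2)]

joint_obs = np.zeros(6)
central_action = central.act(joint_obs)
dec_actions = [a.act(joint_obs[i * 3:(i + 1) * 3])
               for i, a in enumerate(agents)]
```

The critics differ analogously: a fully decentralized scheme trains each critic on local information, whereas centralized-critic variants (e.g. MADDPG) keep per-agent actors but condition the critics on the joint state.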