-
-
Hello, thank you very much for your contribution. The Modular-Agent has been incredibly useful and powerful for completing RL/IL training in Unity.
I have been using the Modular-Agent for an in…
HYS-5 updated
2 months ago
-
Thanks for such clear code!
I have some questions about the dataset provided at https://drive.google.com/drive/folders/1h3H4AY_ZBx08hz-Ct0Nxxus-V1melu1U.
Does this dataset contain multiple full episo…
-
How to run the GAIL code
-
Is there a reason you calculate the reward the way you do in line 69?
https://github.com/toshikwa/gail-airl-ppo.pytorch/blob/4e13a23454600a16d5aeeeb4c09338308115455e/gail_airl_ppo/algo/airl.py#L69
…
-
Need to decide on 2-3 base tests where ONLY GAIL (and consequently WGAIL) is tested.
-
Hi JongseongChae, I am wondering how to run only the GAIL algorithm using your code. Could you give me a run example for GAIL?
-
> The next step would be to train an agent with two optimization algorithms. For this, you could use the PPO and DQN algorithms from the reinforcement learning domain. However, you could also…
-
I noticed that the predict-reward function uses log(D(.)) - log(1 - D(.)) as the reward to update the generator. However, this is the reward function proposed in the AIRL paper, which minimizes the rever…
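To make the distinction in this question concrete, here is a minimal sketch of the two reward conventions being contrasted. This is plain Python, not the repository's actual code; the function names are mine, and `d` denotes the probability D(s, a) the discriminator assigns to the expert class:

```python
import math

def gail_reward(d):
    # Classic GAIL generator reward: -log(1 - D(s, a)).
    # Large when the discriminator is fooled (D close to 1).
    return -math.log(1.0 - d)

def airl_reward(d):
    # AIRL-style reward: log D(s, a) - log(1 - D(s, a)).
    # Numerically this is the logit of D, i.e. if D = sigmoid(f)
    # then this quantity equals f(s, a).
    return math.log(d) - math.log(1.0 - d)

# At D = 0.5 the AIRL reward is exactly zero, while the GAIL
# reward is log 2; the two objectives weight samples differently.
print(airl_reward(0.5))   # 0.0
print(gail_reward(0.5))   # log(2) ≈ 0.693
```

Because the AIRL form reduces to the discriminator's logit, implementations that compute `logits` directly can return them as the reward without ever materializing the probability, which is likely why the line in question looks the way it does.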
-
### Search before asking
- [X] I had searched in the [issues](https://github.com/ray-project/ray/issues) and found no similar feature requirement.
### Description
Create an agent for the Generativ…