Closed: josayarh closed this issue 4 years ago
Hi @josayarh, I take it you're training on the same machine the player is playing on? If so, 0.10 and BC would probably be your best option. It was pretty limited, however, as it only allowed you to have agents that purely learn from the player (and not from any other reward signal), and couldn't learn very complex behavior without an inordinate amount of demonstrations - which is why it was deprecated in 0.11. It's possible to use something like GAIL for online learning as well, though we haven't implemented that feature.
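For reference, GAIL is configured as a reward signal in the trainer config. A sketch for 0.11-era ml-agents (the brain name, demo path, and hyperparameter values here are placeholders; check the Reward Signals docs for the exact keys available in your version):

```yaml
MyBrain:                         # placeholder brain/behavior name
    trainer: ppo
    reward_signals:
        gail:
            strength: 1.0        # weight of the imitation reward
            gamma: 0.99          # discount for the GAIL reward signal
            demo_path: demos/PlayerDemo.demo   # demonstrations recorded from the player
```

GAIL can also be combined with an extrinsic reward signal, which is what lets agents both imitate the player and optimize a task reward, unlike the old pure-BC trainer.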
Hi, thank you for your response.
Yes, it's all done on the same machine. Speaking of GAIL, would it be possible, if I were to stick with online in-editor training, to use a demo file generated during iteration n (where n is the current iteration number) to power the agents in iteration n+1?
Yes, that sounds like it should work. But you'll have to either start/stop training manually or rewrite some of the logic to reload the .demo file periodically.
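ml-agents doesn't expose a reload hook for demonstration files, so the periodic-reload path would be custom code. A minimal sketch of the file-watching half in plain Python (the `demo_updated` helper and the polling idea are illustrative, not part of the ml-agents API):

```python
import os

def demo_updated(path, last_mtime):
    """Return (True, new_mtime) if the .demo file at `path` has been
    rewritten since `last_mtime`, else (False, last_mtime)."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        # File doesn't exist yet, e.g. the recorder hasn't flushed a demo.
        return False, last_mtime
    if mtime > last_mtime:
        return True, mtime
    return False, last_mtime

# Illustrative use: call this between training iterations, and when it
# reports a change, rebuild the demonstration buffer from the new file
# before starting iteration n+1.
```

The manual alternative is simpler: stop `mlagents-learn` after iteration n, point the GAIL `demo_path` at the freshly recorded file, and restart training with `--load` to continue from the previous checkpoint.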
This issue has been automatically marked as stale because it has not had activity in the last 14 days. It will be closed in the next 14 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 28 days. If this issue is still valid, please ping a maintainer. Thank you for your contributions.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Hi
I'm developing a game where I'd like to have agents that learn from the player's behavior, so the player can see how their actions influence the agents. I've seen that behavioral cloning was deprecated in the newest release of ml-agents, and I've found no sign of any online training in the documentation, so I'm wondering if there's a way to achieve what I want with the tools in 0.11, or if I should use 0.10 and behavioral cloning?
Thank you in advance.