mbaske / angry-ai

Battle Robots Demo made with Unity Machine Learning Agents
MIT License

Platform On Walker Is Glitching #1

Closed · DroneMesh closed this issue 5 years ago

DroneMesh commented 5 years ago

The platform for the walker is glitching pretty severely.

I am working on updating it to the latest ML-Agents version. Do you know what might be the issue?

mbaske commented 5 years ago

I think my platform setup with configurable joints is unnecessarily complicated and fidgety. Apparently I didn't read the Unity docs carefully enough and missed better ways to control the tiles.

DroneMesh commented 5 years ago

Hey mbaske,

There was no issue with your platform; the glitching was caused by the modifications I was making to get it running on the latest ML-Agents. Everything is working so far, and I am currently training the walker, since we can no longer use the .bytes files. I am planning to create a pull request once I have it running on the latest ml-agents.

I will also fix the robot-ants pull request once I get time.

I might need some help with the navigation brain and the shoot brain once I get to that stage.

What are the observation spaces for the navigation and shoot brains, and how many actions does each output?

This will save me some time when I get to that stage.

Keep up the great work.

mbaske commented 5 years ago

Hi DroneMesh, thanks for taking the time to look into this.

The navigator brain only observes visual input and outputs a single continuous value for the walk direction in 2D/XZ space. The shooter brain observes a proximity value and has one discrete output for shooting / not shooting.
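
For reference, those spaces can also be inspected from the Python side; a minimal sketch, assuming a recent ml-agents release (the `mlagents_envs` 0.28+ API) and a placeholder build name:

```python
# Minimal sketch: print each brain's observation and action spaces.
# Assumes a recent ml-agents release; "AngryAI" is a placeholder build name.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="AngryAI")
env.reset()

for name, spec in env.behavior_specs.items():
    print(name)
    for obs in spec.observation_specs:
        # Navigator: visual observation, e.g. (H, W, C); shooter: a single proximity value.
        print("  observation shape:", obs.shape)
    # Navigator: 1 continuous action (walk direction in the XZ plane).
    print("  continuous actions:", spec.action_spec.continuous_size)
    # Shooter: one discrete branch with 2 options (shoot / don't shoot).
    print("  discrete branches:", spec.action_spec.discrete_branches)

env.close()
```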

As far as I can tell, the robot-ants repo is already compatible with the current ml-agents release. Only the trained models aren't running on the Barracuda inference engine.

DroneMesh commented 5 years ago

Hi mbaske,

Just saw your latest video; can't wait to play around with that new environment.

I have just trained Angry AI to walk, and navigation training is currently in progress. I had to convert the .bytes files to .pb and then import them into TensorBoard to find the input and output nodes. Seven hours later, it all worked out great ;)
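
For anyone repeating that step: the old ML-Agents .bytes models are frozen TensorFlow GraphDefs, so a short script along these lines can re-serialize one as .pb and print its node names. A minimal sketch, assuming the TensorFlow compat.v1 API; the file names are placeholders:

```python
# Sketch: convert an old ML-Agents .bytes model (a frozen TensorFlow
# GraphDef) to .pb and list its nodes to locate inputs and outputs.
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with open("WalkerBrain.bytes", "rb") as f:
    graph_def.ParseFromString(f.read())

# Write the same graph out with a .pb extension so graph tooling accepts it.
with open("WalkerBrain.pb", "wb") as f:
    f.write(graph_def.SerializeToString())

# Print node names and ops to find the input and output tensors.
for node in graph_def.node:
    print(node.name, node.op)
```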

However, I am noticing that the shooting brain is not doing a great job. I have a feeling I should train it after the navigation brain matures a bit.

Once I have trained all the brains to mimic your video's results, I will send a pull request with the updates.

You can close this issue.

P.S. Keep up the great work!

mbaske commented 5 years ago

The original idea was training the brain to aim and shoot based on visual observations. However, I couldn't get that working, especially with an independent brain controlling the walking motion. So for now, the shooting brain ended up being a glorified on/off-switch, relying on a proximity sensor and hardcoded target lock. Please feel free to make any changes that would yield better training results. Thanks!
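
In other words, the current shooter policy approximates a simple threshold rule, roughly like the sketch below; the threshold value is an assumption for illustration, not taken from the project:

```python
# Illustrative sketch of the heuristic the shooter brain effectively learns,
# per the description above: fire only when a target is locked and close
# enough. The threshold value is an assumption, not taken from the project.
def shoot_decision(proximity: float, target_locked: bool,
                   threshold: float = 0.5) -> int:
    """Return the single discrete action: 1 = shoot, 0 = hold fire."""
    return int(target_locked and proximity >= threshold)
```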