Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

Using CPU vs GPU in training with ML-Agents #1246

Closed adam-pociejowski closed 6 years ago

adam-pociejowski commented 6 years ago

Hello, I'm using ML-Agents on Windows 10. My hardware:

- CPU: AMD Ryzen 7 1700 Eight-Core Processor
- RAM: 16 GB
- GPU: NVIDIA GeForce GTX 1060 6GB

I tried training with ML-Agents on both the CPU and the GPU. For the GPU, I activated CUDA support and installed tensorflow-gpu following this guide: https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation-Windows.md
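For reference, a quick sanity check (assuming TensorFlow 1.x, which that install guide targets) that tensorflow-gpu can actually see the card:

```python
# Sanity check (TF 1.x): confirm that tensorflow-gpu detects the CUDA device.
import tensorflow as tf

print(tf.test.is_gpu_available())   # True if a usable CUDA GPU was found
print(tf.test.gpu_device_name())    # e.g. "/device:GPU:0", empty string if none
```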

Surprisingly, I had better performance using the CPU than the GPU. I was expecting GPU training to be faster.

To train, I used the script from the guide: `python learn.py ./pushblock/1 --train --run-id=1`

The ML-Agents window was unresponsive for some time after running the script on the GPU, and training was much slower.

Does it make sense to use the GPU instead of the CPU to train with ML-Agents? Or did I do something wrong?

MarcoMeter commented 6 years ago

Hi @adam-pociejowski. This PPO implementation is not optimized for GPU use. In general, reinforcement learning is not easy to optimize for a GPU: the networks are small and the minibatches per update are tiny, so the overhead of moving data to and from the device can outweigh the compute savings. You are currently better off with a CPU.
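A rough sketch of the effect (plain TF 1.x, not ML-Agents code): for a small PPO-sized minibatch pushed through the graph via `feed_dict`, the per-step host-to-device transfer can dominate, so the GPU shows little or no advantage.

```python
# A minimal sketch: time the same small matmul on CPU and GPU.
# With allow_soft_placement, the "/gpu:0" run silently falls back to the
# CPU on machines without a CUDA device (the two timings then match).
import time
import numpy as np
import tensorflow as tf

batch = np.random.rand(64, 128).astype(np.float32)  # PPO-sized minibatch

for device in ["/cpu:0", "/gpu:0"]:
    graph = tf.Graph()
    with graph.as_default(), tf.device(device):
        x = tf.placeholder(tf.float32, shape=(64, 128))
        w = tf.Variable(tf.random_normal([128, 128]))
        y = tf.matmul(x, w)
        init = tf.global_variables_initializer()
    config = tf.ConfigProto(allow_soft_placement=True)
    with tf.Session(graph=graph, config=config) as sess:
        sess.run(init)
        start = time.time()
        for _ in range(1000):
            # Each run copies the batch to the device and the result back.
            sess.run(y, feed_dict={x: batch})
        print(device, round(time.time() - start, 3), "s")
```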

adam-pociejowski commented 6 years ago

Thanks for the fast response! As you suggested, I will use my CPU.
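For anyone else landing here: with tensorflow-gpu already installed, the CUDA device can be hidden via a standard CUDA environment variable so training falls back to the CPU, with no reinstall needed.

```python
# Hide all CUDA devices so tensorflow-gpu falls back to the CPU.
# This must be set before TensorFlow initializes (e.g. before launching
# learn.py, or at the very top of the training script).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
```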

maystroh commented 5 years ago

@MarcoMeter When are you planning to support the GPU? I'm asking because I'm using visual observations, which require GPU capabilities for training.

shihzy commented 5 years ago

Hi @maystroh - can you clarify your ask for GPU support?

maystroh commented 5 years ago

Sorry, my question was not clear enough. By GPU support I meant having the PPO implementation optimized for the GPU, since working with visual observations needs the GPU more than the CPU, especially if we use a more complex CNN than the two-layer CNN implemented so far in the library.
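For context, the encoder being referred to is a two-convolutional-layer network along these lines (a sketch in the TF 1.x layers API; the exact filter counts and activations in the library's models.py may differ from this approximation):

```python
# Sketch of a two-layer visual-observation encoder (TF 1.x layers API).
import tensorflow as tf

def visual_encoder(visual_obs):
    """visual_obs: float32 tensor of shape [batch, height, width, channels]."""
    conv1 = tf.layers.conv2d(visual_obs, filters=16, kernel_size=[8, 8],
                             strides=[4, 4], activation=tf.nn.elu)
    conv2 = tf.layers.conv2d(conv1, filters=32, kernel_size=[4, 4],
                             strides=[2, 2], activation=tf.nn.elu)
    # Flatten to a feature vector for the policy/value heads.
    return tf.layers.flatten(conv2)
```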

ghost commented 5 years ago

When I tried A3C on a PC with the same specs as adam-pociejowski's (except my GPU is a 1080), the CPU was clearly faster than the GPU. That is natural, since A3C relies on multiple CPU threads. On Windows 10, the GPU version even produced garbage values in the network weights and was not trainable.
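To illustrate the multi-threading point: in A3C, several CPU threads each step a private environment copy and asynchronously update shared parameters, which is work a GPU does not accelerate. A toy sketch (names like `run_worker` and `shared_params` are illustrative only):

```python
# Toy sketch of the A3C worker pattern: parallel CPU threads pushing
# asynchronous updates to shared parameters.
import threading

shared_params = {"updates": 0}   # stand-in for shared network weights
lock = threading.Lock()

def run_worker(worker_id, n_steps=1000):
    for _ in range(n_steps):
        # ... act in a private env copy, compute local gradients ...
        with lock:
            shared_params["updates"] += 1  # asynchronous shared update

threads = [threading.Thread(target=run_worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_params["updates"])  # 8000 after all workers finish
```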

lock[bot] commented 4 years ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.