Unity-Technologies / ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
https://unity.com/products/machine-learning-agents

return all_trainer_settings[self.trainer_type](), when using trainer_settings = TrainerSettings() #5914

Closed hlwang98 closed 1 year ago

hlwang98 commented 1 year ago

problem

I would like to create the environment in Python and use the PPOTrainer class directly to train the agent. However, execution fails at the line trainer_settings = TrainerSettings(): it reaches return all_trainer_settings[self.trainer_type]() and raises KeyError: 'ppo'. Inspecting all_trainer_settings, it appears to be an empty dictionary containing only a TODO placeholder.

I am seeking guidance on resolving this issue, or instructions on how to run the PPO algorithm from Python. Since several parameters must be configured in Python before PPO is executed, the CLI training method does not suit my needs.
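For context on why the lookup fails: all_trainer_settings is a registry that appears to start empty and only gets populated when trainer types are registered (which the mlagents-learn entry point normally does before any TrainerSettings is built). The following is a minimal, self-contained sketch of that failure pattern; the names mimic settings.py but this is an illustrative analogy, not the actual ml-agents API:

```python
from dataclasses import dataclass, field

# Registry of trainer-type -> settings factory. Empty until something
# registers a trainer, mirroring the TODO placeholder in settings.py.
all_trainer_settings = {}


def _default_hyperparameters():
    # Mirrors _set_default_hyperparameters: look up the factory for "ppo".
    # Raises KeyError: 'ppo' if nothing has been registered yet.
    return all_trainer_settings["ppo"]()


@dataclass
class TrainerSettings:
    trainer_type: str = "ppo"
    hyperparameters: dict = field(default_factory=_default_hyperparameters)


# Constructing before anything is registered reproduces the reported error:
try:
    TrainerSettings()
except KeyError as err:
    print("KeyError:", err)  # KeyError: 'ppo'

# Once a factory for "ppo" is registered, construction succeeds:
all_trainer_settings["ppo"] = lambda: {"learning_rate": 3.0e-4}
print(TrainerSettings().hyperparameters)
```

In other words, TrainerSettings() by itself is not enough; the default hyperparameters are resolved at construction time against a registry that the CLI workflow fills in first.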

code

The code is shown below:
from mlagents_envs.environment import UnityEnvironment
from mlagents.trainers.ppo.trainer import PPOTrainer
from mlagents.trainers.settings import TrainerSettings

trainer_settings = TrainerSettings()

env = UnityEnvironment("RollerAgent/RollerAgent.exe", no_graphics=True)
behavior_names = list(env.behavior_specs.keys())
print(behavior_names)
ppotrainer = PPOTrainer("RollerBall", 10, trainer_settings, True, False, 0, "")
env.reset()
for _ in range(1000):
    decision_steps, terminal_steps = env.get_steps(behavior_names[0])
    ppotrainer.advance()

error message

Traceback (most recent call last):
  File "C:\user\unityCOntroller\test.py", line 11, in <module>
    trainer_settings = TrainerSettings()
  File "<attrs generated init mlagents.trainers.settings.TrainerSettings>", line 6, in __init__
  File "c:\user\mlagents\ml-agents\ml-agents\mlagents\trainers\settings.py", line 622, in _set_default_hyperparameters
    return all_trainer_settings[self.trainer_type]()
KeyError: 'ppo'

some screenshots

(screenshot of the same traceback omitted)

AzizRourou commented 1 year ago

To set up the training configuration you need to create a .yaml file, as explained in docs/Training-ML-Agents.md. This walkthrough (September 22, 2021) may help: https://www.gocoder.one/blog/training-agents-using-ppo-with-unity-ml-agents/ There they create a .yaml file in config/ppo as "a trainer configuration file".
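For reference, a minimal trainer configuration file of the kind described in docs/Training-ML-Agents.md might look roughly like this (the hyperparameter values below are illustrative defaults, not a recommendation; RollerBall is the behavior name from the code above):

```yaml
behaviors:
  RollerBall:
    trainer_type: ppo
    hyperparameters:
      batch_size: 64
      buffer_size: 12000
      learning_rate: 3.0e-4
    network_settings:
      hidden_units: 128
      num_layers: 2
    max_steps: 500000
```

You would then launch training with something like mlagents-learn path/to/RollerBall.yaml --run-id=roller_run, which handles trainer registration and settings construction for you.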

Otherwise, I found an older article (2020, targeting ml-agents release_1) that shows how to apply a PPO algorithm using just the mlagents_envs package. Maybe it can help you find a workaround instead of using /ml-agents/ml-agents/mlagents/trainers/settings.py in the meantime: https://medium.com/analytics-vidhya/ppo-algorithm-with-custom-rl-environment-made-with-unity-engine-effed6d98b9d
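The approach in that article boils down to a manual observe-decide-step loop driven entirely from Python. Here is a rough, self-contained sketch of that control-loop shape, using a stand-in DummyEnv in place of a real UnityEnvironment so it runs without a Unity build; all names here are illustrative, not the mlagents_envs API:

```python
import random


class DummyEnv:
    """Stand-in for a Unity environment: same reset/step rhythm, fake data."""

    def __init__(self, n_agents=1):
        self.n_agents = n_agents
        self.steps = 0
        self._pending_actions = None

    def reset(self):
        self.steps = 0

    def get_observations(self):
        # One scalar observation per agent.
        return [random.random() for _ in range(self.n_agents)]

    def set_actions(self, actions):
        assert len(actions) == self.n_agents
        self._pending_actions = actions

    def step(self):
        # A real environment would apply the pending actions here.
        self.steps += 1


def random_policy(observations):
    # Placeholder for a trained policy: act uniformly at random.
    return [random.choice([-1, 0, 1]) for _ in observations]


env = DummyEnv(n_agents=2)
env.reset()
for _ in range(1000):
    obs = env.get_observations()          # observe
    env.set_actions(random_policy(obs))   # decide
    env.step()                            # advance the simulation
print("simulated steps:", env.steps)      # simulated steps: 1000
```

Swapping random_policy for a learner that updates from collected (observation, action, reward) data is essentially what the article does, sidestepping the trainers package entirely.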

miguelalonsojr commented 1 year ago

This is not a bug. If you need assistance, please post your request in the forums: https://forum.unity.com/forums/ml-agents.453/

github-actions[bot] commented 1 year ago

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.