Closed · hlwang98 closed this issue 1 year ago
To set up the training configuration, you need to create a .yaml file as explained in docs/Training-ML-Agents.md. This walkthrough (September 22, 2021) may help: https://www.gocoder.one/blog/training-agents-using-ppo-with-unity-ml-agents/ — they create a .yaml file in config/ppo as "a trainer configuration file".
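For reference, a trainer configuration file for PPO typically looks like the sketch below, following the key names documented in docs/Training-ML-Agents.md. The behavior name `MyBehavior` is a placeholder — it must match the Behavior Name set on your agent in the Unity editor, and the hyperparameter values here are just common starting points, not a recommendation:

```yaml
behaviors:
  MyBehavior:            # placeholder: must match your agent's Behavior Name
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
      beta: 5.0e-3
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    max_steps: 500000
    time_horizon: 64
    summary_freq: 10000
```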
Otherwise, I found an older article (2020, using ml-agents release_1) that shows how to apply PPO using just the mlagents-envs package. Maybe it can help you find a workaround instead of relying on /ml-agents/ml-agents/mlagents/trainers/settings.py in the meantime: https://medium.com/analytics-vidhya/ppo-algorithm-with-custom-rl-environment-made-with-unity-engine-effed6d98b9d
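The direction that article takes can be sketched as follows: drive the Unity build directly through the low-level mlagents-envs API (no trainers package needed) and plug your own PPO implementation on top. This is a sketch under assumptions — `env_path` is a placeholder for your built environment, the mlagents-envs imports are deferred into the function so the rest of the file works without the package installed, and `discounted_returns` is a generic PPO ingredient written here for illustration, not ml-agents code:

```python
import numpy as np


def discounted_returns(rewards, gamma=0.99):
    """Generic PPO ingredient (illustrative, not from ml-agents):
    discounted return for each step of a finished episode."""
    returns = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


def run_random_policy(env_path, episodes=1):
    """Interact with a Unity build via the low-level mlagents-envs API,
    bypassing mlagents.trainers entirely. Replace the random action with
    your policy's output to build a custom training loop."""
    from mlagents_envs.environment import UnityEnvironment  # deferred import

    env = UnityEnvironment(file_name=env_path)  # env_path: assumed path to your build
    env.reset()
    behavior_name = list(env.behavior_specs)[0]
    spec = env.behavior_specs[behavior_name]
    for _ in range(episodes):
        done = False
        while not done:
            decision_steps, terminal_steps = env.get_steps(behavior_name)
            done = len(terminal_steps) > 0
            if len(decision_steps) > 0:
                # Random actions as a stand-in for a learned policy
                action = spec.action_spec.random_action(len(decision_steps))
                env.set_actions(behavior_name, action)
            env.step()
    env.close()
```

Collecting rewards per episode and feeding them through `discounted_returns` (or a GAE variant) gives the advantage targets a hand-rolled PPO update would need.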
This is not a bug. If you need assistance, please post your request in the forums: https://forum.unity.com/forums/ml-agents.453/
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
problem
I would like to initialize the environment in Python and use the PPOTrainer in Python to train the agent. However, when execution reaches the line trainer_settings = TrainerSettings(), an exception occurs in return all_trainer_settings[self.trainer_type]() with the error KeyError: 'ppo'. Upon inspection of all_trainer_settings, it appears that there is only a ToDo placeholder in it and the dictionary is empty.
I am seeking guidance on resolving this issue, or instructions on how to implement the PPO algorithm from Python. Since several parameters must be configured in Python before running PPO, the CLI training method is not suitable for my needs.
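To make the failure mode concrete, here is a minimal reproduction of the mechanism described above — illustrative only, not ml-agents source code: default hyperparameters are resolved by looking up the trainer type in a registry dict, and an empty (ToDo-placeholder) registry makes that lookup raise KeyError: 'ppo'. The registry name and stand-in class below are assumptions for illustration:

```python
# Illustrative stand-in for the registry in settings.py (empty, like the ToDo placeholder)
all_trainer_settings = {}


def default_hyperparameters(trainer_type):
    # Mirrors the failing line: look up the trainer type, then call the class
    return all_trainer_settings[trainer_type]()


try:
    default_hyperparameters("ppo")
except KeyError as exc:
    print("unregistered trainer type:", exc)  # -> unregistered trainer type: 'ppo'

# Populating the registry before constructing settings makes the same lookup succeed.
# (dict is a stand-in for a real PPO hyperparameter class.)
all_trainer_settings["ppo"] = dict
print(default_hyperparameters("ppo"))  # -> {}
```

This suggests why the CLI path works while direct construction fails: something has to register the 'ppo' entry before TrainerSettings() is built.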
code, error message, some screenshots: (these sections were attached as images and are not preserved in this text)