edbeeching / godot_rl_agents

An open-source package that gives video game creators, AI researchers, and hobbyists the opportunity to learn complex behaviors for their non-player characters or agents
MIT License

Best way to translate unity-ml-agent configs to godot-rl-agent config #61

Closed. visuallization closed this issue 1 year ago.

visuallization commented 1 year ago

Hi there,

I was wondering if there is a way to easily translate a Unity ML-Agents training config to a Godot RL training config, or whether the current best option is to do this manually?

Cheers

edbeeching commented 1 year ago

As we allow for different backends, it depends. RLlib, for example, uses YAML files, while Sample Factory and CleanRL use command-line args. I have not exposed many command-line arguments for Stable Baselines 3 yet, but it should be possible to modify the example. Is there anything in particular you would like help configuring?
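
For illustration, a minimal RLlib-style experiment file would look something like the sketch below. These are generic Ray RLlib/Tune PPO keys, not necessarily the exact layout of our example files, and for Sample Factory or CleanRL the same hyperparameters would instead be passed as command-line flags:

```yaml
# Minimal sketch of a generic RLlib/Tune experiment YAML (assumed layout, not this repo's exact file)
my_experiment:
  run: PPO
  stop:
    timesteps_total: 1000000     # total environment steps to train for
  config:
    lr: 0.0003                   # learning rate
    train_batch_size: 2048       # samples collected per training iteration
    gamma: 0.99                  # discount factor
    model:
      fcnet_hiddens: [256, 256]  # two hidden layers of 256 units
```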

ryash072007 commented 1 year ago

> As we allow for different backends, it depends. RLlib, for example, uses YAML files, while Sample Factory and CleanRL use command-line args. I have not exposed many command-line arguments for Stable Baselines 3 yet, but it should be possible to modify the example. Is there anything in particular you would like help configuring?

I have added more arguments for SB3 in my fork, but I have no idea how to do a proper pull request for it. The features I added are:

  1. Choose whether to save logs or not.
  2. Ability to save the model.
  3. Ability to load a saved model, either to resume training or to make deterministic predictions without further training.
  4. Control over how many iterations pass between model saves.
  5. Ability to specify the learning rate.
You can check it here: https://github.com/ryash072007/godot_rl_agents_forked/blob/main/godot_rl/backend/sb3.py

visuallization commented 1 year ago

Hi guys,

Thanks for the quick answers!

So I forgot to mention that I am currently referring to the Godot 3.5 branch. It uses Ray's RLlib as the RL library, with the corresponding YAML files, if I am not mistaken? I am not sure which RL library Unity uses under the hood, but it seems it just uses PyTorch and their own custom configs; can you confirm? What I would have wanted is a way to easily translate the YAML files from Unity to Godot 3.5 RL agents.

So if I had the following Unity ML-Agents config YAML file:

behaviors:
  SimpleCollector:
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 5000000
    time_horizon: 128
    summary_freq: 20000
    threaded: true

I would like to translate it to the Godot 3.5 Ray YAML config so that both environments use the same settings and I can easily compare the two.
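
My best guess at a rough key-by-key translation into an RLlib-style config is the sketch below. The RLlib keys are standard PPO settings, but I am not sure about the exact file layout the Godot 3.5 branch expects, and a few Unity options (summary_freq, threaded, the extrinsic reward strength) seem to have no direct equivalent:

```yaml
# Rough, unverified sketch of the Unity config above translated to RLlib/Tune-style YAML
SimpleCollector:
  run: PPO
  stop:
    timesteps_total: 5000000                    # max_steps
  keep_checkpoints_num: 5                       # keep_checkpoints (a Tune option)
  config:
    lr: 0.0003                                  # learning_rate
    lr_schedule: [[0, 0.0003], [5000000, 0.0]]  # learning_rate_schedule: linear
    train_batch_size: 2048                      # buffer_size
    sgd_minibatch_size: 128                     # batch_size
    num_sgd_iter: 3                             # num_epoch
    entropy_coeff: 0.005                        # beta (entropy regularization)
    clip_param: 0.2                             # epsilon (PPO clip range)
    lambda: 0.95                                # lambd (GAE lambda)
    gamma: 0.99                                 # reward_signals.extrinsic.gamma
    rollout_fragment_length: 128                # rough analogue of time_horizon
    observation_filter: NoFilter                # normalize: false
    model:
      fcnet_hiddens: [256, 256]                 # hidden_units: 256, num_layers: 2
```

Does that mapping look roughly right, or am I missing something about how the Godot 3.5 branch consumes these files?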