Closed: visuallization closed this issue 1 year ago
As we allow for different backends, it depends. RLlib, for example, uses YAML files, while Sample Factory and CleanRL use command-line args. I have not exposed many command-line arguments for Stable Baselines 3 yet, but it should be possible to modify the example. Is there anything in particular you would like help configuring?
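To give an idea of what "modifying the example" could look like, here is a minimal sketch of exposing a few SB3 PPO hyperparameters as command-line flags. The flag names, defaults, and the `parse_sb3_args` helper are illustrative assumptions, not what godot_rl_agents ships today; only the keys in `ppo_kwargs` correspond to real parameter names of the SB3 `PPO` constructor.

```python
# Hypothetical sketch: expose a few SB3 PPO hyperparameters via argparse.
# Flag names/defaults are assumptions, not the project's actual CLI.
import argparse

def parse_sb3_args(argv=None):
    parser = argparse.ArgumentParser(description="SB3 PPO training options")
    parser.add_argument("--timesteps", type=int, default=1_000_000)
    parser.add_argument("--learning_rate", type=float, default=3e-4)
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--n_steps", type=int, default=2048)
    parser.add_argument("--gamma", type=float, default=0.99)
    return parser.parse_args(argv)

args = parse_sb3_args(["--learning_rate", "0.001", "--batch_size", "128"])

# These keys match real PPO constructor parameters in Stable Baselines 3,
# so they could be forwarded as: PPO("MlpPolicy", env, **ppo_kwargs)
ppo_kwargs = {
    "learning_rate": args.learning_rate,
    "batch_size": args.batch_size,
    "n_steps": args.n_steps,
    "gamma": args.gamma,
}
print(ppo_kwargs["learning_rate"], ppo_kwargs["batch_size"])  # → 0.001 128
```

Unrecognized flags would still need to be wired through to wherever the example constructs the model and calls `learn()`.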
I have added more arguments for SB3 in my fork, but I have no idea how to make a proper pull request for it. The features I added are:
You can check it here: https://github.com/ryash072007/godot_rl_agents_forked/blob/main/godot_rl/backend/sb3.py
Hi guys,
Thanks for the quick answers!
So I forgot to mention that I am currently referring to the Godot 3.5 branch. It is using Ray's RLlib as the RL library, with the corresponding YAML files, if I am not mistaken? I am not sure which RL library Unity uses under the hood, but it seems it just uses PyTorch and their own custom configs, can you confirm? What I would like is a way to easily translate the YAML files from Unity ML-Agents to Godot 3.5 RL agents.
So if I had the following Unity ML-Agents config YAML file:
```yaml
behaviors:
  SimpleCollector:
    trainer_type: ppo
    hyperparameters:
      batch_size: 128
      buffer_size: 2048
      learning_rate: 0.0003
      beta: 0.005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      learning_rate_schedule: linear
    network_settings:
      normalize: false
      hidden_units: 256
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    keep_checkpoints: 5
    max_steps: 5000000
    time_horizon: 128
    summary_freq: 20000
    threaded: true
```
I would like to translate it to the Godot 3.5 Ray YAML config so that both environments use the same settings and the two can be compared easily.
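As a starting point, a rough mapping between ML-Agents PPO hyperparameters and RLlib's PPO config keys could be scripted. This is a sketch only: the `translate_mlagents_to_rllib` helper is hypothetical, and the key correspondences (e.g. `buffer_size` → `train_batch_size`, `beta` → `entropy_coeff`) are my reading of the two libraries' docs, so please double-check them before trusting a training run.

```python
# Hypothetical translator from an ML-Agents PPO behavior config to an RLlib
# PPO config dict. The key mapping is an assumption, not an official table.
def translate_mlagents_to_rllib(behavior_cfg):
    hp = behavior_cfg["hyperparameters"]
    net = behavior_cfg["network_settings"]
    extrinsic = behavior_cfg["reward_signals"]["extrinsic"]
    return {
        # ML-Agents buffer_size ~ samples gathered per update
        "train_batch_size": hp["buffer_size"],
        # ML-Agents batch_size ~ SGD minibatch size
        "sgd_minibatch_size": hp["batch_size"],
        "num_sgd_iter": hp["num_epoch"],
        "lr": hp["learning_rate"],
        # PPO clip range and GAE lambda
        "clip_param": hp["epsilon"],
        "lambda": hp["lambd"],
        # ML-Agents beta is the entropy bonus coefficient
        "entropy_coeff": hp["beta"],
        "gamma": extrinsic["gamma"],
        "model": {"fcnet_hiddens": [net["hidden_units"]] * net["num_layers"]},
    }

# The SimpleCollector behavior from the config above, as a Python dict:
simple_collector = {
    "hyperparameters": {
        "batch_size": 128, "buffer_size": 2048, "learning_rate": 0.0003,
        "beta": 0.005, "epsilon": 0.2, "lambd": 0.95, "num_epoch": 3,
    },
    "network_settings": {"hidden_units": 256, "num_layers": 2},
    "reward_signals": {"extrinsic": {"gamma": 0.99, "strength": 1.0}},
}

rllib_cfg = translate_mlagents_to_rllib(simple_collector)
print(rllib_cfg["train_batch_size"], rllib_cfg["model"]["fcnet_hiddens"])
# → 2048 [256, 256]
```

Settings like `max_steps`, `summary_freq`, and checkpointing live outside the per-algorithm config in Ray (stop conditions and trainer settings), so they would need to be placed in the corresponding sections of the Ray YAML rather than mapped one-to-one.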
Hi there,
I was wondering if there is a way to easily translate a Unity ML-Agents training config to a Godot RL training config, or is doing it manually currently the best option?
Cheers