ryash072007 closed this issue 1 year ago
To use with rllib:
pip install godot-rl[rllib]
To train, with the BallChase example (for example):
gdrl --trainer=rllib --env_path=examples/godot_rl_BallChase/bin/BallChase.exe
rllib parameters can be modified by changing / copying the ppo_test.yaml file
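For reference, Ray Tune experiment yaml files generally follow the shape sketched below. The exact contents of ppo_test.yaml may differ; the experiment name, stop condition, and values here are illustrative, not copied from the repo.

```yaml
# Illustrative sketch of a Ray Tune / rllib experiment yaml.
# The actual keys and values in ppo_test.yaml may differ.
ballchase_ppo:                   # experiment name (hypothetical)
  run: PPO                       # which rllib algorithm to run
  stop:
    timesteps_total: 1000000     # stop condition (illustrative)
  config:
    num_workers: 4               # parallel rollout workers
    lr: 0.0003                   # learning rate
```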
Windows support and docs are a bit minimal at the moment, so any feedback and contributions towards this would be great.
Ah I just found a bug with the rllib wrapper, I just pushed a change. Please reinstall with:
pip install godot-rl[rllib]
OMG, thank you. I just managed to fix my issues with using sb3, and you replied. Thanks. Which one would be better, btw? sb3 or rllib?
It depends on your problem. I think sb3 is simpler and easier to get started with, but rllib has more complex RL algorithms for more challenging environments. I would start with sb3; the default parameters should work well on most problems. Only a few example envs (Ball Chase and FlyBy) are supported with sb3, whereas they should all work with rllib.
Where should the ppo_test.yaml file be kept? In which folder? I get an error saying it cannot locate it.
Also, is there any way I can make the training process run indefinitely until I stop it in sb3?
I figured out how to do this.
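For anyone else wondering: one common pattern (a sketch of the general idea, not godot-rl's actual API) is to train in bounded chunks inside a loop and stop cleanly on Ctrl+C. With sb3, each chunk would be a `model.learn(total_timesteps=...)` call; `train_one_chunk` below is a hypothetical placeholder for that.

```python
# Sketch: open-ended training that stops on Ctrl+C.
# `train_one_chunk` is a hypothetical stand-in for a bounded training
# call such as sb3's model.learn(total_timesteps=CHUNK).

CHUNK = 10_000  # timesteps per chunk (illustrative)


def train_one_chunk(total_steps):
    """Placeholder for one bounded training call; returns the updated step count."""
    return total_steps + CHUNK


def train_until_interrupted(max_chunks=None):
    """Train chunk by chunk until KeyboardInterrupt (or max_chunks, for testing)."""
    total_steps = 0
    chunks_done = 0
    try:
        while max_chunks is None or chunks_done < max_chunks:
            total_steps = train_one_chunk(total_steps)
            chunks_done += 1
    except KeyboardInterrupt:
        pass  # stop cleanly; a real script would save the model here
    return total_steps
```

Passing `max_chunks` keeps the loop testable; in real use you would omit it and let Ctrl+C end training.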
Where should the ppo_test.yaml file be kept? In which folder? I get an error saying it cannot locate it.
Can you help with this?
gdrl.interactive will run 1000 steps and you can load an example in the editor to see random actions being executed.
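Conceptually, an interactive check like that is just a Gym-style loop that samples random actions. Here is a self-contained sketch with a toy stand-in for the Godot connection; `ToyEnv` and its methods are illustrative, not godot-rl's API.

```python
import random


class ToyEnv:
    """Hypothetical stand-in for the Godot connection, for illustration only."""

    def __init__(self, horizon=1000):
        self.horizon = horizon  # episode length before done=True
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # observation placeholder

    def sample_action(self):
        return random.choice([0, 1, 2, 3])  # random discrete action

    def step(self, action):
        self.t += 1
        obs, reward, done = 0.0, 0.0, self.t >= self.horizon
        return obs, reward, done


def run_random(env, steps=1000):
    """Execute `steps` random actions, resetting whenever an episode ends."""
    env.reset()
    taken = 0
    for _ in range(steps):
        _, _, done = env.step(env.sample_action())
        taken += 1
        if done:
            env.reset()
    return taken
```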
The ppo_test.yaml file can be kept in the directory where you run your training. Alternatively, you can provide a path with --config_file.
Error: No available node types can fulfill resource request {'CPU': 1.0, 'GPU': 1.0}. Add suitable node types to this cluster to resolve this issue.
I keep getting this error and the training fails. Can you provide some information pertaining to this?
Ah, this is related to rllib. What are the specs of your machine? Do you have an NVIDIA GPU? If not, try setting the following in the yaml file:
num_gpus: 0
Training will be slower, but for simple problems it should be fast enough.
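Assuming ppo_test.yaml follows the usual Ray Tune layout, that setting goes under the experiment's config block. The surrounding keys here are illustrative:

```yaml
my_experiment:        # hypothetical experiment name
  run: PPO
  config:
    num_gpus: 0       # train on CPU only
    num_workers: 4    # illustrative
```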
FYI I updated the custom env doc a bit: https://github.com/edbeeching/godot_rl_agents/blob/main/docs/CUSTOM_ENV.md
I think it is still not so clear. Would a video where I go through how to add an agent controller to a game be useful? I could try to look at that in the new year.
Yes please, that would be very helpful.
So, I found out that sample-factory is not supported on Windows, and rllib is the only backend that installed successfully on my PC. How can I use rllib to run the provided examples and make my own RL environments with it?