edbeeching / godot_rl_agents

An Open Source package that allows video game creators, AI researchers and hobbyists the opportunity to learn complex behaviors for their Non Player Characters or agents
MIT License
942 stars · 69 forks

How do I use rllib for the examples provided? #28

Closed ryash072007 closed 1 year ago

ryash072007 commented 1 year ago

So, I found out that sample-factory is not supported on Windows, and rllib is the only backend that installed successfully on my PC. How can I use rllib to run the provided examples and to make my own RL environments with it?

edbeeching commented 1 year ago

To use with rllib:

pip install godot-rl[rllib]

To train, using the BallChase example:

gdrl --trainer=rllib --env_path=examples/godot_rl_BallChase/bin/BallChase.exe

rllib parameters can be modified by editing (or copying) the ppo_test.yaml file.

Windows support and docs are a bit minimal at the moment, so any feedback and contributions towards this would be great.
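For reference, an override file in Ray Tune's yaml format typically looks like the sketch below. The exact contents of the shipped ppo_test.yaml are not shown in this thread, and the experiment name here is a placeholder, but these are standard RLlib PPO settings you might tune:

```yaml
# Sketch only: keys are standard RLlib/Tune settings, not the shipped file.
godot_ppo:                  # experiment name (placeholder)
  run: PPO
  stop:
    timesteps_total: 1000000
  config:
    num_workers: 4          # parallel rollout workers
    num_gpus: 0             # set to 1 if you have a CUDA-capable GPU
    lr: 0.0003              # learning rate
    train_batch_size: 4000
```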

edbeeching commented 1 year ago

Ah I just found a bug with the rllib wrapper, I just pushed a change. Please reinstall with:

pip install godot-rl[rllib]
ryash072007 commented 1 year ago

OMG, thank you. I just managed to fix my issues with using sb3, and then you replied. Thanks! Which one would be better, btw: sb3 or rllib?

edbeeching commented 1 year ago

It depends on your problem. I think sb3 is simpler and easier to get started with, but rllib has more complex RL algorithms for more challenging environments. I would start with sb3; the default parameters should work well on most problems. Only a few example envs (Ball Chase and FlyBy) are supported with sb3, whereas they should all work with rllib.
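Getting started with sb3 presumably mirrors the rllib commands above; the `sb3` extra and trainer value are an assumption based on the rllib usage shown in this thread, not confirmed here:

```shell
# Sketch, mirroring the rllib install/train commands above.
pip install godot-rl[sb3]
gdrl --trainer=sb3 --env_path=examples/godot_rl_BallChase/bin/BallChase.exe
```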

ryash072007 commented 1 year ago

Where should the ppo_test.yaml be kept? In which folder? I get an error saying it cannot locate the file.

ryash072007 commented 1 year ago

Also, is there any way I can make the training process in sb3 run indefinitely until I stop it?

ryash072007 commented 1 year ago

> Also, is there any way I can make the training process in sb3 run indefinitely until I stop it?

I figured out how to do this.
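The thread doesn't show the actual fix, but one common pattern for open-ended training is to run in chunks until Ctrl+C. In this sketch, `learn_chunk` is a stand-in for something like SB3's `model.learn(total_timesteps=n, reset_num_timesteps=False)`; that call is an assumption, not taken from this thread:

```python
# Sketch: keep training in fixed-size chunks until the user presses Ctrl+C.
def train_until_interrupted(learn_chunk, chunk_steps=10_000):
    total_steps = 0
    try:
        while True:
            learn_chunk(chunk_steps)   # stand-in for a real training call
            total_steps += chunk_steps
    except KeyboardInterrupt:
        pass                           # Ctrl+C lands here; fall through to return/save
    return total_steps
```

After the loop exits you would typically save the model before quitting.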

ryash072007 commented 1 year ago

> Where should the ppo_test.yaml be kept? In which folder? I get an error saying it cannot locate the file.

Can you help with this?

edbeeching commented 1 year ago

gdrl.interactive will run 1000 steps and you can load an example in the editor to see random actions being executed.

The ppo_test.yaml can be kept in the directory where you run your training. Alternatively, you can provide a path with --config_file.
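For example, reusing the BallChase path from earlier in this thread (adjust both paths to your own layout):

```shell
gdrl --trainer=rllib --env_path=examples/godot_rl_BallChase/bin/BallChase.exe --config_file=path/to/ppo_test.yaml
```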

ryash072007 commented 1 year ago

I keep getting this error and the training fails:

Error: No available node types can fulfill resource request {'CPU': 1.0, 'GPU': 1.0}. Add suitable node types to this cluster to resolve this issue.

Can you provide some information pertaining to this?

edbeeching commented 1 year ago

Ah, this is related to rllib. What are the specs of your machine? Do you have an NVIDIA GPU? If not, try setting the following in the yaml file:

num_gpus: 0

Training will be slower, but for simple problems it should be fast enough.

edbeeching commented 1 year ago

FYI I updated the custom env doc a bit: https://github.com/edbeeching/godot_rl_agents/blob/main/docs/CUSTOM_ENV.md

I think it is still not so clear. Would a video where I go through how to add an agent controller to a game be useful? I could try to look at that in the new year.

ryash072007 commented 1 year ago

> FYI I updated the custom env doc a bit: https://github.com/edbeeching/godot_rl_agents/blob/main/docs/CUSTOM_ENV.md
>
> I think it is still not so clear, would a video where I go through have to add an agent controller to a game be useful? I could try and look at that in the new year.

Yes please, that would be very helpful!