Closed Miner34dev closed 1 year ago
Ah, there was a refactor; this should be `_player.to_local(...)` now, I think. Thanks for finding this. If you update the PR, I will merge the change.
Nope
Ah ok, thanks. I will take a look in more detail tonight if I find the time.
It worked even without to_local on my computer.
Did the policy converge?
Which policy?
Ah, perhaps you are not familiar with RL terminology. The policy is the neural network (running in Python) that controls your agent.
By "did the policy converge", I am asking if after training for some steps, did the agent learn a behavior that solves the task?
I recommend our Deep RL course if you want to learn more about Deep RL: https://huggingface.co/learn/deep-rl-course/unit0/introduction
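As an informal illustration of what "did the policy converge" means in practice, here is a minimal plain-Python sketch of a plateau check on episode rewards. This is not part of any RL library; the helper name, `window`, and `tolerance` values are all arbitrary assumptions for the sketch.

```python
def has_converged(rewards, window=20, tolerance=0.05):
    """Heuristic convergence check: compare the mean episode reward
    over the most recent `window` episodes against the mean over the
    window before it. If the relative change is within `tolerance`,
    treat training as having plateaued ("converged")."""
    if len(rewards) < 2 * window:
        return False  # not enough episodes to compare two windows
    recent = sum(rewards[-window:]) / window
    previous = sum(rewards[-2 * window:-window]) / window
    return abs(recent - previous) <= tolerance * max(abs(previous), 1e-8)
```

A flat reward curve (e.g. forty episodes all scoring 1.0) would register as converged, while a reward that is still climbing steadily would not.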
After about 10 minutes they started bouncing it about half the time, but I don't know if that is normal...
This is the result of the training:
----------------------------------------
| time/ | |
| fps | 64 |
| iterations | 782 |
| time_elapsed | 3089 |
| total_timesteps | 200192 |
| train/ | |
| approx_kl | 0.01714884 |
| clip_fraction | 0.214 |
| clip_range | 0.2 |
| entropy_loss | 0.561 |
| explained_variance | 0.00917 |
| learning_rate | 0.0003 |
| loss | 0.00391 |
| n_updates | 7810 |
| policy_gradient_loss | -0.00432 |
| std | 0.138 |
| value_loss | 0.0501 |
----------------------------------------
Is it normal?
The reason to use `to_local` is that the observation is far simpler for the AI to understand and learn from. I am sure it will still work in this setting, but it may take longer. I don't have the results from when I last ran this example. I have just refactored the plugin so that `to_local` should work again. It will take a while to be available on the Godot Asset Library, but you can install from source if you would like.
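To sketch why a local-frame observation is easier to learn from: expressing a target's position relative to the agent (translate by the agent's position, then rotate by the inverse of its orientation) makes the observation invariant to where the pair sits in the world. This is a plain-Python 2D analogue, not Godot's actual `to_local` implementation, and the rotation convention here is an assumption.

```python
import math

def to_local_2d(agent_pos, agent_rotation, target_pos):
    """Express target_pos in the agent's local frame:
    translate so the agent is at the origin, then rotate
    by -agent_rotation (standard counter-clockwise convention)."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    c = math.cos(-agent_rotation)
    s = math.sin(-agent_rotation)
    return (dx * c - dy * s, dx * s + dy * c)

# An agent at (10, 0) rotated 90° sees a target at (10, 5) at roughly
# (5, 0) in its own frame: the same relative observation it would get
# anywhere in the world with the same relative geometry.
local = to_local_2d((10.0, 0.0), math.pi / 2, (10.0, 5.0))
```

With global coordinates, the policy would have to learn the same behavior separately for every absolute position and heading; in the local frame, one relative geometry maps to one observation.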
Oh, ok. Then I can close this pull request now.
The `to_local()` function doesn't exist, at least in Godot 4 (personally tested).