utiasDSL / gym-pybullet-drones

PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
https://utiasDSL.github.io/gym-pybullet-drones/
MIT License
1.21k stars · 351 forks

learn.py, expected performance, steps, and hardware? #177

Status: Open · MatthewCWeston opened this issue 11 months ago

MatthewCWeston commented 11 months ago

Hello. I'm attempting to run learn.py on the hover test environment, and wondering if anyone has had any luck with this so far.

I admittedly haven't tried 1E12 training steps quite yet, but after 1E6 steps, my reward graph looks like this:

[attached image: training reward curve]

For reference, a dummy policy that always returns the vector [.1,.1,.1,.1] achieves a reward of roughly -450.
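To make the baseline comparison concrete, here is a purely illustrative sketch of evaluating a constant-action policy. `ToyHoverEnv` is a hypothetical 1-D stand-in I wrote for this comment, not the repo's `HoverAviary`; the real environment's dynamics, action space, and reward all differ.

```python
# Illustrative sketch only: evaluating a constant-action baseline policy.
# ToyHoverEnv is a hypothetical stand-in, NOT the repo's HoverAviary.

class ToyHoverEnv:
    """Minimal 1-D stand-in: state is altitude, actions are four thrusts in [0, 1]."""
    def __init__(self, target_z=1.0, horizon=240):
        self.target_z = target_z
        self.horizon = horizon

    def reset(self):
        self.z, self.vz, self.t = 0.0, 0.0, 0
        return self.z

    def step(self, action):
        # Crude dynamics: mean thrust minus gravity offset, Euler-integrated.
        self.vz += (sum(action) / len(action) - 0.5) * 0.1
        self.z += self.vz * 0.02
        self.t += 1
        reward = -abs(self.z - self.target_z)  # closer to target => higher reward
        done = self.t >= self.horizon
        return self.z, reward, done

def evaluate_constant_policy(env, action, episodes=5):
    """Mean episode return of a policy that always emits the same action."""
    returns = []
    for _ in range(episodes):
        env.reset()
        total, done = 0.0, False
        while not done:
            _, r, done = env.step(action)
            total += r
        returns.append(total)
    return sum(returns) / len(returns)

baseline = evaluate_constant_policy(ToyHoverEnv(), [0.1, 0.1, 0.1, 0.1])
print(round(baseline, 2))
```

Any learned policy should at least beat this kind of constant-action return before its curve is worth reading.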

In practice, a typical evaluation run with this model looks like the path shown below:

[attached image: evaluation flight path]

I've tried both the standard, uncommented script and the commented script adapted to the current versions of this repository and SB3, and seen similar results in both cases. Does it simply require more timesteps, or more parallel CPUs/GPUs? It would be very helpful (and much appreciated) if someone could share the hardware configuration and loss curve from a successful run.
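One thing that makes "does it need more timesteps?" easier to answer is smoothing the episode-reward series before eyeballing it, since raw PPO returns are very noisy. This helper is purely illustrative (not part of the repo or SB3); the toy reward series below is fabricated just to show the usage.

```python
# Purely illustrative helper: smooth a noisy episode-reward series with a
# trailing moving average so a slow upward trend is easier to spot.

def moving_average(values, window=10):
    """Trailing moving average; early points use a shorter window."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Toy example: a noisy but slowly improving reward series (fabricated data).
rewards = [-450 + 0.5 * i + ((-1) ** i) * 20 for i in range(100)]
smoothed = moving_average(rewards, window=10)
print(smoothed[-1] > smoothed[0])  # True: the smoothed tail exceeds the head
```

If the smoothed curve is still climbing at the end of a run, more timesteps may help; if it has plateaued near the constant-action baseline, the issue is more likely elsewhere.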

abdul-mannan-khan commented 11 months ago

Same here. I ran it for 20,000,000 steps with the PPO algorithm and still had no success. Just one question, @MatthewCWeston: how did you get this reward function? I am not able to get it, and the previous functions (from the paper branch) throw many errors.
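For anyone comparing, hover rewards in this family of environments are typically distance-based shaping terms. The sketch below is only a guess at that general shape; it is NOT the repo's actual `_computeReward`, so check `HoverAviary` on the current `main` branch for the real definition.

```python
# Hypothetical sketch of a distance-based hover reward.
# NOT the repo's actual _computeReward -- see HoverAviary on main.
import math

def hover_reward(pos, target=(0.0, 0.0, 1.0)):
    """Reward peaks when the drone sits on the target and decays with distance."""
    dist = math.sqrt(sum((p - t) ** 2 for p, t in zip(pos, target)))
    return max(0.0, 2.0 - dist ** 2)  # clipped quadratic shaping (illustrative)

print(hover_reward((0.0, 0.0, 1.0)))  # on target -> 2.0
print(hover_reward((0.0, 0.0, 0.0)))  # 1 m below target -> 1.0
```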

JacopoPan commented 9 months ago

See #180 and the current `main` branch.