farbod1277 opened this issue 3 years ago
Hello @farbod1277,

Modifying _computeReward is as simple as re-implementing _computeReward (a function returning a scalar) in any non-abstract Aviary class/environment: once you have figured out how to learn the "correct" reward, that is the function to implement. However, to do so for IRL (or imitation learning), I imagine you first want a way to collect trajectories/state(obs)-action pairs from a known, working controller.
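For instance, a minimal sketch of what that override could look like (untested; the aviary class, import path, and exact _computeReward signature vary between versions of this repo, and reward_model here is just a stand-in for whatever you learn from demonstrations):

```python
import numpy as np
# Import path may differ depending on the version of gym-pybullet-drones.
from gym_pybullet_drones.envs.single_agent_rl.HoverAviary import HoverAviary


class IRLRewardAviary(HoverAviary):
    """Aviary whose reward comes from a learned model instead of a hand-coded formula."""

    def __init__(self, reward_model=None, **kwargs):
        super().__init__(**kwargs)
        # reward_model: any callable mapping a drone state vector to a scalar,
        # e.g. a small network trained from the expert demonstrations.
        self.reward_model = reward_model

    def _computeReward(self, *args):
        # Depending on the version, _computeReward may receive the observation as an
        # argument; here the state is simply queried from the environment itself.
        state = self._getDroneStateVector(0)
        if self.reward_model is None:
            return 0.0  # placeholder until a reward model is available
        return float(self.reward_model(np.asarray(state)))
```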
For that, I'd start from the script fly.py, in which the actions for N quadcopters in the aviary CtrlAviary (whose reward is not implemented, see line 168) are picked by the PID controller DSLPIDControl:
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/examples/fly.py#L152-L158
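A collection loop in the spirit of fly.py might look like this (hedged sketch: the import paths, the DSLPIDControl constructor, and the dictionary-keyed observation/action format follow the 2021 layout linked above and differ in newer releases; the file name and waypoints are just examples):

```python
import numpy as np
# Import paths follow the repo layout used elsewhere in this thread.
from gym_pybullet_drones.envs.BaseAviary import DroneModel
from gym_pybullet_drones.envs.CtrlAviary import CtrlAviary
from gym_pybullet_drones.control.DSLPIDControl import DSLPIDControl

env = CtrlAviary(drone_model=DroneModel.CF2X, num_drones=1, gui=False)
ctrl = DSLPIDControl(env)   # newer versions take drone_model instead of the env
obs = env.reset()

states, actions = [], []    # expert (state, action) pairs for IRL/imitation learning
TARGET_POS = np.array([[0, 0, .5], [.5, 0, .5], [.5, .5, .5]])   # example waypoints

for i in range(5 * env.SIM_FREQ):                 # 5 simulated seconds
    wp = TARGET_POS[min(i // env.SIM_FREQ, len(TARGET_POS) - 1)]
    # Same call pattern as fly.py: the PID controller maps the current state to motor RPMs.
    rpm, _, _ = ctrl.computeControlFromState(control_timestep=env.TIMESTEP,
                                             state=obs["0"]["state"],
                                             target_pos=wp)
    states.append(obs["0"]["state"])
    actions.append(rpm)
    obs, _, _, _ = env.step({"0": rpm})

env.close()
np.savez("expert_trajectories.npz", states=np.array(states), actions=np.array(actions))
```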
Hi @JacopoPan,
Thank you very much for your response. Yes, I was thinking of using an Xbox controller or the arrow keys to fly the drone manually via the PID controller, saving those runs as my expert trajectories, and then using imitation learning. Would that be at all possible with this setup? I would need to write a pipeline to interface the simulator with a controller somehow. Any ideas?
Thanks again!!
In fly.py, the PID controllers track x, y, z positions, written in a matrix TARGET_POS of shape (waypoints, 3):
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/examples/fly.py#L68-L69
I'd proceed in steps: (i) first write a separate program that translates your controller inputs into that format, (ii) plug these new trajectories into fly.py, and (iii) try to do it online, writing the desired waypoints from the external input as they are tracked.
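A toy version of step (i), turning key presses into a (waypoints, 3) array shaped like TARGET_POS (the key bindings, step size, and fixed altitude are arbitrary choices for illustration, not anything defined by the repo):

```python
import numpy as np

STEP = 0.05      # metres moved per key press (arbitrary)
ALTITUDE = 0.5   # constant flight height for this example

# Illustrative key-to-displacement mapping; plug in the gamepad/keyboard library of your choice.
KEY_TO_DELTA = {
    "up":    np.array([ STEP, 0.0, 0.0]),
    "down":  np.array([-STEP, 0.0, 0.0]),
    "left":  np.array([0.0,  STEP, 0.0]),
    "right": np.array([0.0, -STEP, 0.0]),
}

def keys_to_waypoints(key_presses, start=np.array([0.0, 0.0, ALTITUDE])):
    """Turn a sequence of key presses into a (waypoints, 3) array like TARGET_POS."""
    waypoints = [start]
    for key in key_presses:
        waypoints.append(waypoints[-1] + KEY_TO_DELTA.get(key, np.zeros(3)))
    return np.vstack(waypoints)

TARGET_POS = keys_to_waypoints(["up", "up", "left", "up"])
print(TARGET_POS.shape)   # (5, 3)
```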
Thanks again @JacopoPan. What if I wanted to control the drone directly with attitude and thrust commands instead of waypoints?
fly.py uses computeControlFromState:
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/examples/fly.py#L153
which simply wraps computeControl:
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/gym_pybullet_drones/control/BaseControl.py#L58
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/gym_pybullet_drones/control/BaseControl.py#L87-L96
computeControl computes the thrust and desired roll/pitch/yaw from the waypoint (using the position control method) and feeds those into the attitude control to obtain the desired motor RPMs:
https://github.com/utiasDSL/gym-pybullet-drones/blob/ed9aaa54c65ac1836be2b292b2eec6c64b622fc0/gym_pybullet_drones/control/DSLPIDControl.py#L119-L134
I suppose you'd want to talk directly to this latter method.
@JacopoPan thanks again! Just wondering: if I were to control the drone with positional control via computeControl, how would I retrieve the following intermediary variables: the thrust in the z direction and the torques about the x, y and z axes? I would want to retrieve these intermediary variables from computeControl and use them with the actionSpace of a DynAviary to control a drone, instead of doing it directly with RPMs. This way I can record trajectories where the actionSpace is thrusts and torques instead of RPMs. I hope that makes sense. Any help is appreciated.
What I'm thinking of doing right now is retrieving the aforementioned variables, thrust and target_torques, from inside _dslPIDAttitudeControl, and then feeding them into the .step() method of a DynAviary. Would that be correct?
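Roughly along these lines (a hypothetical sketch: the 4-element [thrust, x-torque, y-torque, z-torque] action, the dictionary-keyed step() call, and the constructor arguments are assumptions based on how the other aviaries are used in this thread, not verified against DynAviary, and my_controller_step stands in for the modified controller):

```python
import numpy as np
from gym_pybullet_drones.envs.DynAviary import DynAviary

def my_controller_step(state):
    """Hypothetical placeholder for the modified PID controller described above:
    it should return (thrust, target_torques) computed from the drone state."""
    return 0.3, np.zeros(3)   # constant thrust, zero torques, just to exercise the interface

env = DynAviary(num_drones=1, gui=False)
obs = env.reset()

for i in range(5 * env.SIM_FREQ):
    thrust, target_torques = my_controller_step(obs["0"]["state"])
    action = np.hstack([thrust, target_torques])      # assumed 4-element action per drone
    obs, _, _, _ = env.step({"0": action})

env.close()
```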
I never tried this, but it potentially makes sense to me. DynAviary still converts the target thrust/torques into RPMs by solving a non-negative least-squares problem (but I admit it is probably the least tested of the environments).
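For intuition, the conversion is essentially the following (a standalone sketch of the idea behind utils.nnlsRPM, with made-up constants and a "plus"-configuration mixer; the real coefficients come from the drone's URDF, and the utility also clips its inputs, as the warnings below show):

```python
import numpy as np
from scipy.optimize import nnls

# Made-up CF2-like constants, for illustration only.
KF, KM, L = 3.16e-10, 7.94e-12, 0.0397   # thrust coeff., torque coeff., arm length

# [total thrust, x torque, y torque, z torque] = A @ w_sq, with w_sq the squared rotor
# speeds and motors placed at +x, +y, -x, -y with alternating spin directions.
A = np.array([[ KF,      KF,      KF,      KF    ],
              [ 0.0,     L * KF,  0.0,    -L * KF],
              [-L * KF,  0.0,     L * KF,  0.0   ],
              [-KM,      KM,     -KM,      KM    ]])

target = np.array([0.3, 0.0, 0.0, 0.0])    # desired thrust [N] and torques [Nm]
w_sq, residual = nnls(A, target)           # non-negative least squares for the squared speeds
rotor_speeds = np.sqrt(w_sq)               # rad/s or RPM, depending on how KF/KM were calibrated
print(rotor_speeds, residual)
```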
I tried it and got the following issue:
[WARNING] iter 1205 in utils.nnlsRPM(), unfeasible pitch torque 3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1205 in utils.nnlsRPM(), unfeasible squared rotor speeds, using NNLS
Negative sq. rotor speeds: [-90183354888115.02, 90183773559175.77, 90183773559175.77, -90183354888115.02] Normalized: [-0.50, 0.50, 0.50, -0.50]
NNLS: [0.00, 60122655263137.41, 60122655263137.43, 0.00] Normalized: [0.00, 0.71, 0.71, 0.00] Residual: 208269536884307.03
[WARNING] iter 1210 in utils.nnlsRPM(), unfeasible thrust 1429.12 outside range [0, 0.60]
[WARNING] iter 1210 in utils.nnlsRPM(), unfeasible roll torque 3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1210 in utils.nnlsRPM(), unfeasible pitch torque 3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1210 in utils.nnlsRPM(), unfeasible yaw torque -3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1210 in utils.nnlsRPM(), unfeasible squared rotor speeds, using NNLS
Negative sq. rotor speeds: [101886302823643.95, 80742096258340.28, 101886302823643.94, -279992160636241.31] Normalized: [0.31, 0.25, 0.31, -0.86]
NNLS: [101886302823643.94, 0.00, 101886302823643.94, 0.00] Normalized: [0.71, 0.00, 0.71, 0.00] Residual: 457744885724003.88
[WARNING] iter 1215 in utils.nnlsRPM(), unfeasible roll torque -3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1215 in utils.nnlsRPM(), unfeasible pitch torque -3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1215 in utils.nnlsRPM(), unfeasible yaw torque 3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1215 in utils.nnlsRPM(), unfeasible squared rotor speeds, using NNLS
Negative sq. rotor speeds: [-100755667506297.23, -79611460940993.56, -100755667506297.22, 281122795953588.00] Normalized: [-0.31, -0.24, -0.31, 0.86]
NNLS: [0.00, 0.00, 0.00, 254585642306590.12] Normalized: [0.00, 0.00, 0.00, 1.00] Residual: 313233026428473.94
[WARNING] iter 1220 in utils.nnlsRPM(), unfeasible roll torque -3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1220 in utils.nnlsRPM(), unfeasible pitch torque -3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1220 in utils.nnlsRPM(), unfeasible yaw torque 3200.00 outside range [-0.01, 0.01]
[WARNING] iter 1220 in utils.nnlsRPM(), unfeasible squared rotor speeds, using NNLS
Negative sq. rotor speeds: [-100755667506297.23, -79611460940993.56, -100755667506297.22, 281122795953588.00] Normalized: [-0.31, -0.24, -0.31, 0.86]
NNLS: [0.00, 0.00, 0.00, 254585642306590.12] Normalized: [0.00, 0.00, 0.00, 1.00] Residual: 313233026428473.94
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1653: RuntimeWarning: overflow encountered in multiply
multiply(a1, b2, out=cp0)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1654: RuntimeWarning: overflow encountered in multiply
tmp = array(a2 * b1)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1655: RuntimeWarning: invalid value encountered in subtract
cp0 -= tmp
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1656: RuntimeWarning: overflow encountered in multiply
multiply(a2, b0, out=cp1)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1657: RuntimeWarning: overflow encountered in multiply
multiply(a0, b2, out=tmp)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1658: RuntimeWarning: invalid value encountered in subtract
cp1 -= tmp
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1659: RuntimeWarning: overflow encountered in multiply
multiply(a0, b1, out=cp2)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1660: RuntimeWarning: overflow encountered in multiply
multiply(a1, b0, out=tmp)
/home/farbod/.local/lib/python3.8/site-packages/numpy/core/numeric.py:1661: RuntimeWarning: invalid value encountered in subtract
cp2 -= tmp
/home/farbod/git/gym-pybullet-drones/gym_pybullet_drones/control/DSLPIDControlDyn.py:194: UserWarning: Gimbal lock detected. Setting third angle to zero since it is not possible to uniquely determine all angles.
target_euler = (Rotation.from_matrix(target_rotation)).as_euler('XYZ', degrees=False)
Traceback (most recent call last):
File "expert_trajectories.py", line 135, in
action[str(j)], _, _ = ctrl[j].computeControlFromState(control_timestep=CTRL_EVERY_N_STEPS*env.TIMESTEP,
File "/home/farbod/git/gym-pybullet-drones/gym_pybullet_drones/control/BaseControl.py", line 87, in computeControlFromState
return self.computeControl(control_timestep=control_timestep,
File "/home/farbod/git/gym-pybullet-drones/gym_pybullet_drones/control/DSLPIDControlDyn.py", line 127, in computeControl
torques = self._dslPIDAttitudeControl(control_timestep,
File "/home/farbod/git/gym-pybullet-drones/gym_pybullet_drones/control/DSLPIDControlDyn.py", line 233, in _dslPIDAttitudeControl
target_rotation = (Rotation.from_quat([w, x, y, z])).as_matrix()
File "rotation.pyx", line 624, in scipy.spatial.transform.rotation.Rotation.from_quat
File "rotation.pyx", line 528, in scipy.spatial.transform.rotation.Rotation.__init__
ValueError: Found zero norm quaternions in `quat`.
The control works fine when the drone is not required to do any rotations, i.e. going straight up. As soon as it is required to rotate to move along the x and y axes, I get the error above in _dslPIDAttitudeControl in my class DSLPIDControlDyn, which is the same as DSLPIDControl, the only difference being that it returns thrusts and torques as its control output instead of RPMs, which are then fed into a DynAviary env instead of a CtrlAviary env. Any ideas on how to fix this would be appreciated! Cheers
I fixed it by adding a scaling factor for the thrust and the torques after realising that their scales and clamps are completely different to what DynAviary expects.
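Something like the following rescaling/clipping step, assuming the physical limits quoted in the warnings above (the constant names and scale factors are placeholders; the real limits should be read from the environment/URDF rather than hard-coded, and the scale factors depend on the internal units of the modified DSLPIDControl):

```python
import numpy as np

# Limits taken from the warning messages above (placeholders).
MAX_THRUST = 0.60       # [N]   total thrust, reported range [0, 0.60]
MAX_XY_TORQUE = 0.01    # [Nm]  roll/pitch torque, reported range [-0.01, 0.01]
MAX_Z_TORQUE = 0.01     # [Nm]  assumed similar range for yaw

def to_dyn_action(raw_thrust, raw_torques, thrust_scale=1.0, torque_scale=1.0):
    """Rescale the controller's raw outputs into physical units and clip them
    into ranges that DynAviary/nnlsRPM can realise."""
    thrust = np.clip(raw_thrust * thrust_scale, 0.0, MAX_THRUST)
    torques = np.clip(np.asarray(raw_torques) * torque_scale,
                      [-MAX_XY_TORQUE, -MAX_XY_TORQUE, -MAX_Z_TORQUE],
                      [ MAX_XY_TORQUE,  MAX_XY_TORQUE,  MAX_Z_TORQUE])
    return np.hstack([thrust, torques])
```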
I see, I think it can make sense. If you run a few tests and eventually want to create a PR for your modified DSLPIDControlDyn (I'd recommend subclassing DSLPIDControl as much as possible), contributions are always welcome!
Hi,
Firstly, thanks for putting together such an awesome project!
I've been playing around with the singleagent.py problems recently and was wondering if there is any way to incorporate demonstration learning / inverse reinforcement learning into these tasks. Instead of hard-coding the reward functions in _computeReward, I wanted to try learning a reward function from expert demonstrations (see: Let's do IRL). I'd appreciate it if you could give me some pointers on how to modify _computeReward and the learning process to include an IRL_reward.
Cheers