keon / deep-q-learning

Minimal Deep Q Learning (DQN & DDQN) implementations in Keras
https://keon.io/deep-q-learning
MIT License

Question: Is this some form of reward engineering? #34

Open · WorksWellWithOthers opened this issue 3 years ago

WorksWellWithOthers commented 3 years ago

The reward-shaping snippet below would break in any environment whose state does not unpack into exactly four values.

  1. If it's not essential, can we just remove this?
  2. If it is essential, would someone explain why and/or reference the paper it comes from? It seems specific to CartPole, and I wasn't sure whether the implementation's goal was only to solve CartPole.
    x, x_dot, theta, theta_dot = next_state  # the 4-value unpacking in question
    r1 = (env.x_threshold - abs(x)) / env.x_threshold - 0.8
    r2 = (env.theta_threshold_radians - abs(theta)) / env.theta_threshold_radians - 0.5
    reward = r1 + r2
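
If the shaping isn't essential, one way to keep it without breaking other environments would be to fall back to the env's native reward whenever the CartPole-specific state shape or attributes are absent. This is just a sketch of that idea, assuming the classic Gym API (pre-0.26 step signature); `shaped_reward` is a name I made up, not something in this repo:

    def shaped_reward(env, next_state, env_reward):
        # Try CartPole's engineered reward; fall back to the env's own reward.
        try:
            # CartPole's observation is (x, x_dot, theta, theta_dot); any other
            # state shape raises ValueError/TypeError on this unpacking, and a
            # non-CartPole env raises AttributeError on the threshold lookups.
            x, x_dot, theta, theta_dot = next_state
            r1 = (env.x_threshold - abs(x)) / env.x_threshold - 0.8
            r2 = (env.theta_threshold_radians - abs(theta)) / env.theta_threshold_radians - 0.5
            return r1 + r2
        except (ValueError, TypeError, AttributeError):
            return env_reward

Calling `reward = shaped_reward(env, next_state, reward)` after `env.step` would then degrade gracefully on environments that already return a meaningful reward.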
scprotz commented 3 years ago

@WorksWellWithOthers This is indeed a form of reward engineering, and it is specific to CartPole: it converts the returned state into a shaped numeric reward, where r1 rewards keeping the cart near the center of the track (small |x|) and r2 rewards keeping the pole near vertical (small |theta|). That gives a denser learning signal than CartPole's native +1 per step. Other environments would not need this specific shaping and would typically already return a meaningful reward of their own.
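
For comparison, the un-engineered loop would just keep CartPole's native +1-per-step reward, optionally with a terminal penalty. A minimal sketch, assuming the classic Gym API (pre-0.26) and a random action as a stand-in for the agent's policy; the -10 penalty is an illustrative value, not taken from a paper:

    import gym

    env = gym.make('CartPole-v1')
    state = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # stand-in for agent.act(state)
        next_state, reward, done, _ = env.step(action)
        # Native reward is +1 per surviving step; penalize only the terminal
        # transition rather than shaping every step.
        reward = reward if not done else -10
        state = next_state

Both signals can solve CartPole; the engineered r1/r2 version just provides graded feedback on every step, at the cost of tying the code to CartPole's state layout.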