hpi-sam / rl-4-self-repair

Reinforcement Learning Models for Online Learning of Self-Repair and Self-Optimization

Two estimators do not improve the tabular algorithms for our environment. #20

Open 2start opened 4 years ago

2start commented 4 years ago

Two estimators (as in Double Q-learning) are proposed to counteract maximization bias in certain environment setups. Suppose one state has transitions to n successor states, each of which yields a large reward only with low probability. For large n it is very likely that the large reward is observed at least once, which inflates the estimated Q-value of that transition. The epsilon-greedy strategies of Q-learning and SARSA then keep picking the transition into that state over and over, and it takes a long time for the Q-value to converge back to its true expected value.
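
For concreteness, here is a minimal tabular Double Q-learning sketch on a toy MDP mirroring the setup described above: a START state whose risky action leads to a state with n actions that each pay a large reward with low probability. All names, constants, and the toy environment are illustrative assumptions, not code from this repository:

```python
import random
from collections import defaultdict

# Toy MDP: from START, action 0 is a safe terminal move (reward 0) and
# action 1 leads to RISKY, whose n actions each pay a large reward with
# low probability on top of a small cost. The true expected value of
# entering RISKY is slightly negative, but a single max-based estimator
# overestimates it and keeps revisiting it.
ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.1
N_RISKY_ACTIONS = 10
P_BIG, BIG_REWARD, COST = 0.05, 1.0, -0.06  # E[reward] = 0.05*1 - 0.06 < 0

START, RISKY, DONE = "start", "risky", None

def n_actions(state):
    return 2 if state == START else N_RISKY_ACTIONS

def step(state, action):
    """Return (next_state, reward); DONE marks a terminal transition."""
    if state == START:
        return (DONE, 0.0) if action == 0 else (RISKY, 0.0)
    reward = COST + (BIG_REWARD if random.random() < P_BIG else 0.0)
    return DONE, reward

q1 = defaultdict(float)
q2 = defaultdict(float)

def greedy(q, state):
    return max(range(n_actions(state)), key=lambda a: q[(state, a)])

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.randrange(n_actions(state))
    # Act on the sum of both estimators.
    return max(range(n_actions(state)),
               key=lambda a: q1[(state, a)] + q2[(state, a)])

for episode in range(20_000):
    state = START
    while state is not DONE:
        action = epsilon_greedy(state)
        nxt, reward = step(state, action)
        # Flip a coin: one estimator selects the greedy next action,
        # the *other* evaluates it. This decoupling removes the
        # positive bias of bootstrapping from max_a Q[s', a].
        if random.random() < 0.5:
            if nxt is DONE:
                target = reward
            else:
                target = reward + GAMMA * q2[(nxt, greedy(q1, nxt))]
            q1[(state, action)] += ALPHA * (target - q1[(state, action)])
        else:
            if nxt is DONE:
                target = reward
            else:
                target = reward + GAMMA * q1[(nxt, greedy(q2, nxt))]
            q2[(state, action)] += ALPHA * (target - q2[(state, action)])
        state = nxt

print("true value of entering RISKY: %.3f" % (COST + P_BIG * BIG_REWARD))
print("learned value:", (q1[(START, 1)] + q2[(START, 1)]) / 2)
```

The key point is the decoupled update: the table that selects the greedy next action is never the one that evaluates it, so a lucky sample in one table no longer inflates its own bootstrap target.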