Noahhobe / TeamOne2020


Inverse Reinforcement Learning from Failure #1

Open lwill001 opened 4 years ago

lwill001 commented 4 years ago

This article discusses Inverse Reinforcement Learning from Failure (IRLF). In standard Inverse Reinforcement Learning (IRL), an autonomous agent infers a reward function from correct (successful) demonstrations. IRLF instead lets the agent learn from failed demonstrations, which are often easier to produce. Since failed cases are easier to simulate, the paper investigates whether and how IRLF compares to IRL. In two experiments, IRLF both learns and generalizes faster than IRL. A third experiment examines contrasting, overlapping, and complementary "reward" scenarios across both failed and successful demonstration datasets.
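To make the IRL-vs-IRLF distinction concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of linear-reward, feature-matching IRL in which successful demonstrations pull the learner's feature expectations toward them while failed demonstrations push them away. All names, data, and the `beta` repulsion weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_features = 5, 3
FEATURES = rng.normal(size=(n_states, n_features))   # phi(s) for each state

# Empirical feature expectations from demonstrations (made-up visit counts).
mu_success = FEATURES[[0, 1, 1, 2]].mean(axis=0)      # states visited in successful demos
mu_failure = FEATURES[[3, 4, 4, 4]].mean(axis=0)      # states visited in failed demos

w = np.zeros(n_features)                              # reward weights: R(s) = w . phi(s)

def learner_feature_expectation(w, temperature=1.0):
    """Softmax 'policy' over states under reward w (a stand-in for full RL planning)."""
    rewards = FEATURES @ w
    probs = np.exp((rewards - rewards.max()) / temperature)
    probs /= probs.sum()
    return probs @ FEATURES

alpha, beta = 0.1, 0.5    # beta weighs the repulsion from failed demos (IRLF-style term)
for step in range(200):
    mu_learner = learner_feature_expectation(w)
    # Gradient: match successful demos, anti-match failed ones.
    # With beta = 0 this reduces to plain feature-matching IRL.
    grad = (mu_success - mu_learner) - beta * (mu_failure - mu_learner)
    w += alpha * grad

print("learned reward weights:", np.round(w, 3))
print("state rewards:", np.round(FEATURES @ w, 3))
```

Setting `beta = 0` recovers the success-only IRL update, which is one way to see why extra failure data can sharpen the learned reward without requiring more expert demonstrations.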

The article provides good background on both IRLF and IRL, including an introduction to their algorithms. It would be a solid read for someone looking to focus on reinforcement learning.