HumanCompatibleAI / imitation

Clean PyTorch implementations of imitation and reward learning algorithms
https://imitation.readthedocs.io/
MIT License

SyntheticGatherer often gives nearly deterministic feedback #821

Open timokau opened 12 months ago

timokau commented 12 months ago

Bug description

The current implementation of the SyntheticGatherer in the preference comparisons module often chooses the trajectory with the higher reward nearly deterministically. This is because the Boltzmann-rational policy (softmax) used by the SyntheticGatherer is very sensitive to the scale of the utilities, and the sums of rewards used as utilities tend to be quite large. The gatherer effectively implements the following equation for feedback:

$$ P ( A \succ B) = \frac{\exp(\beta R(A))}{\exp(\beta R(A)) + \exp(\beta R(B))} $$

Here $A$ and $B$ are trajectories, $R(A)$ is the return of trajectory $A$, and $\beta$ is the temperature (rationality) coefficient. Some example values with $\beta = 1$ illustrate the problem:

| R(A) | R(B) | Difference | P(A > B) | P(B > A) |
|------|------|------------|----------|----------|
| 1 | 1 | 0 | 0.5 | 0.5 |
| 1 | 2 | 1 | 0.27 | 0.73 |
| 1 | 3 | 2 | 0.12 | 0.88 |
| 1 | 4 | 3 | 0.05 | 0.95 |
| 1 | 5 | 4 | 0.02 | 0.98 |
| 1 | 7 | 6 | 0.0 | 1.0 |
| 1 | 8 | 7 | 0.0 | 1.0 |
| 1 | 9 | 8 | 0.0 | 1.0 |
| 1 | 10 | 9 | 0.0 | 1.0 |

As you can see, once the difference in returns exceeds roughly 5, the simulated feedback is nearly deterministic. Note that the probability depends only on the difference in returns; the absolute values are irrelevant.
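For concreteness, here is a small standalone sketch (not the library's code; the helper name `boltzmann_preference` is made up) that reproduces the table above using the numerically stable sigmoid form of the softmax:

```python
import numpy as np

def boltzmann_preference(return_a: float, return_b: float, beta: float = 1.0) -> float:
    """P(A > B) under the Boltzmann-rational model.

    exp(b*R(A)) / (exp(b*R(A)) + exp(b*R(B))) == 1 / (1 + exp(-b*(R(A) - R(B)))),
    i.e. a sigmoid of the return gap, so only the difference matters.
    """
    return 1.0 / (1.0 + np.exp(-beta * (return_a - return_b)))

# Reproduce the rows of the table: R(A) fixed at 1, R(B) varied.
for r_b in [1, 2, 3, 4, 5, 7, 8, 9, 10]:
    p_a = boltzmann_preference(1.0, float(r_b))
    print(f"R(A)=1 R(B)={r_b}: P(A>B)={p_a:.2f} P(B>A)={1 - p_a:.2f}")
```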

To fix this we could either normalize the returns or move away from the Boltzmann-rational model to something like the B-Pref oracle teachers.
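A minimal sketch of the normalization option, assuming returns are standardized across the batch of trajectory pairs before the softmax is applied; `normalized_preferences` and its interface are hypothetical and not part of the existing SyntheticGatherer:

```python
import numpy as np

def normalized_preferences(
    returns_a: np.ndarray, returns_b: np.ndarray, beta: float = 1.0
) -> np.ndarray:
    """P(A > B) for each trajectory pair, with returns rescaled by the batch
    standard deviation so that beta has a consistent effect regardless of the
    raw reward scale. Hypothetical sketch, not the current gatherer."""
    all_returns = np.concatenate([returns_a, returns_b])
    std = all_returns.std() + 1e-8  # avoid division by zero
    gap = (returns_a - returns_b) / std
    return 1.0 / (1.0 + np.exp(-beta * gap))

# Example: the same return gaps as in the table above, now rescaled,
# so the feedback stays stochastic even for large raw differences.
print(normalized_preferences(np.ones(5), np.array([2.0, 4.0, 6.0, 8.0, 10.0])))
```

The B-Pref-style oracle teachers would be a larger change, so the normalization route is the one sketched here.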

ernestum commented 11 months ago

Hi @timokau. Thanks a lot for this hint! We will review the preference comparisons (PC) implementation in the coming weeks, and this information will be very valuable for that review!