wayveai / mile

PyTorch code for the paper "Model-Based Imitation Learning for Urban Driving".
MIT License

Confused about ProbabilisticLoss #31

Open wzn0828 opened 9 months ago

wzn0828 commented 9 months ago

[screenshot of the `ProbabilisticLoss` code from `mile/losses.py`]

In mile.losses.ProbabilisticLoss, I understand that the first posterior distribution should be close to N(0, I), but the code is confusing to me. As shown in the screenshot, posterior_log_sigma has had its first element sliced off, so why is the first element still selected in the line computing first_kl?

anthonyhu commented 8 months ago

Hello! Empirically, we found that putting no constraint on the first distribution (as opposed to, e.g., making it match a standard normal N(0, I)) slightly improves results. That's why there is no loss on the first posterior distribution.
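
In other words, the KL between posterior and prior is only penalised from the second timestep onward. A minimal pure-Python sketch of that idea (not the repo's actual `ProbabilisticLoss` code; the function names and averaging are illustrative assumptions) might look like:

```python
import math

def kl_diag_gauss(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL(N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)) for scalars.
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def probabilistic_loss(post_mu, post_sigma, prior_mu, prior_sigma):
    # Skip t=0: no constraint is placed on the first posterior distribution.
    # Inputs are per-timestep lists of scalar means and standard deviations.
    total = 0.0
    for t in range(1, len(post_mu)):
        total += kl_diag_gauss(post_mu[t], post_sigma[t],
                               prior_mu[t], prior_sigma[t])
    # Average over the timesteps that actually contribute a loss term.
    return total / max(len(post_mu) - 1, 1)
```

When posterior and prior coincide at every timestep the loss is zero, and perturbing only the t=0 distribution leaves the loss unchanged, which matches the behaviour described above.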