quancore / social-lstm

Social LSTM implementation in PyTorch

Prediction not autoregressive? #2

Closed Binbose closed 5 years ago

Binbose commented 6 years ago

During sampling of future trajectories, it seems like the ground truth is fed back into the network rather than the predicted point, so the network is not autoregressive. Is that correct? Below is the code line I am referring to.

https://github.com/quancore/social-lstm/blob/56a9b72607a2fb4c7fa16dfe6a9032411fcab13d/helper.py#L459

quancore commented 5 years ago

For the validation sampling, yes, that is the case. I was using the ground truth for prediction because it was readily available. However, you can change it to an autoregressive manner as well.
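For reference, a minimal sketch of the two feeding strategies; the function and variable names loosely mirror `helper.py` but are illustrative, not the repo's exact API:

```python
import torch

class DummyModel(torch.nn.Module):
    """Stand-in for the Social LSTM: any model mapping a frame of
    (num_peds, 2) positions to the next frame fits this sketch."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(2, 2)

    def forward(self, frame, hidden):
        return self.fc(frame), hidden

def rollout(model, x_seq, obs_len, pred_len, hidden=None, autoregressive=False):
    # x_seq: (obs_len + pred_len, num_peds, 2) ground-truth positions.
    ret_x_seq = x_seq.clone()
    for t in range(obs_len - 1, obs_len + pred_len - 1):
        # Teacher forcing feeds the ground-truth frame at every step;
        # autoregressive mode feeds back the previously predicted frame.
        frame = ret_x_seq[t] if autoregressive else x_seq[t]
        out, hidden = model(frame, hidden)
        ret_x_seq[t + 1] = out
    return ret_x_seq
```

With `autoregressive=False` this corresponds to the validation behavior discussed above; flipping the flag is the `x_seq` → `ret_x_seq` swap.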

Binbose commented 5 years ago

If I change it to autoregressive behavior (basically just swapping x_seq with ret_x_seq), though, the output becomes very unstable and doesn't really make sense anymore (training and running everything with your default settings). Were you able to achieve good results in autoregressive mode?

quancore commented 5 years ago

It should work as well, just worse than prediction from the ground truth. If the model is simple and unable to predict accurately, the error can accumulate and produce a poor sequence. But over consecutive epochs it should give better sequences as the accuracy increases.

As you can see here: https://github.com/quancore/social-lstm/blob/56a9b72607a2fb4c7fa16dfe6a9032411fcab13d/test.py#L314 In the test script, we use autoregression to predict the unobserved part, and it works. The process is very similar.
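The error-accumulation point can be seen with a toy one-dimensional example (nothing here is from the repo; the dynamics and the biased predictor are made up for illustration):

```python
def rollouts(true_step, model_step, x0, n_steps):
    """Compare teacher-forced and autoregressive rollouts of a
    one-step predictor against the true trajectory."""
    truth = [x0]
    for _ in range(n_steps):
        truth.append(true_step(truth[-1]))
    # Teacher forcing: every prediction starts from the true previous point,
    # so the per-step model error never compounds.
    tf = [x0] + [model_step(truth[t]) for t in range(n_steps)]
    # Autoregressive: every prediction starts from the previous prediction,
    # so the per-step error feeds back into the input and accumulates.
    ar = [x0]
    for _ in range(n_steps):
        ar.append(model_step(ar[-1]))
    return truth, tf, ar

# True dynamics grow by 10% per step; the "model" has a small constant bias.
truth, tf, ar = rollouts(lambda x: 1.1 * x,
                         lambda x: 1.1 * x + 0.01,
                         x0=1.0, n_steps=10)
```

After 10 steps the teacher-forced error is still just the one-step bias (0.01), while the autoregressive error has compounded to more than ten times that, which matches the unstable outputs reported above.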

Binbose commented 5 years ago

Hm, I trained it with the default settings (batch size 5, 30 epochs), and it doesn't really seem to work. Was the default number of epochs sufficient for you to get good results, or do I have to increase it?

quancore commented 5 years ago

I am unable to reproduce your training session, so I cannot tell whether it is working or not. However, the default parameters should be enough for good training. If not, you can increase the number of epochs.