quancore / social-lstm

Social LSTM implementation in PyTorch

In the training stage, are the time positions of the input/output sequences the same? #11

Closed RowNine closed 5 years ago

RowNine commented 5 years ago

I noticed that the loss function in the training stage is called like this: `loss = Gaussian2DLikelihood(outputs, x_seq, PedsList_seq, lookup_seq)`. The loss is computed at the same time position for `outputs` and `x_seq`. This confused me, because for a prediction task the output at time t should correspond to the input at time t+1. I also noticed that in another implementation, https://github.com/kabraxis/Social-LSTM-VehicleTrajectory, the loss function is called like this: `loss = Gaussian2DLikelihood(outputs, nodes[1:], nodesPresent[1:], args.pred_length)`. Which one is correct?
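For reference, the difference between the two calls can be made concrete with a minimal, self-contained sketch. The function below is a simplified stand-in for `Gaussian2DLikelihood` (the shapes and parameterization are assumptions for illustration, not either repo's actual code):

```python
import math
import torch

def gaussian_2d_nll(params, targets):
    """Simplified 2D Gaussian negative log-likelihood.

    params:  (T, 5) tensor of (mu_x, mu_y, log_sx, log_sy, rho_raw)
    targets: (T, 2) tensor of observed (x, y) positions

    A stand-in for Gaussian2DLikelihood, not the repos' actual code.
    """
    mux, muy, log_sx, log_sy, rho_raw = params.unbind(-1)
    sx, sy = torch.exp(log_sx), torch.exp(log_sy)
    rho = torch.tanh(rho_raw)                      # keep |rho| < 1
    nx = (targets[..., 0] - mux) / sx
    ny = (targets[..., 1] - muy) / sy
    z = nx ** 2 + ny ** 2 - 2 * rho * nx * ny
    nll = (z / (2 * (1 - rho ** 2))
           + torch.log(sx * sy)
           + 0.5 * torch.log(1 - rho ** 2)
           + math.log(2 * math.pi))
    return nll.mean()

T = 10
outputs = torch.randn(T, 5)   # model outputs for frames 0 .. T-1
x_seq = torch.randn(T, 2)     # observed positions for frames 0 .. T-1

# Same-time alignment (the training call quoted above):
# the prediction made at frame t is scored against frame t itself.
loss_same = gaussian_2d_nll(outputs, x_seq)

# Next-step alignment (the kabraxis call): the prediction made at
# frame t is scored against frame t+1, so the last prediction has
# no target and is dropped.
loss_shifted = gaussian_2d_nll(outputs[:-1], x_seq[1:])
```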

RowNine commented 5 years ago

And why, in the validation stage, is the loss function called like this: `loss = Gaussian2DLikelihood(outputs, x_seq[1:], PedsList_seq[1:], lookup_seq)`?
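Continuing the toy setup from the sketch above: the validation call slices only the targets, which leaves an off-by-one length question (this is a hypothetical illustration of the quoted call, not a claim about the repo's internals):

```python
# outputs has T frames, but x_seq[1:] has only T-1, so either the last
# prediction must be dropped (outputs[:-1]) or the loss must iterate
# only over the shorter target sequence. Which of the two
# Gaussian2DLikelihood does internally is exactly the ambiguity here.
loss_val = gaussian_2d_nll(outputs[:-1], x_seq[1:])
```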

zhaone commented 4 years ago

It also confused me... In addition, I can't understand why, in https://github.com/quancore/social-lstm/blob/9fa3007aa8bdcfc57596699f4fe4645d44b40d05/utils.py#L416, `seq_target_frame_data` (which I take to be the label) is only one frame later than `seq_source_frame_data` (which I take to be the training sample). Also, in train.py a command-line argument called `pred_length`, representing the prediction horizon, appears only once, at its definition, and is never used afterwards. I don't see how the model can work without this parameter. Am I looking at the wrong branch?
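For what it's worth, a one-frame shift between source and target is the standard construction for next-step sequence training. A minimal sketch (the variable names mirror the ones quoted above, but this is illustrative, not the repo's actual utils.py logic):

```python
import torch

# Toy trajectory data: T frames, each with (x, y) per pedestrian.
T = 8
seq_all_frame_data = torch.rand(T, 3, 2)          # (frames, peds, coords)

# Next-step pairs: the target is the same sequence shifted by one
# frame, so at frame t the model learns to output frame t+1.
seq_source_frame_data = seq_all_frame_data[:-1]   # frames 0 .. T-2
seq_target_frame_data = seq_all_frame_data[1:]    # frames 1 .. T-1

# Under this scheme every training frame is supervised (teacher
# forcing); pred_length would typically only matter at validation or
# test time, when the model must roll out pred_length frames from its
# own outputs instead of from ground truth.
```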