quancore / social-lstm

Social LSTM implementation in PyTorch

Loss function computation #24

Open ThomasMrY opened 4 years ago

ThomasMrY commented 4 years ago

Hi, I have a question. In the training process:

Forward prop

            outputs, _, _ = net(x_seq, grid_seq, hidden_states, cell_states, PedsList_seq,numPedsList_seq ,dataloader, lookup_seq)

Compute loss

            loss = Gaussian2DLikelihood(outputs, x_seq, PedsList_seq, lookup_seq)

Why is the loss computed using outputs and x_seq rather than outputs and y_seq? Thanks

quancore commented 4 years ago

x_seq represents the observed part of a trajectory and y_seq is the unknown part (to be predicted). During training, we use the known (observed) part of a trajectory to train the model and to compute the training loss.
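For reference, Gaussian2DLikelihood scores the target positions under the bivariate Gaussian parameters the network outputs (means mux, muy, standard deviations sx, sy, and correlation rho) and returns the negative log-likelihood. A minimal per-point sketch of that computation (plain Python, not the repo's batched, masked PyTorch version):

```python
import math

def gaussian_2d_nll(mux, muy, sx, sy, rho, x, y):
    """Negative log-likelihood of a point (x, y) under a
    bivariate Gaussian with means (mux, muy), std devs (sx, sy),
    and correlation rho (a sketch of what Gaussian2DLikelihood
    computes per pedestrian per frame)."""
    # Standardized residuals
    zx = (x - mux) / sx
    zy = (y - muy) / sy
    # Quadratic form of the bivariate Gaussian exponent
    z = zx ** 2 + zy ** 2 - 2.0 * rho * zx * zy
    one_minus_rho2 = 1.0 - rho ** 2
    # log of the bivariate Gaussian density
    log_pdf = (-z / (2.0 * one_minus_rho2)
               - math.log(2.0 * math.pi * sx * sy * math.sqrt(one_minus_rho2)))
    return -log_pdf
```

The training loss is then the average of this quantity over the pedestrians present in each frame; minimizing it pushes the predicted Gaussian toward the ground-truth position at each time step of the observed sequence.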

xxAna commented 4 years ago

@quancore Hello. I am also confused about the loss calculation. In that case, do you mean that the model mainly learns the sequential relationships among the hidden states, so it does not matter whether its output is compared against the known (observed) part or the unknown part? But won't it hurt the model's performance at predicting unknown situations, since it was always trained to predict the known part? Thank you, and I look forward to your reply.