Closed KirinoMao closed 3 years ago
Dear KirinoMao,
thank you for your interest in our work. What you are referring to in the train_ae.py file is the training of the autoencoder: here we take both past and future as input and we reconstruct the future (Fig. 3 in the paper https://openaccess.thecvf.com/content_CVPR_2020/papers/Marchetti_MANTRA_Memory_Augmented_Networks_for_Multiple_Trajectory_Prediction_CVPR_2020_paper.pdf)
The model does indeed take future information even at test time, but this is information read from the memory module, not the ground truth that we want to predict. The main idea of this work is that we can exploit the different futures observed at training time to condition the decoding process.
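To make the distinction concrete, here is a minimal sketch of the idea (hypothetical names, not the repository's actual classes): at training time the memory stores (past encoding, future encoding) pairs; at test time the past encoding is used as a key to *read* future encodings from memory, so no ground-truth future is ever consumed.

```python
import torch

class MemorySketch:
    """Toy associative memory: stores past/future encoding pairs
    at training time and retrieves futures by past-similarity at test time."""

    def __init__(self):
        self.keys = []    # past encodings written during training
        self.values = []  # matching future encodings

    def write(self, past_enc, fut_enc):
        self.keys.append(past_enc)
        self.values.append(fut_enc)

    def read(self, past_enc, k=1):
        # cosine similarity between the query (a past encoding)
        # and every stored past encoding
        keys = torch.stack(self.keys)
        sims = torch.nn.functional.cosine_similarity(
            keys, past_enc.unsqueeze(0), dim=1)
        idx = sims.topk(min(k, len(self.values))).indices
        return [self.values[i] for i in idx]

torch.manual_seed(0)
mem = MemorySketch()
for _ in range(5):                 # training time: store encoding pairs
    mem.write(torch.randn(8), torch.randn(8))

past_enc = torch.randn(8)          # test time: only the past is observed
fut_candidates = mem.read(past_enc, k=3)   # futures come from the memory
print(len(fut_candidates))
```

Each retrieved future candidate then conditions one decoding pass, which is how multiple trajectory hypotheses are produced from a single observed past.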
Hope this was helpful
Federico
Dear Federico,
Thank you for replying quickly!
It turns out that I was confused by the training and testing processes of MANTRA. With your explanation and the MANTRA paper, the overall process is now clear to me.
Thanks again for your timely help!
KirinoMao
It seems that even in the evaluation phase, the future information is passed to the network.
file: trainer/trainer_ae.py, line 213, phase: evaluate
pred = self.mem_n2n(past, future).data
file: models/model_encdec.py, line 94
state_conc = torch.cat((state_past, state_fut), 2)
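For reference, the quoted line concatenates the two encoder states along the feature dimension. A minimal sketch of that operation (the tensor sizes here are illustrative, not taken from the repository) shows the resulting shape:

```python
import torch

# Hypothetical GRU hidden states of shape (num_layers, batch, hidden_dim):
# one encoding of the past trajectory and one of the future trajectory.
state_past = torch.zeros(1, 4, 48)
state_fut = torch.zeros(1, 4, 48)

# Join them along dim 2 (the feature axis), as in the quoted line.
state_conc = torch.cat((state_past, state_fut), 2)
print(state_conc.shape)  # torch.Size([1, 4, 96])
```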