Closed — zsgj-Xxx closed this issue 2 years ago
Hi! We don't train with L_2 loss. We train with negative log-likelihood loss of Gaussian mixtures to capture the multimodality.
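For readers unfamiliar with this loss: a minimal numpy sketch of a Gaussian-mixture negative log-likelihood over K predicted trajectory modes. This is an illustrative simplification (isotropic, shared `sigma`; function and variable names are my own), not the repository's exact implementation:

```python
import numpy as np

def gmm_nll(gt, means, log_pi, sigma=1.0):
    """Negative log-likelihood of a ground-truth trajectory under a
    Gaussian mixture over K predicted trajectory modes.

    gt:     (T, 2) ground-truth waypoints
    means:  (K, T, 2) predicted trajectory means, one per mode
    log_pi: (K,) log mixture weights (e.g. log-softmax of mode scores)
    sigma:  isotropic standard deviation shared by all modes (assumed)
    """
    # Per-mode squared error, summed over timesteps and x/y dims.
    sq_err = np.sum((means - gt[None]) ** 2, axis=(1, 2))          # (K,)
    T = gt.shape[0]
    # Gaussian normalization constant for T 2-D points.
    log_norm = -T * np.log(2 * np.pi * sigma ** 2)
    log_prob = log_pi + log_norm - sq_err / (2 * sigma ** 2)       # (K,)
    # Log-sum-exp over modes for numerical stability.
    m = log_prob.max()
    return -(m + np.log(np.sum(np.exp(log_prob - m))))
```

Because the mixture sums over modes, a single mode close to the ground truth can already achieve low NLL, which is what allows multimodal predictions.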
Oh, sorry, I phrased that badly. I am using the loss from the code. However, my training results are inconsistent with the results in the paper: among my 6 predicted trajectories, only one is close to the ground truth, and the other predictions stay near the origin.
Do you observe this for all examples?
Yes, I checked nearly 2,000 validation samples. They all show similar results, and the best score always comes from the same prediction index.
It seems your model is overfitting. We didn't observe that behaviour. Are you training on the full training dataset?
Not yet; I only used 1/10 of the training data to verify the setup.
Oh, I found the problem: in the preprocessing stage I placed the ground truth at the vehicle's position instead of at the origin. Thank you for your reply.
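For anyone hitting the same bug: the fix is to normalize trajectories into the agent's frame so the agent's current position maps to the origin. A minimal sketch (names are illustrative, not the repository's preprocessing code):

```python
import numpy as np

def to_agent_frame(traj, agent_pos, agent_yaw):
    """Translate and rotate a world-frame trajectory into the agent's
    frame, so the agent's current position maps to the origin.

    traj:      (T, 2) world-frame waypoints (e.g. the ground truth)
    agent_pos: (2,) agent position at prediction time
    agent_yaw: agent heading angle in radians
    """
    # Rotation by -yaw aligns the agent's heading with the +x axis.
    c, s = np.cos(-agent_yaw), np.sin(-agent_yaw)
    R = np.array([[c, -s],
                  [s,  c]])
    # Translate to the agent, then rotate (row vectors, hence R.T).
    return (traj - agent_pos) @ R.T
```

The same transform must be applied consistently to both the model inputs and the ground-truth targets, otherwise the targets end up offset exactly as described above.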
Best,
Hi! I tried your method and observed that during training the L2 loss and the log_softmax term differ greatly in magnitude, so my network does not learn multimodal trajectories; only a single best trajectory is fitted. Do you have any suggestions?
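To make the imbalance concrete, here is a sketch of the kind of combined loss being described: a winner-takes-all L2 term on the closest mode plus a log_softmax (cross-entropy) term on the mode scores, with a weight `alpha` to balance their magnitudes. This is a common construction, not necessarily this repository's loss; all names are illustrative:

```python
import numpy as np

def combined_loss(gt, means, scores, alpha=1.0):
    """Winner-takes-all L2 regression plus mode classification.

    gt:     (T, 2) ground-truth waypoints
    means:  (K, T, 2) predicted trajectory means
    scores: (K,) raw mode scores (logits)
    alpha:  weight balancing the classification term (assumption)
    """
    # Mean per-timestep L2 distance of each mode to the ground truth.
    l2 = np.sqrt(np.sum((means - gt[None]) ** 2, axis=-1)).mean(axis=-1)  # (K,)
    best = int(np.argmin(l2))
    # Numerically stable log_softmax over mode scores.
    m = scores.max()
    log_probs = scores - (m + np.log(np.sum(np.exp(scores - m))))
    # Regress only the best mode; classify which mode is best.
    return l2[best] - alpha * log_probs[best], best
```

If the L2 term dominates (e.g. trajectories are in meters over long horizons while `-log_probs` is of order 1), the gradient mostly shapes the single regressed mode, which matches the single-mode collapse described above; tuning `alpha` is one way to probe that.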