I was looking at the code that picks the best error per sequence (scripts/evaluate_model.py), and I was wondering: why do we pick the sample that minimizes the sum of the errors over the whole sequence, rather than picking, for each trajectory, the sample with the smallest error? Something along the lines of:
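(Rough sketch only, not the repo's code: the function names, and the assumption that `error` is a list with one per-pedestrian error tensor of shape `(total_peds,)` per sample, are mine, mirroring what I believe `evaluate_helper` receives.)

```python
import torch


def best_of_n_per_sequence(errors, seq_start_end):
    """Current scheme, as I understand it: for each sequence, sum the error
    over all pedestrians, then keep the single sample minimizing that sum."""
    errors = torch.stack(errors, dim=1)     # (total_peds, num_samples)
    total = 0.0
    for start, end in seq_start_end:
        seq_err = errors[start:end]         # (peds_in_seq, num_samples)
        per_sample = seq_err.sum(dim=0)     # (num_samples,)
        total += per_sample.min()           # one best sample for the whole scene
    return total


def best_of_n_per_trajectory(errors, seq_start_end):
    """Alternative I'm suggesting: pick the best sample independently for
    every pedestrian (trajectory), then sum those per-trajectory minima."""
    errors = torch.stack(errors, dim=1)     # (total_peds, num_samples)
    total = 0.0
    for start, end in seq_start_end:
        seq_err = errors[start:end]         # (peds_in_seq, num_samples)
        per_ped_min = seq_err.min(dim=1)[0] # (peds_in_seq,)
        total += per_ped_min.sum()          # each pedestrian keeps its own best sample
    return total
```

The difference is that the per-trajectory variant lets every pedestrian "choose" a different sample, which is why the numbers below come out lower.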
Example of results, using the pretrained SGAN-20V-20 models provided by the authors (compare with this):
| Dataset | ADE12 | FDE12 |
| --- | --- | --- |
| ETH | 0.62 | 1.10 |
| Hotel | 0.37 | 0.79 |
| Univ | 0.30 | 0.55 |
| Zara1 | 0.21 | 0.39 |
| Zara2 | 0.19 | 0.36 |
While the errors become smaller, I suppose this would make the comparison with deterministic methods even more unfair, as other issues have pointed out - #8
Lemme know if you've thought about this as well, or if you spot any issues.