Closed: @beyond96 closed this issue 4 years ago.
Hi, I just finished the data preprocessing using the provided .m file.
It took me about 36 hours with 6 parallel workers on an i9.
I finally got the three .mat files as expected. Their sizes are: TestSet.mat - 27.3 MB, TrainSet.mat - 107.6 MB, ValSet.mat - 15.6 MB.
Thanks for your answer! I have another question: how do I get the predicted trajectory distribution of the vehicles from your program? In other words, which variable in the program records the predicted trajectory distribution? Is it fut_pred? Many thanks!
@beyond96 How much time did it take to completely preprocess the data? Did you implement the paper using the authors' code? Were the results you obtained the same as those reported in the paper?
Regards, kshtiij
Can somebody share the resulting preprocessed data? Many thanks!
@Xiaoyu006 could you kindly share the resulting files?
The preprocessing is done in an extremely inefficient way. I made a pull request; in case it is not accepted, you can grab the improved code here: https://github.com/thelastpolaris/conv-social-pooling/blob/fix_preprocessing/preprocess_data.m
With this script, preprocessing can be done in 20 minutes.
@thelastpolaris Thanks for your work! Could you please add more comments to your MATLAB code? I am new to MATLAB.
Regards.
@Xiaoyu006 The only difference is that, instead of scanning the whole dataset on every iteration to find the trajectory matching the vehicle ID of the current frame (and doing the same for the trajectories matching the current timeframe), I precompute these trajectories into two dictionaries (Maps). Then you no longer need to go through the whole dataset, only through vehTrajs and vehTimes.
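For readers new to this pattern, here is a minimal sketch of the idea in Python (the actual fix is in MATLAB using containers.Map). The row layout `(vehicle_id, frame_id, x, y)` is a hypothetical stand-in for the dataset's real columns:

```python
from collections import defaultdict

# Hypothetical rows (vehicle_id, frame_id, x, y); the real preprocessing
# script's column layout may differ.
rows = [
    (1, 10, 0.0, 0.0),
    (1, 11, 0.5, 0.1),
    (2, 10, 3.0, 0.0),
    (2, 11, 3.4, 0.2),
]

# Precompute once, as described above: one map from vehicle ID to that
# vehicle's trajectory, and one map from frame ID to all rows observed
# in that frame (the two Maps in the MATLAB fix).
veh_trajs = defaultdict(list)  # vehicle_id -> list of rows
veh_times = defaultdict(list)  # frame_id   -> list of rows
for row in rows:
    veh_trajs[row[0]].append(row)
    veh_times[row[1]].append(row)

# Each lookup is now a dictionary access instead of a full dataset scan.
print(veh_trajs[1])   # trajectory of vehicle 1
print(veh_times[10])  # every vehicle observed in frame 10
```

Building the maps is a single pass over the data, so the per-frame lookups drop from a linear scan of the whole dataset to a constant-time dictionary access, which is where the 36-hour-to-20-minute speedup comes from.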
@thelastpolaris Great idea! And thanks for your timely reply.
Thanks @thelastpolaris! Running your script right now, will merge.
Can somebody share the raw txt data? Thanks.
The program with 6 workers has been running for 40 hours and still hasn't finished. What's wrong?