Closed · availabeuser closed this issue 6 months ago
Hey, I assume you already found what the problem was?
I just deleted the previous question because I thought I had solved the problem, but now I find it still exists. I would like to ask why, in the final test of the trained model, only the pre-processed dataset can be evaluated properly, while the test dataset does not meet the requirements and cannot be tested. Is some preprocessing of the test dataset required? If so, what are the specific preprocessing steps?
During data preprocessing, both train and val could be processed normally, but the test sequences were all filtered out because they did not meet the parameter requirements, leaving the processed test dataset empty. That's why I tested the model directly on the original test dataset.
If you are using the default config, the model expects at least 30 points for evaluation and 5 for input (that's 35 in total). These trajectories seem to be too short. Is the test set preprocessed properly?
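For reference, a minimal sketch of such a length check, assuming Argoverse 1 forecasting CSVs with `TIMESTAMP` / `OBJECT_TYPE` columns and the 5/30 split from the default config mentioned above. Since the official test split only ships the observed history (the future is withheld by the benchmark server), a filter like this will discard every test sequence:

```python
import pandas as pd

# Hypothetical length check mirroring the default config: 5 observed
# + 30 future points per AGENT track (35 total).
MIN_OBS_LEN, MIN_PRED_LEN = 5, 30

def is_long_enough(seq_csv_path: str) -> bool:
    df = pd.read_csv(seq_csv_path)
    agent = df[df["OBJECT_TYPE"] == "AGENT"]
    # Count distinct timesteps for the focal agent; test sequences only
    # contain the observed part, so this never reaches 35.
    return agent["TIMESTAMP"].nunique() >= MIN_OBS_LEN + MIN_PRED_LEN
```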
I see that I only evaluated on the validation set at the end. Is there a benchmark server for test set (and you only have inputs)? I haven't worked on this problem for a while.
I saw that you evaluated on the test dataset in your paper and was wondering how you preprocessed it. When I preprocessed the test dataset as required, all sequences fell outside the ranges set in the yaml file. How should the test set be processed so the model can be evaluated on it?
Are there additional yaml or py files used when preprocessing the test dataset? How do you preprocess it?
The results in the paper are on the validation dataset (the test results may be quoted from the original paper). For evaluation on the test set you would need to generate inference files and submit them to the Argoverse competition server. This can't be done directly without some additional work. You can still evaluate on the validation set.
How do I generate the inference files? I've evaluated on val, but I need to evaluate on test as well, or I won't be able to compare with other models. Is there a convenient way to evaluate on the test set? I really need it.
Evaluation on the test set is not supported, since I never intended to compete. There is no quick way.
You would need to implement an inference script that generates outputs for the Argoverse test server. The existing evaluation script code can be useful as a starting point.
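For what it's worth, a rough sketch of such an inference script. The names `model`, `test_loader`, and the batch keys are placeholders for your own code; the (K, 30, 2) prediction shape and the h5 packing via argoverse-api's `generate_forecasting_h5` follow the Argoverse 1 forecasting competition format, assuming that package is installed:

```python
import numpy as np
from argoverse.evaluation.competition_util import generate_forecasting_h5

# Placeholders: `model` and `test_loader` stand in for your trained model and
# a dataloader over the test sequences (observed history only).
predictions = {}    # seq_id -> (K, 30, 2) array: K predicted futures in map coordinates
probabilities = {}  # seq_id -> list of K mode probabilities

for batch in test_loader:
    seq_ids, history = batch["seq_id"], batch["history"]
    trajs, probs = model.predict(history)  # assumed shapes: (B, K, 30, 2) and (B, K)
    for sid, tr, pr in zip(seq_ids, trajs, probs):
        predictions[int(sid)] = np.asarray(tr)
        probabilities[int(sid)] = [float(p) for p in pr]

# Packs the predictions into the h5 file expected by the Argoverse
# competition server (upload it on the leaderboard page).
generate_forecasting_h5(predictions, "submission/", probabilities=probabilities)
```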
Thank you for your timely reply.
This is an automatic vacation reply from QQ Mail. Hello, I am currently on vacation and unable to reply to your email in person. I will reply as soon as possible after my vacation ends.