Closed JT-Sun closed 1 month ago
For mini_val there is a predicted_map in the data, but for train there isn't.
Hello, I have a question I'd like to discuss with you. Why do we need predicted_map when training HiVT? Shouldn't we use gt_map?
In my opinion, since HiVT is only a trajectory prediction model, predicted_map is needed as input for HiVT training; you can refer to Fig. 2 of the paper.
Thank you, I understand what you mean. But have you solved your problem? Since we need predicted_map for training, why can't it be generated in adaptor.py? (Only mini_train manages to do it; it actually fails on train.)
I've been struggling with this for a long time...
Me too T_T. mini_val and full_val can be generated, but full train cannot. I guess it comes down to the Trajdata pickle file traj_scene_frame_full_train.pkl and the ground-truth file gt_full_train.pickle. Is there a difference between these pkl files and mapping_results.pickle?
Bro, I think there is still a misunderstanding. Figure 2 shows what happens once the entire pipeline runs smoothly: the map model has finished training and its output map is then fed in. However, that is not the process we follow during training.
For debugging, can you maybe retrieve a sample token that corresponds to a missing 'predicted_map', and then go back to mapping_results.pkl to see if there is any predicted-map information corresponding to that sample token? As a reference, if things go smoothly this will generate a total of 15191 scenario files.
And yes, it is correct that the predicted map is needed for downstream training.
I find that for full_val and mini_val the sample tokens in mapping_results.pkl can be matched against traj_scene_xx.pkl and gt_full_val/mini.pickle, but the sample tokens in gt_full_train.pickle are all different from those in mapping_results.pkl.
To test this, I wrote some code that reads each pkl and extracts the corresponding sample tokens. Trajdata pickle files: traj_scene_frame_{full_train, full_val, mini_train, mini_val}.pkl. Ground-truth files: gt_{full_train, full_val, mini_val}.pickle. mapping_results.pickle contains 6019 sample tokens.
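For reference, a minimal version of such a token-overlap check might look like this. This is a sketch based on this thread, not the repo's actual script; the assumption that each pickle is a dict keyed by sample token, and the file paths in the usage comment, may need adjusting to your layout:

```python
import pickle


def load_tokens(path):
    """Load a pickle file and return the set of sample tokens it contains.

    Assumes the pickle is a dict keyed by sample token, which is how the
    gt_*.pickle and mapping_results.pickle files appear to be laid out.
    """
    with open(path, 'rb') as f:
        data = pickle.load(f)
    return set(data.keys())


def overlap_counts(map_tokens, gt_tokens):
    """Return (number of map tokens present in gt, number missing from gt)."""
    return len(map_tokens & gt_tokens), len(map_tokens - gt_tokens)


# Usage (paths are assumptions, adjust to your directory layout):
# map_tokens = load_tokens('mapping_results.pickle')
# for split in ('gt_mini_val', 'gt_full_val', 'gt_full_train'):
#     present, missing = overlap_counts(map_tokens, load_tokens(f'{split}.pickle'))
#     print(f'{split}: {present} present, {missing} missing')
```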
In each case "Merging Map Estimation" processed all 6019 tokens from mapping_results.pickle:
- mini_val: 81 sample tokens exist in gt_mini_val keys, 5938 do not
- full_val: 6019 sample tokens exist in gt_full_val keys, 0 do not
- full_train: 0 sample tokens exist in gt_full_train keys, 6019 do not
I might see what the problem is. Please refer to this issue. https://github.com/alfredgu001324/MapUncertaintyPrediction/issues/4
Thank you for your reply, and sorry for bothering you so many times. I would like to confirm again: according to issue #4, do you mean that for each map model (such as MapTR) I need to redo the Map Training step in the docs, i.e. train twice and eval twice, to get two mapping_results.pkl files?
No, it's actually train once and eval at least twice: once on train and once on val.
Oooh, I'm sorry, I misunderstood. So in the Map Training stage I need to eval on each dataset split; I only evaluated once before. Thank you for your answer, I will try it.
Sorry, do I need to change anything in MapTR's test.py when I eval each dataset split during the Map Training phase? I got the same result both times: 6019.
Not in test.py. Just edit the config file's evaluation path so that it evaluates on the training set as well.
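As a rough illustration of that edit: MapTR-style configs (mmdet3d format) point the test split at an annotation pickle, and re-pointing that path at the training annotations makes the same eval run dump predicted maps for training scenes. The exact key names and file names below are assumptions; check them against your actual config:

```python
# Hypothetical excerpt from a MapTR-style config file.
# Key names (data.test.ann_file) and the info-pickle file names are
# assumptions; match them to the config you are actually using.
data_root = 'data/nuscenes/'  # assumed dataset location

data = dict(
    test=dict(
        # Default run: evaluate on val to get predicted maps for val scenes.
        # ann_file=data_root + 'nuscenes_infos_temporal_val.pkl',

        # Second run: point eval at the train annotations so that
        # mapping_results.pkl is also produced for the train split.
        ann_file=data_root + 'nuscenes_infos_temporal_train.pkl',
    )
)
```

You would then run test.py once with each setting, producing one mapping_results.pkl per split.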
Oh, I see what you mean. Thank you very much for your help, and sorry to bother you so many times. Thank you again! Best wishes.
No problem!
I ran the code for Trajectory Prediction Models Training of HiVT, but I hit a bug: in ../MapUncertaintyPrediction-main/HiVT_modified/datasets/nuscenes_dataset.py, line 140, in process_nuscenes, data['predicted_map'] raises KeyError: 'predicted_map'.
When I print the data I find there is no predicted_map: dict_keys(['dt', 'agent_type', 'agent_hist', 'agent_fut', 'ego_type', 'ego_pos', 'ego_heading', 'ego_hist', 'ego_fut', 'map_name', 'gt_map', 'scene_name', 'sample_token', 'maptr_gt_map']). What is the problem here? Please help me.
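Until the train-split mapping results are regenerated, one way to localize the failure is a defensive lookup at the point where the predicted map is attached. This is a sketch, not the repo's actual code; the function name and the assumption that mapping_results is a dict keyed by sample token come from this thread, not from the source:

```python
def attach_predicted_map(data, mapping_results):
    """Attach the predicted map for one sample, failing loudly if absent.

    `data` is the per-sample dict shown above (it contains 'sample_token');
    `mapping_results` is the dict loaded from mapping_results.pickle,
    assumed to be keyed by sample token.
    """
    token = data['sample_token']
    if token not in mapping_results:
        # This is exactly the situation in this thread: train-split tokens
        # are absent because mapping_results.pickle was only generated by
        # evaluating the map model on val, not on train.
        raise KeyError(
            f'sample_token {token!r} has no entry in mapping_results; '
            f're-run the map model evaluation on the train split.'
        )
    data['predicted_map'] = mapping_results[token]
    return data
```

Dropping a check like this into the data-processing path turns the bare KeyError into a message that names the missing token and points at the real cause.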