20210726 opened this issue 9 months ago
Thanks for your interesting and detailed post. I will check once I am free.
@Abyssaledge @20210726 Hi, I followed the guide from the link, but I met this error when training the model.
```
ries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
  File "/opt/conda/envs/sst/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/sst/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/root/.vscode-server/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/SHFP12/xiaoquan.wang/01_bev/SST/tools/train_track.py", line 231, in <module>
    main()
  File "/SHFP12/xiaoquan.wang/01_bev/SST/tools/train_track.py", line 221, in main
    train_model(
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/apis/train.py", line 41, in train_model
    train_detector(
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmdet/apis/train.py", line 170, in train_detector
    runner.run(data_loaders, cfg.workflow)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 47, in train
    for i, data_batch in enumerate(self.data_loader):
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/envs/sst/lib/python3.8/site-packages/mmdet/datasets/dataset_wrappers.py", line 151, in __getitem__
    return self.dataset[idx % self._ori_len]
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/waymo_tracklet_dataset.py", line 284, in __getitem__
    data = self.prepare_train_data(idx)
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/waymo_tracklet_dataset.py", line 218, in prepare_train_data
    example = transform(example)
  File "/defaultShare/SHFP12/xiaoquan.wang/01_bev/SST/mmdet3d/datasets/pipelines/tracklet_pipelines.py", line 156, in __call__
    assert len(points_list) == len(tracklet) == len(pose_list)
AssertionError
```
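The assertion at the bottom fires when the per-frame point clouds, the tracklet boxes, and the ego poses loaded for one tracklet have different lengths. A minimal sketch of that invariant, with illustrative names rather than the repo's actual API, that reports the mismatch instead of raising a bare `AssertionError`:

```python
def check_tracklet_alignment(points_list, tracklet, pose_list):
    """Return True iff every frame of the tracklet has matching points
    and an ego pose; mirrors the pipeline's length assertion."""
    n_points, n_boxes, n_poses = len(points_list), len(tracklet), len(pose_list)
    if not (n_points == n_boxes == n_poses):
        # Printing the three lengths makes it obvious whether frames,
        # boxes, or poses are the ones missing.
        print(f"length mismatch: points={n_points}, boxes={n_boxes}, poses={n_poses}")
        return False
    return True
```

A mismatch here usually means the auxiliary files (e.g. pose.pkl, idx2timestamp.pkl) were generated from a different data split than the tracklets being loaded.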
This is my script:

```shell
DIR=ctrl
CONFIG=ctrl_veh_24e_demo
WORK=work_dirs
bash tools/dist_train.sh configs/$DIR/$CONFIG.py 4 --work-dir ./$WORK/$CONFIG/ --no-validate
```
@20210726 Great introduction! The only thing I am concerned about is the `train_gt.bin` in step 2, which should be replaced with the `xxx.bin` from our own detector when converting the format for MOT.
@20210726 Truly sorry for the late reply. I quickly went through your introduction. The pipeline is basically right, but one point needs to be modified: if you are generating the training data, you do not need `max_time_since_update: 10` in the tracking config, which will retain many boxes in the generated xx.bin file and is likely to lead to an oversize error of the bin file. `max_time_since_update: 10` should only be adopted in generating training data.
Many thanks to all of you for the discussions!!! @rockywind @SakuraRiven @20210726
@Abyssaledge Are there some mistakes? "If you are generating the training data, you do not need the xxx" vs. "xxx should be only adopted in generating training data"
@SakuraRiven The introduction above uses the tracking config immortal_for_ctrl_keep_10.yaml to generate training data. This config enables `max_time_since_update: 10` by default, which means we keep adding virtual boxes (at most 10) if the tracker loses an object. This is not wrong, but it may lead to too many boxes in the training set, greatly slowing the processing. Thus I recommend disabling `max_time_since_update: 10` (set it to 0) when generating the training data.
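For context, `max_time_since_update` is standard SORT-style bookkeeping: a track that receives no detection is kept alive, with predicted "virtual" boxes, for up to N consecutive misses before being dropped. A simplified, hypothetical sketch of that logic (not ImmortalTracker's actual code):

```python
class Track:
    """Toy track with SORT-style miss-streak bookkeeping."""

    def __init__(self, box, max_time_since_update=10):
        self.boxes = [box]                 # boxes accumulated for this track
        self.time_since_update = 0         # consecutive frames without a detection
        self.max_time_since_update = max_time_since_update

    def update(self, box=None):
        """Call once per frame; box is None when the detector missed the object."""
        if box is not None:
            self.boxes.append(box)
            self.time_since_update = 0
        else:
            self.time_since_update += 1
            # While the miss streak is within budget, a predicted (virtual)
            # box is still emitted -- this is what inflates the bin file.
            if self.time_since_update <= self.max_time_since_update:
                self.boxes.append("virtual")

    @property
    def alive(self):
        return self.time_since_update <= self.max_time_since_update
```

With `max_time_since_update: 10`, up to 10 virtual boxes per lost object end up in the output; setting it to 0 drops a track at its first miss, so no virtual boxes are written.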
> If you are generating the training data, you do not need the `max_time_since_update: 10` in the tracking config, which will retain many boxes in the generated xx.bin file and is likely to lead to an oversize error of the bin file. `max_time_since_update: 10` should be only adopted in generating training data.
@Abyssaledge I see. So it should be "`max_time_since_update: 10` should only be adopted in generating val and test data"?
@Abyssaledge Another question: do we have to run extend_tracks.py for the training data generation? Considering that the tracklet annotations in the training set do not contain the first 10 frames, maybe it is not necessary?
No, you do not need extend_tracks.py for training. @SakuraRiven
Btw, if you use `max_time_since_update > 0` or extend_tracks.py, please remember to remove the empty predictions. Otherwise, there might be some false positives, especially for pedestrians and cyclists.
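One way to read "remove the empty predictions": before writing the bin file, drop boxes that were only virtual/extended (not backed by a real detection), and discard tracks that contain nothing else. A hedged sketch under that interpretation, with a hypothetical data layout (track_id -> list of boxes, `None` marking a virtual box):

```python
def drop_empty_predictions(tracklets):
    """Remove virtual-only boxes and tracks from a tracking result.

    `tracklets` maps track_id -> list of boxes, where a box is None
    if it was only a virtual/extended prediction (illustrative
    structure, not the repo's actual bin-file schema).
    """
    cleaned = {}
    for track_id, boxes in tracklets.items():
        real = [b for b in boxes if b is not None]
        if real:
            # Keep the track, but only its detector-backed boxes.
            cleaned[track_id] = real
        # Tracks made entirely of virtual boxes are dropped -- these are
        # the false positives mentioned for pedestrians and cyclists.
    return cleaned
```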
Hi, thanks for your great work!
I have a question from trying to reproduce the result. I followed the steps to generate the predicted track input in step 3. In step 4, I used train_gt.bin to assign bboxes to the predicted tracks. But it seems the predicted tracks from the previous steps already include ego motion, while the object positions in train_gt.bin do not, so the assignment results were weird (very low Average candidates per trk and very high Tracklet FP rate). I am wondering how to add ego motion into train_gt.bin so that the assignment is correct, or did I do something wrong?
Thanks in advance!
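On the ego-motion question: if the tracks live in the global frame while the train_gt.bin boxes are in the per-frame vehicle frame, the GT box centers have to be lifted into the global frame with the 4x4 ego pose before matching. A hedged numpy sketch, assuming the usual Waymo convention of a vehicle-to-global pose matrix (rotation in the top-left 3x3, translation in the last column):

```python
import numpy as np

def vehicle_to_global(center_xyz, pose):
    """Transform a 3D box center from the ego-vehicle frame to the
    global frame using a 4x4 vehicle-to-global pose matrix."""
    center_h = np.append(np.asarray(center_xyz, dtype=np.float64), 1.0)  # homogeneous
    return (pose @ center_h)[:3]

# Example: a pure-translation pose shifts the center by (10, 0, 0).
pose = np.eye(4)
pose[0, 3] = 10.0
```

Note that the box heading also needs to be rotated by the pose's yaw; only the center transform is shown here.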
@20210726 Hi, which version and split of the Waymo data did you use? I will reproduce the CTRL result with your pipeline, especially step 2. Also, is the config file `fsd_base_vehicle.yaml` correct?
1. Prepare Waymo data (I only use part of the Waymo dataset)
   1.1 Use my Python script to generate train.txt, val.txt, test.txt and idx2timestamp.pkl, idx2contextname.pkl. Then:
   ```shell
   cp train.txt val.txt test.txt ./data/waymo/kitti_format/ImageSets/
   cp idx2timestamp.pkl idx2contextname.pkl ./data/waymo/kitti_format/
   ```
   1.2 Create the data infos:
   ```shell
   python tools/create_data.py --dataset waymo --root-path ./data/waymo/ --out-dir ./data/waymo/ --workers 128 --extra-tag waymo
   ```
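The thread does not include the Python script from step 1.1, but its job is just to write the split index lists and the index-to-timestamp / index-to-context maps. A minimal sketch of those two outputs, with hypothetical helper names and an assumed mapping structure:

```python
import os
import pickle

def write_split_files(split_to_indices, out_dir="."):
    """Write train/val/test index lists, one sample index per line,
    as create_data.py expects them in ImageSets/.
    `split_to_indices` maps a split name -> list of index strings."""
    for split, indices in split_to_indices.items():
        with open(os.path.join(out_dir, f"{split}.txt"), "w") as f:
            f.write("\n".join(indices) + "\n")

def save_index_maps(idx2timestamp, idx2contextname, out_dir="."):
    """Persist the index->timestamp and index->context-name maps that
    later steps (pose extraction, tracking) look up."""
    with open(os.path.join(out_dir, "idx2timestamp.pkl"), "wb") as f:
        pickle.dump(idx2timestamp, f)
    with open(os.path.join(out_dir, "idx2contextname.pkl"), "wb") as f:
        pickle.dump(idx2contextname, f)
```

The exact index format (e.g. zero-padded sample indices) has to match what the rest of the kitti_format tooling expects, so treat this only as the shape of the script, not its contents.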
Step 1: Generate train_gt.bin once for all (Waymo bin format):
```shell
python ./tools/ctrl/generate_train_gt_bin.py
```
This generates the file train_gt.bin.
```shell
python ./tools/ctrl/extract_poses.py
```
This generates the files context2timestamp.pkl and pose.pkl.
![image](https://github.com/tusen-ai/SST/assets/87969647/acb1b248-3739-4c14-a824-5536467fae15)
Step 2: Use ImmortalTracker to generate tracking results on the training split (bin file format). Modify the files ego_info.py and time_stamp.py like this:
Modify the file waymo_convert_detection.sh like this:
Then:
```shell
bash preparedata/waymo/waymo_preparedata.sh ~/dataset/waymo/waymo_format/
```
This generates files like this:
![image](https://github.com/tusen-ai/SST/assets/87969647/e90d07b5-2c46-4ac9-844d-3c59e1086366)
```shell
bash preparedata/waymo/waymo_convert_detection.sh ~/dataset/waymo/waymo_format/train_gt.bin CTRL_FSD_TTA
```
This generates files like this in data/waymo/training/detection/CTRL_FSD_TTA/dets:
Modify the file run_mot.sh like this:
![image](https://github.com/tusen-ai/SST/assets/87969647/dedbe884-cf82-42bc-a7ef-064be887657e)
Then:
```shell
bash run_mot.sh
```
This generates a file like this:
Step 3: Generate the track input for training.
Modify the file fsd_base_vehicle.yaml like this (pred.bin was generated in step 2):
```shell
python ./tools/ctrl/generate_track_input.py ./tools/ctrl/data_configs/fsd_base_vehicle.yaml --process 1
```
This generates files like this:
![image](https://github.com/tusen-ai/SST/assets/87969647/58355e7a-86dd-4549-ad5c-b8a026eb1d30)
Step 4: Assign candidates to GT tracks:
```shell
python ./tools/ctrl/generate_candidates.py ./tools/ctrl/data_configs/fsd_base_vehicle.yaml --process 1
```
Originally posted by @20210726 in https://github.com/tusen-ai/SST/issues/132#issuecomment-1688148061
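The candidate assignment in step 4 can be pictured as matching each GT box to a nearby predicted track box; the earlier question about "Average candidates per trk" and "Tracklet FP rate" makes sense under that picture, since boxes in mismatched coordinate frames simply never land within the match radius. A toy stand-in for that matching (generate_candidates.py's actual criterion is not spelled out in this thread):

```python
import math

def assign_candidates(gt_centers, track_centers, max_dist=2.0):
    """Greedy center-distance assignment: for each GT box center, pick
    the closest predicted track box within max_dist meters, else no
    match. Purely illustrative; the real script may use IoU or another
    metric entirely."""
    assignments = {}
    for gi, g in enumerate(gt_centers):
        best, best_d = None, max_dist
        for ti, t in enumerate(track_centers):
            d = math.dist(g, t)
            if d < best_d:
                best, best_d = ti, d
        # None marks an unmatched GT -- with frames misaligned by ego
        # motion, nearly every GT ends up here.
        assignments[gi] = best
    return assignments
```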