YANG-CY-163 opened this issue 2 weeks ago
We will check the details again. Please wait.
We use multiscale training with dict(type='RandomScaleImageMultiViewImage', scales=[0.7, 0.75, 0.8]), so the input image size is (1920, 1280) × 0.75 = (1440, 960). The code is based on https://github.com/OpenDriveLab/Birds-eye-view-Perception/blob/master/waymo_playground , but we train on the Waymo dataset in a sliding-window manner (due to the ICCV 2023 deadline). The training log and config can be found below: 20230228_065532.log.json petr_waymo_mini_r101_v2_6key_resort_0.75_2grad.txt
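For reference, a minimal sketch of how that augmentation affects the input resolution. This is a hypothetical helper, not the actual RandomScaleImageMultiViewImage code from the linked repo (the real transform also does more, e.g. adjusting the camera projection matrices to match the new size):

```python
import random

def random_scaled_size(base_wh=(1920, 1280), scales=(0.7, 0.75, 0.8), rng=random):
    """Pick one scale uniformly at random and return the scaled (width, height).

    Sketch only: illustrates the resolution math of the multiscale setting
    dict(type='RandomScaleImageMultiViewImage', scales=[0.7, 0.75, 0.8]).
    """
    s = rng.choice(scales)
    return int(base_wh[0] * s), int(base_wh[1] * s)

# At the middle scale 0.75, a (1920, 1280) Waymo image becomes (1440, 960):
print(int(1920 * 0.75), int(1280 * 0.75))  # 1440 960
```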
I got it. Thank you for your reply! It helps a lot!
Thank you for such great work! I have some questions about the Waymo dataset setting: what image resolution is used for training and testing? Is it 1280x1920, like the config in Waymo Playground?