Open JingweiZhang12 opened 1 year ago
We used `np.random.uniform` based on other published papers and haven't tried `np.random.normal`. I don't think this will cause a big difference, but it could be worthwhile to try.
A bit confused... Do these data augmentation strategies ensure consistency between sequential frames? How exactly is the copy-paste strategy designed across sequential frames?
The pasted object will be added to all frames in the same way (same location, same augmentation noise, etc.). I just assume it is a static object in the scene. https://github.com/TuSimple/centerformer/blob/5a949b88ed7bb15aafb39bf78c95f1452063ebea/det3d/datasets/pipelines/preprocess_multiframe.py#L136-L141
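The idea described above can be sketched as follows. This is an illustrative toy version, not the repo's actual code: the function name, signature, and the `translation_std` parameter are assumptions for the sketch. The key point is that the placement noise is drawn once and reused for every frame, so the pasted object stays static across the sequence.

```python
import numpy as np

def paste_static_object(frames, obj_points, translation_std=0.5):
    """Paste one sampled object into every frame of a sequence at the
    same location, with the same noise, so it behaves like a static
    object. `frames` is a list of (N_i, 3) point clouds; `obj_points`
    is an (M, 3) point cloud of the sampled object.
    Illustrative sketch only, not the repo's API."""
    # Draw the placement noise ONCE so every frame gets the same offset.
    offset = np.random.uniform(-translation_std, translation_std, size=3)
    pasted = obj_points + offset
    # Append the identically-placed object points to each frame.
    return [np.concatenate([f, pasted], axis=0) for f in frames]
```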
I see, but if the network has a velocity prediction branch, the static-object assumption may confuse it. Or maybe you have already set the velocity of the pasted objects to 0 in gt_target? By the way, why not use the velocity from the object's label to figure out where the object was in the history frames and paste it there? Is it worth a try?
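The alternative suggested here (under a constant-velocity assumption, which is how velocity labels are commonly used in Waymo/nuScenes-style pipelines) amounts to shifting the paste location backwards in time. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def back_project_center(center_xy, velocity_xy, dt):
    """Estimate where a moving object was `dt` seconds ago, assuming
    constant velocity over the sweep window. This is the suggested
    alternative to the static-object assumption, not the repo's code."""
    return np.asarray(center_xy) - np.asarray(velocity_xy) * dt
```

Each history frame would then receive the object's points shifted to its back-projected center instead of the current-frame location.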
Interesting work! The translation noise of the data augmentation in CenterFormer is 0.5 (https://github.com/TuSimple/centerformer/blob/master/configs/waymo/voxelnet/waymo_centerformer.py#L132), while the translation in CenterPoint is 0. Also, I noticed that you used `np.random.uniform` rather than `np.random.normal` as for the rotation and scale parameters. Could you explain the motivation behind these modifications and their influence on performance?
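For reference, the two noise models contrasted in the question differ mainly in their support. Assuming 0.5 is used as the half-range of the uniform draw and as the standard deviation of the Gaussian (an assumption for this sketch, not a statement about the repo's exact code):

```python
import numpy as np

# Per-object translation noise, drawn one of two ways:
# uniform: bounded support, every offset stays within [-0.5, 0.5]
noise_uniform = np.random.uniform(-0.5, 0.5, size=3)
# normal: unbounded tails, occasional offsets well beyond 0.5
noise_normal = np.random.normal(0.0, 0.5, size=3)
```

The bounded uniform draw guarantees pasted objects never move far from their sampled location, whereas the Gaussian occasionally produces large offsets, which may matter for collision checks during copy-paste.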