V2AI / Det3D

World's first general-purpose 3D object detection codebase.
https://arxiv.org/abs/1908.09492
Apache License 2.0

Missing CBGS real augmentation #110

Closed · MeyLavie closed this issue 4 years ago

MeyLavie commented 4 years ago

In your paper (CBGS), Table 1 shows the sample distribution of the training split before and after dataset sampling. At the beginning of training, the printed statistics show that while the "before" counts match the paper, the "after" counts are far lower. Will you please publish the real augmentation mechanism?

(attached: screenshot of Table 1 from the CBGS paper)

poodarchu commented 4 years ago

That figure is GT augmentation, and Table 1 is dataset sampling; they are two separate modules.
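For anyone unfamiliar with the distinction: GT augmentation pastes ground-truth objects sampled from a pre-built database into the current point cloud, independently of how the training split itself is resampled. Below is a minimal sketch, assuming a hypothetical database entry layout (`gt_database` entries with `points`, `box`, `name` fields) and a crude distance-based collision check; it is illustrative only and not Det3D's actual code.

```python
import random
import numpy as np

def gt_aug(points, gt_boxes, gt_names, gt_database, max_paste=15):
    """Toy GT-AUG sketch: paste ground-truth objects sampled from a
    pre-built database into the current point cloud.  Entry layout and
    the collision check are assumptions, not Det3D's implementation."""
    sampled = random.sample(gt_database, min(max_paste, len(gt_database)))
    for obj in sampled:
        # skip objects whose centre lands too close to an existing box
        if any(np.linalg.norm(obj["box"][:2] - box[:2]) < 4.0 for box in gt_boxes):
            continue
        points = np.concatenate([points, obj["points"]], axis=0)
        gt_boxes = np.concatenate([gt_boxes, obj["box"][None, :]], axis=0)
        gt_names = np.concatenate([gt_names, [obj["name"]]], axis=0)
    return points, gt_boxes, gt_names
```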

poodarchu commented 4 years ago

dataset sampling is here: https://github.com/poodarchu/Det3D/blob/master/det3d/datasets/nuscenes/nuscenes.py#L71
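Paraphrasing what that file roughly does (a sketch under the assumption that each `info` dict carries a `gt_names` list; not a verbatim copy of the repo code): every sample is grouped under each class it contains, and each class then contributes an equal share of the resampled split, so the split grows by roughly the average number of distinct classes per sample.

```python
import numpy as np

def class_balanced_resample(infos, class_names):
    """Sketch of CBGS-style dataset sampling (paraphrased, not verbatim).
    Every sample is listed once per distinct class it contains, and each
    class then contributes an equal share of the resampled split."""
    # group samples by the classes they contain
    cls_infos = {name: [] for name in class_names}
    for info in infos:
        for name in set(info["gt_names"]):
            if name in cls_infos:
                cls_infos[name].append(info)

    duplicated = sum(len(v) for v in cls_infos.values())  # multi-class samples counted once per class
    frac = 1.0 / len(class_names)                          # equal share per class

    resampled = []
    for cls_list in cls_infos.values():
        if not cls_list:
            continue
        ratio = frac / (len(cls_list) / duplicated)
        # rare classes get ratio > 1 and are drawn with replacement
        picks = np.random.choice(len(cls_list), int(len(cls_list) * ratio))
        resampled += [cls_list[i] for i in picks]
    return resampled
```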

MeyLavie commented 4 years ago

Thank you @poodarchu for your quick answer. I'm aware that those are two different modules, but you can see from the table that the number of instances and samples "after" sampling is about 4.5 times larger, so the augmentation module must also be involved, mustn't it?

I'm looking for the augmentation part you used to enlarge your data by 4.5 times, the one you mention in your article.

poodarchu commented 4 years ago

> Thank you @poodarchu for your quick answer. I'm aware that those are two different modules, but you can see from the table that the number of instances and samples "after" sampling is about 4.5 times larger, so the augmentation module must also be involved, mustn't it?
>
> I'm looking for the augmentation part you used to enlarge your data by 4.5 times, the one you mention in your article.

https://github.com/poodarchu/Det3D/blob/master/det3d/datasets/nuscenes/nuscenes.py#L71

poodarchu commented 4 years ago

The dataset is enlarged only due to the dataset sampling here: https://github.com/poodarchu/Det3D/blob/master/det3d/datasets/nuscenes/nuscenes.py#L71; it has nothing to do with GT-AUG.
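A toy illustration of how sampling alone can expand the split (numbers are made up; if each keyframe contains on average about 4-5 of the 10 nuScenes classes, the same mechanism yields a factor of roughly 4.5):

```python
# Each sample is listed once per distinct class it contains, so sampling
# alone enlarges the pool even before any GT-AUG pasting happens.
samples = [
    {"token": "a", "gt_names": {"car", "pedestrian", "barrier"}},
    {"token": "b", "gt_names": {"car", "truck"}},
    {"token": "c", "gt_names": {"car"}},
]
pool = [s for s in samples for _ in s["gt_names"]]
print(len(samples), "->", len(pool))  # 3 -> 6: a 2x expansion from sampling alone
```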