MILVLG / bottom-up-attention.pytorch

A PyTorch reimplementation of bottom-up-attention models
Apache License 2.0

about data splits #81

Closed wanboyang closed 3 years ago

wanboyang commented 3 years ago

In this project, the train, val and test sets contain 97224, 4949 and 5000 images, respectively. However, in https://github.com/peteanderson80/bottom-up-attention the train, val and test sets contain 98077, 5000 and 5000 images, respectively. What is the cause of this difference? Thanks

1219521375 commented 3 years ago

Detectron2 automatically removes 963 images with no usable annotations, leaving 102114 images. So we simply delete them from the splits directly.
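
For reference, a minimal sketch of the behavior being described, assuming a registered dataset name such as `"visual_genome_train"` (hypothetical; substitute whatever name the project registers): Detectron2's dataset loading utility drops records whose annotation list is empty when `filter_empty=True`, which is the training default and is what removes the images with no usable annotations.

```python
from detectron2.data import get_detection_dataset_dicts

# With filter_empty=True (the training default), records whose "annotations"
# list is empty are dropped before the data loader is built.
dicts_filtered = get_detection_dataset_dicts(["visual_genome_train"], filter_empty=True)

# With filter_empty=False, every registered record is kept.
dicts_all = get_detection_dataset_dicts(["visual_genome_train"], filter_empty=False)

print(len(dicts_all) - len(dicts_filtered), "images removed for having no usable annotations")
```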

wanboyang commented 3 years ago

Thanks for the timely reply.