hustvl / TopFormer

TopFormer: Token Pyramid Transformer for Mobile Semantic Segmentation, CVPR2022

how to modify the batch size during inference? #15

Closed wmkai closed 2 years ago

wmkai commented 2 years ago

I tried changing the value of samples_per_gpu from 2 to 1 in the config file, but the elapsed time in the inference log barely changes. Am I doing something wrong?
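
For context, a minimal sketch of the relevant part of an MMSegmentation-style config (TopFormer follows the MMSegmentation config conventions); the values here are placeholders, not the actual TopFormer settings:

```python
# Sketch of the `data` block in an MMSegmentation-style config file.
# samples_per_gpu controls the per-GPU batch size used by the dataloader.
data = dict(
    samples_per_gpu=1,   # changed from 2 to 1 as described above
    workers_per_gpu=2,   # dataloader worker processes per GPU
)
```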

speedinghzl commented 2 years ago

The elapsed time is not a precise indicator of the per-GPU batch size. It would be better to print the batch size during testing to confirm that the change took effect.
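
For example, a minimal sketch of verifying the batch size actually used at test time, assuming `data_loader` is the PyTorch DataLoader built by the test script (e.g. in tools/test.py); the names here are illustrative, not TopFormer's exact code:

```python
# The DataLoader exposes the batch size it was built with.
print('test-time per-GPU batch size:', data_loader.batch_size)

# Alternatively, inspect one batch directly; the exact structure of the batch
# depends on the dataset pipeline, so adapt the key/indexing as needed.
first_batch = next(iter(data_loader))
print('keys in one batch:', list(first_batch.keys()))
```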

wmkai commented 2 years ago

I see, thanks a lot.