Open Vadim2S opened 2 years ago
Well, inference uses a batch size of one, so BATCH_SIZE_PER_GPU has no effect. Image size and GPU memory size are the only factors that cause OOM. This means you should reduce the image size or get a better GPU.
You mean the horizontal and vertical size of the tested image files? OK.
P.S. What are the IMAGE_SIZE properties in the TRAIN and TEST sections of the CAT_full.yaml file?
In train.py: TRAIN.BATCH_SIZE_PER_GPU and TRAIN.IMAGE_SIZE determine the batch size and the image crop size, respectively. In infer.py: those are ignored. The batch size is fixed to 1, and the image size becomes the size of the input image. Also, note that reducing the image size with a resize operation is in fact applying another image manipulation, so performance may deteriorate. One remedy is to use cropping. Please use grid-aligned cropping as described in the paper; avoid random cropping. But cropped images lose overall content, so performance may also deteriorate (though less than with resizing, I think).
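A minimal sketch of what grid-aligned cropping could look like, assuming the alignment grid is the 8x8 JPEG block grid and that the image height, width, and tile size are all multiples of the grid (the function name, defaults, and the overlap handling for the last row/column are my assumptions, not the paper's exact procedure):

```python
def grid_aligned_crops(h, w, tile=512, grid=8):
    """Return (top, left, bottom, right) crop boxes covering an h x w image.

    Every top-left corner is a multiple of `grid`, so each crop stays
    aligned with the (e.g. 8x8 JPEG) block grid. Assumes h, w, and tile
    are multiples of `grid` and tile <= min(h, w).
    """
    def starts(length):
        s = list(range(0, length - tile + 1, tile))
        if s[-1] + tile < length:
            # Add one final, overlapping tile so the edge is covered;
            # length - tile is still a multiple of `grid` by assumption.
            s.append(length - tile)
        return s

    return [(t, l, t + tile, l + tile)
            for t in starts(h) for l in starts(w)]
```

For a 1000x1000 image with 512-pixel tiles, this yields four overlapping crops whose corners all sit on the 8-pixel grid.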
Very interesting! Sometimes I want to test 4000x6000 images, and no GPU with that much RAM is available.
Conclusion: the best way to run inference on a large image is to grid-aligned-crop it into several small images, test them separately, and combine them back into one big image. Right?
Yes.
I now have only an 8 GB GPU. How can I run infer.py? I tried playing with the TEST section of CAT_Full.yaml (BASE_SIZE and BATCH_SIZE_PER_GPU) but I still get an out-of-GPU-memory error.