Closed ruodingt closed 4 years ago
@ruodingt
why don't you lower the batch size to 4?
You are totally right.
In your experiment script and paper, I saw you are using 16 as batch size. May I ask whether 16 is for per-GPU or not?
Thank you.
@ruodingt > from the detectron2 documentation:
# Number of images per batch across all machines.
# If we have 16 GPUs and IMS_PER_BATCH = 32,
# each GPU will see 2 images per batch.
_C.SOLVER.IMS_PER_BATCH = 16
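The arithmetic behind that comment can be sketched as follows (a minimal illustration; `images_per_gpu` is a hypothetical helper, not a detectron2 API):

```python
def images_per_gpu(ims_per_batch: int, num_gpus: int) -> int:
    # IMS_PER_BATCH is the total batch size across all machines,
    # so each GPU sees the total divided by the number of GPUs.
    return ims_per_batch // num_gpus

# 16 GPUs with IMS_PER_BATCH = 32 -> 2 images per GPU
print(images_per_gpu(32, 16))

# On a single GPU, the full IMS_PER_BATCH lands on that one device,
# which is why OOM shows up much sooner in single-GPU setups.
print(images_per_gpu(16, 1))
```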
Hi @youngwanLEE I was trying centermask2 on a different dataset other than COCO. I use a single V100 GPU.
I set the batch size to 8 and left MIN_SIZE_TRAIN unchanged. The config file I used is
centermask_V_39_eSE_FPN_ms_3x.yaml
Yet I still got CUDA OOM error.
I couldn't see any other factor that could lead to this OOM error.
Could you please give me some tips?
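For reference, lowering both knobs in the yaml usually reduces memory pressure (a sketch of config overrides using the standard detectron2 keys; the exact values here are illustrative, not a recommendation from the authors):

```yaml
SOLVER:
  # Total images per batch across all GPUs; on one GPU this is the per-GPU load.
  IMS_PER_BATCH: 4
INPUT:
  # Restricting the multi-scale training sizes to smaller values also cuts memory.
  MIN_SIZE_TRAIN: (640,)
```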