erpingzi closed this issue 2 years ago
Same error. GPU memory keeps growing during evaluation until it hits a CUDA out-of-memory runtime error. I don't know why.
You should reduce the batch size. Note that some of the older config files were written for an older version of detectron2; d2 has since changed the IMS_PER_BATCH config so that its defaults assume an 8-GPU setup.
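For reference, a minimal sketch of overriding the 8-GPU defaults for a single-GPU run, assuming the repo uses standard detectron2-style configs (the config path below is hypothetical, and the key names are detectron2's SOLVER options):

```python
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/coco-instance/yolomask.yaml")  # hypothetical path
cfg.SOLVER.IMS_PER_BATCH = 2         # total images per iteration, small enough for one GPU
cfg.SOLVER.REFERENCE_WORLD_SIZE = 1  # tell d2 the config is now tuned for 1 GPU
                                     # (or 0 to disable batch-size auto-scaling entirely)
```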
@erpingzi @tctco were you able to solve this issue?
@jinfagang I'm facing the same issue even after reducing the batch size.
@erpingzi @tctco were you able to solve this issue?
No. I switched to a YOLACT implementation I wrote myself.
Hello, thank you very much for providing the code. I ran into two problems while using it and hope you can help answer them.
When I run evaluation on COCO with yolomask_8gpu.yaml, GPU memory grows by more than 300 MB once the 27th image has gone through the model, grows by the same amount again at the 47th image, and then the run fails with a CUDA out-of-memory error.
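(For anyone debugging this, a generic diagnostic loop like the sketch below, not the repo's evaluation code, makes it easy to see where the jumps happen: run inference under torch.no_grad() and print the allocated GPU memory every few images.)

```python
import torch

def run_eval(model, data_loader, log_every=10):
    """Diagnostic loop: inference without gradients, with GPU-memory logging."""
    model.eval()
    with torch.no_grad():  # stops autograd from holding intermediate buffers
        for i, inputs in enumerate(data_loader):
            outputs = model(inputs)
            # hand `outputs` to the evaluator here, ideally moving any tensors
            # it contains to the CPU so predictions do not pile up on the GPU
            del outputs
            if i % log_every == 0:
                mib = torch.cuda.memory_allocated() / 2**20
                print(f"image {i}: {mib:.0f} MiB allocated on the GPU")
```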
coco-instance contains two configs. In yolomask.yaml, IMS_PER_BATCH is 3 but the default REFERENCE_WORLD_SIZE is 8, so at runtime it reports that IMS_PER_BATCH is invalid; the source code says it requires IMS_PER_BATCH % REFERENCE_WORLD_SIZE == 0.
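The check behind that error is just a divisibility constraint: detectron2's auto-scaling assumes IMS_PER_BATCH was tuned for REFERENCE_WORLD_SIZE GPUs, so the total batch must be a multiple of it (or REFERENCE_WORLD_SIZE must be 0, which disables auto-scaling). A small illustration of the constraint, not d2's actual source:

```python
def check_batch_config(ims_per_batch: int, reference_world_size: int) -> None:
    """Illustrative version of the constraint reported in the error message."""
    if reference_world_size == 0:
        return  # auto-scaling disabled; any batch size is accepted
    if ims_per_batch % reference_world_size != 0:
        raise ValueError(
            f"IMS_PER_BATCH={ims_per_batch} must be divisible by "
            f"REFERENCE_WORLD_SIZE={reference_world_size}"
        )

check_batch_config(ims_per_batch=8, reference_world_size=8)  # OK
check_batch_config(ims_per_batch=3, reference_world_size=8)  # raises, as with yolomask.yaml
```

So either raising IMS_PER_BATCH to a multiple of 8, or setting REFERENCE_WORLD_SIZE to 0 (or to the number of GPUs you actually use), should get past that error.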