Did you update to the latest code? One of the previous versions could lead to OOM.
Yes, I've updated to the latest code. I'm trying to train head-only for 2 epochs and then add the BiFPN. With head-only set to false and batch size = 1, it still goes OOM.
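For context, head-only training generally means freezing the backbone and BiFPN and updating only the classification/regression heads, which cuts the memory needed for gradients and optimizer state. A minimal PyTorch sketch of the idea (the `classifier` and `regressor` attribute names are hypothetical, not necessarily what this repo uses):

```python
import torch

def freeze_all_but_head(model):
    # Freeze every parameter first, then re-enable only the head modules.
    # 'classifier' and 'regressor' are placeholder names; the real model
    # may expose its head layers under different attributes.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.classifier.parameters():
        p.requires_grad = True
    for p in model.regressor.parameters():
        p.requires_grad = True

# Only trainable parameters need gradients and optimizer state, so build
# the optimizer from the filtered list:
# optimizer = torch.optim.SGD(
#     [p for p in model.parameters() if p.requires_grad],
#     lr=1e-2, momentum=0.9)
```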
In that case, try using --optim sgd.
If that doesn't help, you should switch to a smaller network like D5 or D6.
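The reason --optim sgd can help with OOM is that Adam keeps two extra float buffers (first and second moments) for every parameter, while plain SGD keeps none (momentum SGD keeps one). A small sketch, using a stand-in module rather than the actual detector, that compares the optimizer state size:

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)  # stand-in for a large detection model

# Adam stores two moment buffers per parameter tensor, roughly
# tripling the memory devoted to weights plus optimizer state.
adam = torch.optim.Adam(model.parameters(), lr=1e-4)

# Plain SGD stores no extra per-parameter state
# (momentum SGD would store one buffer per parameter).
sgd = torch.optim.SGD(model.parameters(), lr=1e-2)

def state_bytes(opt):
    total = 0
    for state in opt.state.values():
        for v in state.values():
            if torch.is_tensor(v):
                total += v.numel() * v.element_size()
    return total

# Optimizer state is allocated lazily on the first step:
loss = model(torch.randn(8, 1024)).sum()
loss.backward()
adam.step()
sgd.step()
print("Adam state bytes:", state_bytes(adam))
print("SGD state bytes:", state_bytes(sgd))
```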
Hello, I have a question about GPU memory usage. I'm training a model with 46 classes and head-only set to false on D7. The GPU is a Tesla P100 (16 GB). I get out of memory even with batch size = 1. What GPU do you use for training COCO, which has 80 classes, with EfficientDet-D7?
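If it helps, one way to compare settings (D7 vs. D5, head-only vs. full, Adam vs. SGD) without waiting for a crash is to read PyTorch's peak-allocation counters after a single training step. A hedged sketch; `step_fn` is a hypothetical callable standing in for one forward/backward/optimizer step:

```python
import torch

def report_peak_memory(step_fn, device="cuda"):
    # Reset the peak counter, run one training step, then report
    # the maximum memory PyTorch's caching allocator handed out.
    torch.cuda.reset_peak_memory_stats(device)
    step_fn()
    peak = torch.cuda.max_memory_allocated(device)
    print(f"peak allocated: {peak / 1024**3:.2f} GiB")

# Note: this only counts tensors managed by PyTorch's allocator;
# the CUDA context and cuDNN workspaces add some overhead, which is
# why a 16 GB card can OOM while the reported peak is below 16 GiB.
```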