Closed njtuzzy closed 7 years ago
This is a big model. I use batch_size=16 with 8 P40s (24 GB memory each).
You can modify the FPN, e.g. use a RetinaFPN18:

```python
def RetinaFPN18():
    return RetinaFPN(Bottleneck, [2,2,2,2])
```
You can also decrease the input_size from 600 to, say, 400.
But I don't think you can squeeze it onto a single 12 GB GPU...
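As a back-of-the-envelope check (illustrative only, not measured from this repo): activation memory per image scales roughly with input area, so shrinking the side length gives a quadratic saving. A minimal sketch:

```python
# Rough sketch: per-image activation memory scales roughly with H * W,
# so reducing the input side length saves memory quadratically.
# The ratios below are illustrative, not measurements from this repo.

def area_ratio(new_size, old_size):
    """Fraction of per-image activation memory kept after resizing."""
    return (new_size / old_size) ** 2

print(area_ratio(400, 600))  # ~0.44: a 400px input needs ~44% of 600px memory
print(area_ratio(224, 600))  # ~0.14
```

This is consistent with the comment below: dropping from 600 to 224 cuts the footprint to roughly 14%, which matches the reported ~4 GB.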
Actually, when I downsample the input image size to 224, it takes about 4 GB of memory, which should be OK for training; the disadvantage is that the downscaling is a bit large :(
thanks
You have 8 P40s, that's crazy.
@kuangliu I am using a Titan X graphics card with 12 GB of memory. Somehow, even with batch_size=2, I still run out of memory. May I ask what hardware and software configuration you are using? Thanks.
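One general workaround for OOM at tiny batch sizes (not something this repo implements, just a standard PyTorch technique) is gradient accumulation: run several small forward/backward passes and step the optimizer once, simulating a larger batch. A minimal sketch, using a toy nn.Linear as a stand-in for the detection model:

```python
# Gradient accumulation sketch: simulate batch 16 on a GPU that only
# fits micro-batches of 2. The model/loss here are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # stand-in for RetinaNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

accum_steps = 8                               # effective batch = 2 * 8 = 16
optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(2, 10)                    # micro-batch of 2 fits in memory
    y = torch.randn(2, 2)
    loss = criterion(model(x), y) / accum_steps  # scale so grads average
    loss.backward()                           # gradients accumulate in .grad
optimizer.step()                              # one update for the whole "batch"
optimizer.zero_grad()
```

The loss is divided by accum_steps so the accumulated gradient matches what a single large batch would produce on average.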