Closed Wallace00V closed 5 years ago
When I convert MobileNet/MobileNetV2 to Caffe and load the Caffe model in Python, the GPU memory usage is huge, even for MobileNetV2 with a width multiplier of 0.25.
Answer found: add `engine: CAFFE` to the convolution layers in the prototxt. This forces Caffe's own convolution implementation instead of cuDNN, which in Caffe handles grouped (depthwise) convolutions by allocating a separate workspace per group, so memory usage blows up for depthwise-heavy networks like MobileNet.
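For illustration, a depthwise convolution layer with the engine pinned might look like the sketch below (layer names, blob names, and channel counts are placeholders, not taken from the actual converted model):

```protobuf
layer {
  name: "conv2_1/dw"           # hypothetical depthwise layer name
  type: "Convolution"
  bottom: "conv1"
  top: "conv2_1/dw"
  convolution_param {
    num_output: 32
    kernel_size: 3
    stride: 1
    pad: 1
    group: 32                  # depthwise: group == num_output
    engine: CAFFE              # force the Caffe engine instead of cuDNN
  }
}
```

The `engine: CAFFE` line only needs to be added to the depthwise (grouped) convolution layers; regular 1x1 pointwise convolutions can stay on cuDNN.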