MMdnn is a set of tools to help users inter-operate among different deep learning frameworks, e.g. model conversion and visualization. Convert models between Caffe, Keras, MXNet, TensorFlow, CNTK, PyTorch, ONNX and CoreML.
Platform (ubuntu 16.04):
Python version:
Source framework with version (caffe-gpu):
Destination framework with version (tensorflow-gpu==1.10.1):

I have converted a Caffe model to TensorFlow. In Caffe this model requires about 600 MB of GPU memory, but the TF version requires 3 GB. Is this a normal situation for a converted model?

Hi @GalacticF, I learned from this. In the "Allowing GPU memory growth" part, it says: "Note that we do not release memory, since that can lead to even worse memory fragmentation." To turn this option on, set the option in the ConfigProto.