I have tried to optimize my custom frozen model to run on TensorRT using create_inference_graph(), but the output was larger than the original model (my model is around 200MB, but after converting it's more than 2GB). Is it normal for the converted model to be bigger than the original one? Below are my settings:
Also, because the model was way too big, I couldn't serialize it to a .pb file, so I got this error:
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/message_lite.cc:289] Exceeded maximum protobuf size of 2GB: 2756916500
Has anyone been able to solve these issues?