Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

converting *.pb to *.tflite using chn_prune #248

Open ZhanPython opened 5 years ago

ZhanPython commented 5 years ago

When I use chn_prune to test self-defined models (fmnist and a simple CNN), the following error occurs while converting the .pb file to a .tflite file:

```
INFO:tensorflow:/home/adminusl/PocketFlow/models/model.pb generated
INFO:tensorflow:/home/adminusl/PocketFlow/models/model.pb -> /home/adminusl/PocketFlow/models/model.tflite
2019-03-04 23:06:16.109140: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-04 23:06:16.109187: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-04 23:06:16.109212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-03-04 23:06:16.109217: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-03-04 23:06:16.109350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5286 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:unable to generate a *.tflite model
Traceback (most recent call last):
  File "export_pb_tflite_models.py", line 394, in <module>
    tf.app.run()
  File "/home/adminusl/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "export_pb_tflite_models.py", line 385, in main
    export_pb_tflite_model(net, meta_path, pb_path, tflite_path)
  File "export_pb_tflite_models.py", line 353, in export_pb_tflite_model
    convert_pb_model_to_tflite(net, pb_path, tflite_path)
  File "export_pb_tflite_models.py", line 241, in convert_pb_model_to_tflite
    raise err
  File "export_pb_tflite_models.py", line 235, in convert_pb_model_to_tflite
    tflite_model = converter.convert()
  File "/home/adminusl/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
    **converter_kwargs)
  File "/home/adminusl/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
    input_data.SerializeToString())
  File "/home/adminusl/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.
2019-03-04 23:06:20.388184: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-04 23:06:20.447590: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-04 23:06:20.447949: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705
pciBusID: 0000:01:00.0
totalMemory: 5.93GiB freeMemory: 5.35GiB
2019-03-04 23:06:20.447961: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-04 23:06:20.656722: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-04 23:06:20.656769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-03-04 23:06:20.656775: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-03-04 23:06:20.656929: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5113 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2019-03-04 23:06:20.802166: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 30 operators, 46 arrays (0 quantized)
2019-03-04 23:06:20.802359: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 30 operators, 46 arrays (0 quantized)
2019-03-04 23:06:20.802632: F tensorflow/contrib/lite/toco/graph_transformations/propagate_fixed_sizes.cc:458] Check failed: input_flat_size == RequiredBufferSizeForShape(output_shape) (3136 vs. 3072) Input cannot be reshaped to requested dimensions for Reshape op with output "model/flatten/Reshape". Are your input shapes correct?
Aborted (core dumped)
None
```

How can I tackle this issue?
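The failed check compares the number of elements actually flowing into `model/flatten/Reshape` (3136) with the number implied by the Reshape's target shape (3072), i.e. the flatten step and the layer feeding it disagree about the feature-map size. One plausible cause with channel pruning is a flatten whose target size was hard-coded for the original channel count. A minimal sketch of a pruning-friendly flatten, assuming the self-defined model flattens an NHWC feature map (`flatten_dynamic` is a hypothetical helper, not part of PocketFlow):

```python
import tensorflow as tf  # TF 1.x, matching the stack trace above

def flatten_dynamic(x):
    # Compute the flat size from the tensor's current static shape instead of
    # hard-coding e.g. 7*7*64, so the Reshape target stays consistent when
    # channel pruning changes the number of output channels.
    h, w, c = x.get_shape().as_list()[1:]
    return tf.reshape(x, [-1, h * w * c])
```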

WenjieDu commented 5 years ago

Got the same problem. Did you solve it? @ZhanPython

A1exy commented 4 years ago

Got the same problem. Did you solve it? @ZhanPython @WenjieDu

A1exy commented 4 years ago

Upgrading TensorFlow from 1.11 to 1.14 solved the problem!
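For reference, a minimal conversion sketch against the `tf.lite` API shipped with 1.14 (the trace above goes through the older `tf.contrib.lite` path); the tensor names `net_input` / `net_output` are placeholders for whatever names `export_pb_tflite_models.py` uses for your model:

```python
import tensorflow as tf  # 1.14, as suggested above

# Convert the frozen graph produced by PocketFlow into a *.tflite model.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='/home/adminusl/PocketFlow/models/model.pb',
    input_arrays=['net_input'],    # placeholder: actual input tensor name
    output_arrays=['net_output'])  # placeholder: actual output tensor name
tflite_model = converter.convert()

with open('/home/adminusl/PocketFlow/models/model.tflite', 'wb') as f:
    f.write(tflite_model)
```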

GauriDhande commented 4 years ago

Got the same problem. I upgraded to 1.14.0, but the problem is still not solved.
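If the upgrade alone does not help, one way to narrow the mismatch down is to dump every Reshape op and its constant target shape from the exported graph and compare them with the pruned channel counts. A rough sketch, assuming the same `model.pb` path as in the log above:

```python
import tensorflow as tf  # TF 1.x

# Load the frozen graph and print each Reshape node with its constant
# target shape, so the op reported by TOCO (model/flatten/Reshape) can be
# traced back to the layer whose size no longer matches.
graph_def = tf.GraphDef()
with tf.gfile.GFile('/home/adminusl/PocketFlow/models/model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

consts = {n.name: n for n in graph_def.node if n.op == 'Const'}
for node in graph_def.node:
    if node.op == 'Reshape':
        shape_node = consts.get(node.input[1].split(':')[0])
        if shape_node is not None:
            target = tf.make_ndarray(shape_node.attr['value'].tensor)
            print(node.name, '->', target)
```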