tensorflow / models

Models and examples built with TensorFlow

Unable to convert trained .pb file to .dlc #7954

Open 0chandansharma opened 4 years ago

0chandansharma commented 4 years ago

I want to deploy my trained model (.pb file) on the vision_ai_devkit (Altek camera).

I trained the model with TensorFlow 1.14.0.

I am using SNPE-1.25.0 to convert the model from .pb to .dlc:

```python
from azureml.contrib.iot.model_converters import SnpeConverter

# submit a compile request to convert the model to an SNPE-compatible DLC file
compile_request = SnpeConverter.convert_tf_model(
    ws,
    source_model=model,
    input_node="image_tensor",
    input_dims="1,300,300,3",
    outputs_nodes=["detection_boxes", "detection_classes", "detection_scores"],
    allow_unconsumed_nodes=True)
print(compile_request._operation_id)
```

Status:

```
Running..... Failed
Operation d11c79e0-5393-4b23-8e63-d558647261d1 completed, operation state "Failed"
sas url to download model conversion logs https://chandan2197833874.blob.core.windows.net/azureml/LocalUpload/7c8659f5f3c74d5a8bba937801bf5b1c/conversion_log?sv=2019-02-02&sr=b&sig=UfR4bWNVUAKYIJfLxDW4W1fbBTwM0s9P%2Bhj1H74wDVg%3D&st=2019-12-18T13%3A31%3A51Z&se=2019-12-18T21%3A41%3A51Z&sp=r
[2019-12-18 13:41:31Z]: Starting model conversion process
[2019-12-18 13:41:31Z]: Downloading model for conversion
[2019-12-18 13:41:34Z]: Converting model
[2019-12-18 13:41:37Z]: converter std: Executing python /snpe-1.25.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph /tmp/du2zzymh.if1/input/drivermodel/frozen_inference_graph.pb -i image_tensor 1,300,300,3 --dlc /tmp/du2zzymh.if1/output/model.dlc --out_node detection_boxes --out_node detection_classes --out_node detection_scores --allow_unconsumed_nodes in /app
[2019-12-18 13:41:37Z]: converter std: Stream stdout is True
[2019-12-18 13:41:37Z]: converter std: 2019-12-18 13:41:36.710889: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[2019-12-18 13:41:37Z]: converter err: Traceback (most recent call last):
[2019-12-18 13:41:37Z]: converter err: File "/utils/convert_model_tf", line 91, in
[2019-12-18 13:41:37Z]: converter std: 2019-12-18 13:41:36.718844: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2294685000 Hz
[2019-12-18 13:41:37Z]: converter std: 2019-12-18 13:41:36.721412: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4c918e0 executing computations on platform Host. Devices:
[2019-12-18 13:41:37Z]: converter err: main()
[2019-12-18 13:41:37Z]: converter err: File "/utils/convert_model_tf", line 83, in main
[2019-12-18 13:41:37Z]: converter std: 2019-12-18 13:41:36.721446: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
[2019-12-18 13:41:37Z]: converter err: output_log = process_utils.check_call(command),
[2019-12-18 13:41:37Z]: converter err: File "/utils/process_utils.py", line 43, in check_call
[2019-12-18 13:41:37Z]: converter std: 2019-12-18 13:41:36,879 - 109 - ERROR - Encountered Error: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/map/while/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[2019-12-18 13:41:37Z]: Conversion failed: Converter returned exit code: 1
[2019-12-18 13:41:37Z]: Conversion completed with result Failure

Model convert failed, unexpected error response: {'code': 'ModelConvertFailed', 'details': [{'code': 'CompileModelFailed', 'message': 'aml://artifact/LocalUpload/7c8659f5f3c74d5a8bba937801bf5b1c/conversion_log'}]} False
```

I also tried changing the input node to `input_node="Preprocessor/sub"`.

I am getting this error:

```
Running...... Failed
Operation 7d13145f-2380-42c9-aba4-33763e03a1d8 completed, operation state "Failed"
sas url to download model conversion logs https://chandan2197833874.blob.core.windows.net/azureml/LocalUpload/3055b4b964b74a37a12b2ff22df67e8b/conversion_log?sv=2019-02-02&sr=b&sig=cPr2899gFoZ8k4lo%2BYYY%2B5ugNiIc4hS0jfOn4yyCbtA%3D&st=2019-12-18T13%3A34%3A32Z&se=2019-12-18T21%3A44%3A32Z&sp=r
[2019-12-18 13:44:01Z]: Starting model conversion process
[2019-12-18 13:44:01Z]: Downloading model for conversion
[2019-12-18 13:44:02Z]: Converting model
[2019-12-18 13:44:04Z]: converter err: Traceback (most recent call last):
[2019-12-18 13:44:04Z]: converter std: Executing python /snpe-1.25.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph /tmp/5gkxtk0h.3qf/input/drivermodel/frozen_inference_graph.pb -i Preprocessor/sub 1,300,300,3 --dlc /tmp/5gkxtk0h.3qf/output/model.dlc --out_node detection_boxes --out_node detection_classes --out_node detection_scores --allow_unconsumed_nodes in /app
[2019-12-18 13:44:04Z]: converter err: File "/utils/convert_model_tf", line 91, in
[2019-12-18 13:44:04Z]: converter std: Stream stdout is True
[2019-12-18 13:44:04Z]: converter std: 2019-12-18 13:44:03.802369: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
[2019-12-18 13:44:04Z]: converter err: main()
[2019-12-18 13:44:04Z]: converter err: File "/utils/convert_model_tf", line 83, in main
[2019-12-18 13:44:04Z]: converter std: 2019-12-18 13:44:03.808567: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2394450000 Hz
[2019-12-18 13:44:04Z]: converter err: output_log = process_utils.check_call(command),
[2019-12-18 13:44:04Z]: converter err: File "/utils/process_utils.py", line 43, in check_call
[2019-12-18 13:44:04Z]: converter std: 2019-12-18 13:44:03.809597: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x56dae60 executing computations on platform Host. Devices:
[2019-12-18 13:44:04Z]: converter err: raise subprocess.CalledProcessError(retcode, ' '.join(commands), output=out)
[2019-12-18 13:44:04Z]: converter std: 2019-12-18 13:44:03.809627: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
[2019-12-18 13:44:04Z]: converter err: subprocess.CalledProcessError: Command 'python /snpe-1.25.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc --graph /tmp/5gkxtk0h.3qf/input/drivermodel/frozen_inference_graph.pb -i Preprocessor/sub 1,300,300,3 --dlc /tmp/5gkxtk0h.3qf/output/model.dlc --out_node detection_boxes --out_node detection_classes --out_node detection_scores --allow_unconsumed_nodes' returned non-zero exit status 1
[2019-12-18 13:44:04Z]: converter std: 2019-12-18 13:44:03,943 - 109 - ERROR - Encountered Error: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/map/while/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[2019-12-18 13:44:04Z]: converter std: Traceback (most recent call last):
[2019-12-18 13:44:04Z]: converter std: File "/snpe-1.25.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc", line 99, in main
[2019-12-18 13:44:04Z]: converter std: model = loader.load(args.graph, in_nodes, in_dims, args.in_type, args.out_node, session)
[2019-12-18 13:44:04Z]: converter std: File "/snpe-1.25.0/lib/python/snpe/converters/tensorflow/loader.py", line 50, in load
[2019-12-18 13:44:04Z]: converter std: graph_def = self.__import_graph(graph_pb_or_meta_path, session, out_node_names)
[2019-12-18 13:44:04Z]: converter std: File "/snpe-1.25.0/lib/python/snpe/converters/tensorflow/loader.py", line 108, in __import_graph
[2019-12-18 13:44:04Z]: converter std: tf.import_graph_def(graph_def, name="")
[2019-12-18 13:44:04Z]: converter std: File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
[2019-12-18 13:44:04Z]: converter std: return func(*args, **kwargs)
[2019-12-18 13:44:04Z]: converter std: File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/importer.py", line 430, in import_graph_def
[2019-12-18 13:44:04Z]: converter std: raise ValueError(str(e))
[2019-12-18 13:44:04Z]: converter std: ValueError: NodeDef mentions attr 'half_pixel_centers' not in Op<name=ResizeBilinear; signature=images:T, size:int32 -> resized_images:float; attr=T:type,allowed=[DT_INT8, DT_UINT8, DT_INT16, DT_UINT16, DT_INT32, DT_INT64, DT_BFLOAT16, DT_HALF, DT_FLOAT, DT_DOUBLE]; attr=align_corners:bool,default=false>; NodeDef: {{node Preprocessor/map/while/ResizeImage/resize/ResizeBilinear}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[2019-12-18 13:44:04Z]: converter std: Execution took 0.001766s for ['python', '/snpe-1.25.0/bin/x86_64-linux-clang/snpe-tensorflow-to-dlc', '--graph', '/tmp/5gkxtk0h.3qf/input/drivermodel/frozen_inference_graph.pb', '-i', u'Preprocessor/sub', u'1,300,300,3', '--dlc', '/tmp/5gkxtk0h.3qf/output/model.dlc', '--out_node', u'detection_boxes', '--out_node', u'detection_classes', '--out_node', u'detection_scores', '--allow_unconsumed_nodes'] in /app
[2019-12-18 13:44:04Z]: Conversion failed: Converter returned exit code: 1
[2019-12-18 13:44:04Z]: Conversion completed with result Failure

Model convert failed, unexpected error response: {'code': 'ModelConvertFailed', 'details': [{'code': 'CompileModelFailed', 'message': 'aml://artifact/LocalUpload/3055b4b964b74a37a12b2ff22df67e8b/conversion_log'}]} False
```
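For what it's worth, both failures point at the same root cause: TensorFlow 1.13+ writes a `half_pixel_centers` attribute on `ResizeBilinear` nodes, and the older TensorFlow bundled with SNPE 1.25.0 cannot parse it, so `tf.import_graph_def` rejects the frozen graph. One commonly suggested workaround (a sketch, not an official SNPE fix, and only safe when the attribute is `false`, the old default behaviour) is to strip the attribute from the GraphDef before conversion. The helper below works on any iterable of nodes that expose an `op` field and an `attr` mapping, which is exactly the shape of `graph_def.node` in a parsed `frozen_inference_graph.pb`:

```python
# Workaround sketch (assumption, not an official SNPE fix): remove the
# 'half_pixel_centers' attribute that TF >= 1.13 writes on ResizeBilinear
# nodes, so the older TensorFlow bundled with snpe-tensorflow-to-dlc can
# import the graph. Only safe when the attribute is False (old behaviour).

def strip_half_pixel_centers(nodes):
    """Delete 'half_pixel_centers' from every ResizeBilinear node.

    `nodes` is any iterable of objects with an `op` field and an `attr`
    mapping -- e.g. `graph_def.node` from a parsed frozen graph.
    Returns the number of attributes removed.
    """
    removed = 0
    for node in nodes:
        if node.op == "ResizeBilinear" and "half_pixel_centers" in node.attr:
            del node.attr["half_pixel_centers"]
            removed += 1
    return removed


# With TensorFlow 1.x installed, the full round trip would look like:
#
#   import tensorflow as tf
#   graph_def = tf.GraphDef()
#   with open("frozen_inference_graph.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   strip_half_pixel_centers(graph_def.node)
#   with open("frozen_inference_graph_patched.pb", "wb") as f:
#       f.write(graph_def.SerializeToString())
```

Alternatively, re-exporting the frozen graph with a TensorFlow version that predates the attribute (1.12 or earlier) avoids the problem without editing the GraphDef.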

How can I convert a custom object-detection model from .pb to .dlc and .tflite (to deploy the model on the Vision AI camera)?
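For the .tflite half of the question, the TF 1.x Object Detection API ships a TFLite-specific exporter that replaces the unsupported post-processing subgraph with the custom `TFLite_Detection_PostProcess` op. A sketch of that route for an SSD model follows; the config path and checkpoint prefix are placeholders for your own training output, and `model.ckpt-XXXX` stands in for your checkpoint number:

```shell
# 1. Re-export the trained SSD model with the TFLite-friendly exporter
#    from the TF Object Detection API (paths are placeholders).
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=training/pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-XXXX \
    --output_directory=tflite_export \
    --add_postprocessing_op=true

# 2. Convert the exported tflite_graph.pb with the TF 1.14 CLI converter.
#    --allow_custom_ops is required because TFLite_Detection_PostProcess
#    is a custom op resolved at runtime by the TFLite detection API.
tflite_convert \
    --graph_def_file=tflite_export/tflite_graph.pb \
    --output_file=tflite_export/model.tflite \
    --input_arrays=normalized_input_image_tensor \
    --input_shapes=1,300,300,3 \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --allow_custom_ops
```

Note the input node changes to `normalized_input_image_tensor` after this export, since the exporter folds the preprocessing (including the failing `ResizeBilinear`) out of the converted graph.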

Thanks

tensorflowbutler commented 4 years ago

Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.

- What is the top-level directory of the model you are using
- Have I written custom code
- OS Platform and Distribution
- TensorFlow installed from
- TensorFlow version
- Bazel version
- CUDA/cuDNN version
- GPU model and memory
- Exact command to reproduce

0chandansharma commented 4 years ago

- What is the top-level directory of the model you are using: Object detection
- Have I written custom code: N/A
- OS Platform and Distribution: Windows 10
- TensorFlow installed from: pip install tensorflow
- TensorFlow version: 1.14.0
- Bazel version: N/A
- CUDA/cuDNN version: 10
- GPU model and memory: RTX 2060
- Exact command to reproduce: `snpe-tensorflow-to-dlc --graph /input/model/frozen_inference_graph.pb -i Preprocessor/sub 1,300,300,3 --dlc /output/model.dlc --out_node detection_boxes --out_node detection_classes --out_node detection_scores --allow_unconsumed_nodes`

Model: MOBILENET_SSD