tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Error when converting: StatefulPartitionedCall/model_1/batch_normalization_13/AssignNewValue was passed float from Func/StatefulPartitionedCall/input/_7:0 incompatible with expected resource. #7234

Open edge7 opened 1 year ago

edge7 commented 1 year ago

System information

Describe the current behavior

I am trying to convert a tf_saved_model like this:

`tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve pix_pix_256_16 pix_pix_256_16_web`

Describe the expected behavior

The model gets converted.

Standalone code to reproduce the issue

Run the above command using this model.

Other info / logs

```
Traceback (most recent call last):
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 866, in _convert_tf_saved_model
    frozen_graph = _freeze_saved_model_v2(concrete_func, control_flow_v2)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 528, in _freeze_saved_model_v2
    return convert_to_constants.convert_variables_to_constants_v2(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1169, in convert_variables_to_constants_v2
    return _construct_concrete_function(func, output_graph_def,
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1094, in _construct_concrete_function
    new_func = wrap_function.function_from_graph_def(output_graph_def,
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/eager/wrap_function.py", line 655, in function_from_graph_def
    wrapped_import = wrap_function(_imports_graph_def, [])
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/eager/wrap_function.py", line 619, in wrap_function
    func_graph.func_graph_from_py_func(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1283, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/eager/wrap_function.py", line 83, in __call__
    return self.call_with_variable_creator_scope(self._fn)(*args, **kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/eager/wrap_function.py", line 89, in wrapped
    return fn(*args, **kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/eager/wrap_function.py", line 649, in _imports_graph_def
    importer.import_graph_def(graph_def, name="")
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
    return func(*args, **kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/importer.py", line 403, in import_graph_def
    return _import_graph_def_internal(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/importer.py", line 505, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: Input 0 of node StatefulPartitionedCall/model_1/batch_normalization_13/AssignNewValue was passed float from Func/StatefulPartitionedCall/input/_7:0 incompatible with expected resource.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/edge7/.local/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(pip_main())
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/converter.py", line 827, in pip_main
    main([' '.join(sys.argv[1:])])
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/converter.py", line 831, in main
    convert(argv[0].split(' '))
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/converter.py", line 817, in convert
    _dispatch_converter(input_format, output_format, args, quantization_dtype_map,
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/converter.py", line 528, in _dispatch_converter
    tf_saved_model_conversion_v2.convert_tf_saved_model(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 1020, in convert_tf_saved_model
    _convert_tf_saved_model(output_dir, saved_model_dir=saved_model_dir,
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 876, in _convert_tf_saved_model
    frozen_initializer_graph) = _freeze_saved_model_v1(saved_model_dir,
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 497, in _freeze_saved_model_v1
    frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 357, in new_func
    return func(*args, **kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/graph_util_impl.py", line 317, in convert_variables_to_constants
    ret = convert_to_constants.convert_variables_to_constants_from_session_graph(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/convert_to_constants.py", line 1283, in convert_variables_to_constants_from_session_graph
    converter_data=_SessionConverterData(
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/convert_to_constants.py", line 943, in __init__
    graph_def = graph_util.extract_sub_graph(graph_def, output_node_names)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py", line 357, in new_func
    return func(*args, **kwargs)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/graph_util_impl.py", line 251, in extract_sub_graph
    _assert_nodes_are_present(name_to_node, dest_nodes)
  File "/home/edge7/.local/lib/python3.10/site-packages/tensorflow/python/framework/graph_util_impl.py", line 204, in _assert_nodes_are_present
    assert d in name_to_node, "%s is not in graph" % d
```

Please note the following is the model summary; as you can see, it contains several batch_normalization and dropout layers (both called with training=True):

input_1 (InputLayer)           [(None, 256, 256, 3  0           []                               
                                )]                                                                

 conv2d (Conv2D)                (None, 128, 128, 8)  392         ['input_1[0][0]']                

 leaky_re_lu (LeakyReLU)        (None, 128, 128, 8)  0           ['conv2d[0][0]']                 

 conv2d_1 (Conv2D)              (None, 64, 64, 16)   2064        ['leaky_re_lu[0][0]']            

 batch_normalization (BatchNorm  (None, 64, 64, 16)  64          ['conv2d_1[0][0]']               
 alization)                                                                                       

 leaky_re_lu_1 (LeakyReLU)      (None, 64, 64, 16)   0           ['batch_normalization[0][0]']    

 conv2d_2 (Conv2D)              (None, 32, 32, 32)   8224        ['leaky_re_lu_1[0][0]']          

 batch_normalization_1 (BatchNo  (None, 32, 32, 32)  128         ['conv2d_2[0][0]']               
 rmalization)                                                                                     

 leaky_re_lu_2 (LeakyReLU)      (None, 32, 32, 32)   0           ['batch_normalization_1[0][0]']  

 conv2d_3 (Conv2D)              (None, 16, 16, 64)   32832       ['leaky_re_lu_2[0][0]']          

 batch_normalization_2 (BatchNo  (None, 16, 16, 64)  256         ['conv2d_3[0][0]']               
 rmalization)                                                                                     

 leaky_re_lu_3 (LeakyReLU)      (None, 16, 16, 64)   0           ['batch_normalization_2[0][0]']  

 conv2d_4 (Conv2D)              (None, 8, 8, 64)     65600       ['leaky_re_lu_3[0][0]']          

 batch_normalization_3 (BatchNo  (None, 8, 8, 64)    256         ['conv2d_4[0][0]']               
 rmalization)                                                                                     

 leaky_re_lu_4 (LeakyReLU)      (None, 8, 8, 64)     0           ['batch_normalization_3[0][0]']  

 conv2d_5 (Conv2D)              (None, 4, 4, 64)     65600       ['leaky_re_lu_4[0][0]']          

 batch_normalization_4 (BatchNo  (None, 4, 4, 64)    256         ['conv2d_5[0][0]']               
 rmalization)                                                                                     

 leaky_re_lu_5 (LeakyReLU)      (None, 4, 4, 64)     0           ['batch_normalization_4[0][0]']  

 conv2d_6 (Conv2D)              (None, 2, 2, 64)     65600       ['leaky_re_lu_5[0][0]']          

 batch_normalization_5 (BatchNo  (None, 2, 2, 64)    256         ['conv2d_6[0][0]']               
 rmalization)                                                                                     

 leaky_re_lu_6 (LeakyReLU)      (None, 2, 2, 64)     0           ['batch_normalization_5[0][0]']  

 conv2d_7 (Conv2D)              (None, 1, 1, 512)    524800      ['leaky_re_lu_6[0][0]']          

 activation (Activation)        (None, 1, 1, 512)    0           ['conv2d_7[0][0]']               

 conv2d_transpose (Conv2DTransp  (None, 2, 2, 64)    524288      ['activation[0][0]']             
 ose)                                                                                             

 batch_normalization_6 (BatchNo  (None, 2, 2, 64)    256         ['conv2d_transpose[0][0]']       
 rmalization)                                                                                     

 dropout (Dropout)              (None, 2, 2, 64)     0           ['batch_normalization_6[0][0]']  

 concatenate (Concatenate)      (None, 2, 2, 128)    0           ['dropout[0][0]',                
                                                                  'leaky_re_lu_6[0][0]']          

 activation_1 (Activation)      (None, 2, 2, 128)    0           ['concatenate[0][0]']            

 conv2d_transpose_1 (Conv2DTran  (None, 4, 4, 64)    131072      ['activation_1[0][0]']           
 spose)                                                                                           

 batch_normalization_7 (BatchNo  (None, 4, 4, 64)    256         ['conv2d_transpose_1[0][0]']     
 rmalization)                                                                                     

 dropout_1 (Dropout)            (None, 4, 4, 64)     0           ['batch_normalization_7[0][0]']  

 concatenate_1 (Concatenate)    (None, 4, 4, 128)    0           ['dropout_1[0][0]',              
                                                                  'leaky_re_lu_5[0][0]']          

 activation_2 (Activation)      (None, 4, 4, 128)    0           ['concatenate_1[0][0]']          

 conv2d_transpose_2 (Conv2DTran  (None, 8, 8, 64)    131072      ['activation_2[0][0]']           
 spose)                                                                                           

 batch_normalization_8 (BatchNo  (None, 8, 8, 64)    256         ['conv2d_transpose_2[0][0]']     
 rmalization)                                                                                     

 dropout_2 (Dropout)            (None, 8, 8, 64)     0           ['batch_normalization_8[0][0]']  

 concatenate_2 (Concatenate)    (None, 8, 8, 128)    0           ['dropout_2[0][0]',              
                                                                  'leaky_re_lu_4[0][0]']          

 activation_3 (Activation)      (None, 8, 8, 128)    0           ['concatenate_2[0][0]']          

 conv2d_transpose_3 (Conv2DTran  (None, 16, 16, 64)  131072      ['activation_3[0][0]']           
 spose)                                                                                           

 batch_normalization_9 (BatchNo  (None, 16, 16, 64)  256         ['conv2d_transpose_3[0][0]']     
 rmalization)                                                                                     

 dropout_3 (Dropout)            (None, 16, 16, 64)   0           ['batch_normalization_9[0][0]']  

 concatenate_3 (Concatenate)    (None, 16, 16, 128)  0           ['dropout_3[0][0]',              
                                                                  'leaky_re_lu_3[0][0]']          

 activation_4 (Activation)      (None, 16, 16, 128)  0           ['concatenate_3[0][0]']          

 conv2d_transpose_4 (Conv2DTran  (None, 32, 32, 32)  65536       ['activation_4[0][0]']           
 spose)                                                                                           

 batch_normalization_10 (BatchN  (None, 32, 32, 32)  128         ['conv2d_transpose_4[0][0]']     
 ormalization)                                                                                    

 concatenate_4 (Concatenate)    (None, 32, 32, 64)   0           ['batch_normalization_10[0][0]', 
                                                                  'leaky_re_lu_2[0][0]']          

 activation_5 (Activation)      (None, 32, 32, 64)   0           ['concatenate_4[0][0]']          

 conv2d_transpose_5 (Conv2DTran  (None, 64, 64, 16)  16384       ['activation_5[0][0]']           
 spose)                                                                                           

 batch_normalization_11 (BatchN  (None, 64, 64, 16)  64          ['conv2d_transpose_5[0][0]']     
 ormalization)                                                                                    

 concatenate_5 (Concatenate)    (None, 64, 64, 32)   0           ['batch_normalization_11[0][0]', 
                                                                  'leaky_re_lu_1[0][0]']          

 activation_6 (Activation)      (None, 64, 64, 32)   0           ['concatenate_5[0][0]']          

 conv2d_transpose_6 (Conv2DTran  (None, 128, 128, 8)  4096       ['activation_6[0][0]']           
 spose)                                                                                           

 batch_normalization_12 (BatchN  (None, 128, 128, 8)  32         ['conv2d_transpose_6[0][0]']     
 ormalization)                                                                                    

 concatenate_6 (Concatenate)    (None, 128, 128, 16  0           ['batch_normalization_12[0][0]', 
                                )                                 'leaky_re_lu[0][0]']            

 activation_7 (Activation)      (None, 128, 128, 16  0           ['concatenate_6[0][0]']          
                                )                                                                 

 conv2d_transpose_7 (Conv2DTran  (None, 256, 256, 3)  771        ['activation_7[0][0]']           
 spose)                                                                                           

==================================================================================================
Total params: 1,771,867
Trainable params: 1,770,635
Non-trainable params: 1,232

edge7 commented 1 year ago

Please note the above error only appears when the output model is set to be a graph model. If I remove the `--output_format=tfjs_graph_model` option, it works; however, a graph model is preferred, as it should be optimized for inference.

edge7 commented 1 year ago

Hi,

allow me to link this one to that issue, as the two are strictly related to each other.

pyu10055 commented 1 year ago

Hi @edge7, this is a known TensorFlow error, tracked at https://github.com/tensorflow/tensorflow/issues/31668. The fix seems to be setting trainable to False on the batch normalization layers before you export to a saved model. Can you give it a try? Thanks.
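The suggested workaround could be sketched roughly as follows. This is a minimal illustration, not code from the thread: `freeze_batchnorm` and the stub `Layer` class are hypothetical names, and TensorFlow itself is not imported. On a real Keras model you would iterate `model.layers`, check `isinstance(layer, tf.keras.layers.BatchNormalization)` rather than matching on the layer name, and then save the model before running the converter:

```python
# Sketch of the workaround: mark every batch-normalization layer as
# non-trainable before exporting the SavedModel. Stub objects stand in
# for tf.keras layers so the logic can be shown without TensorFlow.

class Layer:
    """Minimal stand-in for a Keras layer (hypothetical)."""
    def __init__(self, name, trainable=True):
        self.name = name
        self.trainable = trainable

def freeze_batchnorm(layers):
    """Set trainable=False on every batch-normalization layer.

    With real Keras layers you would test
    isinstance(layer, tf.keras.layers.BatchNormalization)
    instead of matching on the name prefix as done here.
    """
    for layer in layers:
        if layer.name.startswith("batch_normalization"):
            layer.trainable = False
    return layers

layers = [Layer("conv2d"), Layer("batch_normalization"), Layer("leaky_re_lu")]
freeze_batchnorm(layers)
print([(l.name, l.trainable) for l in layers])
# [('conv2d', True), ('batch_normalization', False), ('leaky_re_lu', True)]
```

After freezing the batch-normalization layers, the model would be saved again as a SavedModel and the same `tensorflowjs_converter` command re-run.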

edge7 commented 1 year ago

Yes, that would work, but then I'd need to call the model with training=True, which hurts performance a lot.
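For context on why the training flag matters here: with training=True, batch normalization normalizes each batch with that batch's own mean and variance rather than the stored moving averages (and dropout stays active), so the output depends on the batch contents. A minimal numeric sketch of the two normalization modes, in plain Python with a simplified layer (no learned scale/offset; the moving mean and variance values are made up for illustration):

```python
import statistics

def batchnorm(batch, training, moving_mean, moving_var, eps=1e-3):
    """Normalize a 1-D batch the way BatchNormalization does, simplified
    (no learned gamma/beta). training=True uses the batch's own
    statistics; training=False uses the stored moving averages."""
    if training:
        mean = statistics.fmean(batch)
        var = statistics.pvariance(batch)
    else:
        mean, var = moving_mean, moving_var
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]

batch = [1.0, 2.0, 3.0]
# Moving statistics as accumulated during training (made-up values).
train_out = batchnorm(batch, training=True, moving_mean=1.5, moving_var=0.5)
infer_out = batchnorm(batch, training=False, moving_mean=1.5, moving_var=0.5)
print(train_out)  # centered on this batch's own mean
print(infer_out)  # deterministic given the stored statistics
```

The two outputs differ, which is why a pix2pix-style model that was trained with training=True can behave differently (and, as noted above, more slowly) when forced into inference mode.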