fastmachinelearning / hls4ml

Machine learning on FPGAs using HLS
https://fastmachinelearning.org/hls4ml
Apache License 2.0

Unsupported layer error #753

Closed sandeep1404 closed 1 year ago

sandeep1404 commented 1 year ago

Hi, I am using the latest master branch of hls4ml. I am trying to synthesize my model, which is given below:

Model: "model_4"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 input (InputLayer)             [(None, 128, 128, 3  0           []                               
                                )]                                                                

 conv2d_94 (Conv2D)             (None, 128, 128, 16  448         ['input[0][0]']                  
                                )                                                                 

 conv2d_95 (Conv2D)             (None, 128, 128, 16  2320        ['conv2d_94[0][0]']              
                                )                                                                 

 conv2d_96 (Conv2D)             (None, 128, 128, 16  2320        ['conv2d_95[0][0]']              
                                )                                                                 

 conv2d_97 (Conv2D)             (None, 128, 128, 16  2320        ['conv2d_96[0][0]']              
                                )                                                                 

 add_37 (Add)                   (None, 128, 128, 16  0           ['conv2d_97[0][0]',              
                                )                                 'conv2d_94[0][0]']              

 conv2d_98 (Conv2D)             (None, 128, 128, 16  2320        ['add_37[0][0]']                 
                                )                                                                 

 conv2d_99 (Conv2D)             (None, 128, 128, 16  2320        ['conv2d_98[0][0]']              
                                )                                                                 

 add_38 (Add)                   (None, 128, 128, 16  0           ['conv2d_99[0][0]',              
                                )                                 'add_37[0][0]']                 

 activation_22 (Activation)     (None, 128, 128, 16  0           ['add_38[0][0]']                 
                                )                                                                 

 conv2d_100 (Conv2D)            (None, 128, 128, 16  2320        ['activation_22[0][0]']          
                                )                                                                 

 conv2d_101 (Conv2D)            (None, 128, 128, 16  272         ['conv2d_100[0][0]']             
                                )                                                                 

 add_39 (Add)                   (None, 128, 128, 16  0           ['conv2d_101[0][0]',             
                                )                                 'activation_22[0][0]']          

 activation_23 (Activation)     (None, 128, 128, 16  0           ['add_39[0][0]']                 
                                )                                                                 

 global_average_pooling2d_19 (G  (None, 16)          0           ['activation_23[0][0]']          
 lobalAveragePooling2D)                                                                           

 tf.expand_dims_22 (TFOpLambda)  (None, 1, 16)       0           ['global_average_pooling2d_19[0][
                                                                 0]']                             

 tf.expand_dims_23 (TFOpLambda)  (None, 1, 1, 16)    0           ['tf.expand_dims_22[0][0]']      

 conv2d_102 (Conv2D)            (None, 1, 1, 2)      290         ['tf.expand_dims_23[0][0]']      

 conv2d_103 (Conv2D)            (None, 1, 1, 16)     304         ['conv2d_102[0][0]']             

 multiply_17 (Multiply)         (None, 128, 128, 16  0           ['conv2d_103[0][0]',             
                                )                                 'activation_23[0][0]']          

 conv2d_104 (Conv2D)            (None, 128, 128, 3)  435         ['multiply_17[0][0]']            

 add_40 (Add)                   (None, 128, 128, 3)  0           ['conv2d_104[0][0]',             
                                                                  'input[0][0]']                  

==================================================================================================
Total params: 15,669
Trainable params: 15,669
Non-trainable params: 0
__________________________________________________________________________________________________

When I tried to convert this Keras model to an HLS model, I got the following error. I think a few of the layers, such as TensorFlow's expand_dims (wrapped as a TFOpLambda layer), are not supported by hls4ml.

Interpreting Model
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Cell In[26], line 9
      1 import hls4ml
      3 # hls4ml.model.optimizer.OutputRoundingSaturationMode.layers = ['Activation']
      4 # hls4ml.model.optimizer.OutputRoundingSaturationMode.rounding_mode = 'AP_RND'
      5 # hls4ml.model.optimizer.OutputRoundingSaturationMode.saturation_mode = 'AP_SAT'
----> 9 hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
     10 # hls_config['Model']['Precision'] = 'ap_fixed<16,6>'
     11 
     12 ############################################
     14 hls_config['Model']['Strategy'] = 'Resource'

File ~/.local/lib/python3.8/site-packages/hls4ml/utils/config.py:137, in config_from_keras_model(model, granularity, backend, default_precision, default_reuse_factor)
    133     model_arch = json.loads(model.to_json())
    135 reader = hls4ml.converters.KerasModelReader(model)
--> 137 layer_list, _, _ = hls4ml.converters.parse_keras_model(model_arch, reader)
    139 def make_layer_config(layer):
    140     cls_name = layer['class_name']

File ~/.local/lib/python3.8/site-packages/hls4ml/converters/keras_to_hls.py:307, in parse_keras_model(model_arch, reader)
    305 for keras_layer in layer_config:
    306     if keras_layer['class_name'] not in supported_layers:
--> 307         raise Exception('ERROR: Unsupported layer type: {}'.format(keras_layer['class_name']))
    309 output_shapes = {}
    310 output_shape = None

Exception: ERROR: Unsupported layer type: TFOpLambda

My configuration code is as follows:

import hls4ml

# hls4ml.model.optimizer.OutputRoundingSaturationMode.layers = ['Activation']
# hls4ml.model.optimizer.OutputRoundingSaturationMode.rounding_mode = 'AP_RND'
# hls4ml.model.optimizer.OutputRoundingSaturationMode.saturation_mode = 'AP_SAT'

hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')
############################################

hls_config['Model']['Strategy'] = 'Resource'
hls_config['Model']['ReuseFactor'] = 1000000

hls_config['IOType'] = 'io_stream'

hls_config['LayerName']['conv2d_94']['ReuseFactor'] = 432
hls_config['LayerName']['conv2d_95']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_96']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_97']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_98']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_99']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_100']['ReuseFactor'] = 2304
hls_config['LayerName']['conv2d_101']['ReuseFactor'] = 256
hls_config['LayerName']['conv2d_102']['ReuseFactor'] = 288
hls_config['LayerName']['conv2d_103']['ReuseFactor'] = 288
hls_config['LayerName']['conv2d_104']['ReuseFactor'] = 432

hls_config['LayerName']['conv2d_94']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_95']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_96']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_97']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_98']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_99']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_100']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_101']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_102']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_103']['Strategy'] = 'Resource'
hls_config['LayerName']['conv2d_104']['Strategy'] = 'Resource'

###############################################

print("-----------------------------------")
print("Configuration")
plotting.print_dict(hls_config)
print("-----------------------------------")

hls_model = hls4ml.converters.convert_from_keras_model(
        model,
        hls_config=hls_config,
        output_dir="model_pynq_fpn/hls4ml_prj_pynq_fpnrnet_32bit",
        backend="VivadoAccelerator",
        board="pynq-z2",
        io_type='io_stream',
        part='xc7z020-clg400-1'
    )

Is there any way to bypass the layers that are not supported by hls4ml and still synthesize the model, or how can I overcome this issue? My model has an expand layer as well as Multiply and Add layers; are these supported for synthesis? Please help me in this regard. Thank you in advance.

jmduarte commented 1 year ago

Hi, TFOpLambda layers are not (and really cannot be) generically supported, because they can represent arbitrary TensorFlow operations.

I see you're using them for a simple reshape operation, so instead, you should use a standard Reshape layer.
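For reference, the two stacked tf.expand_dims calls take the pooled (None, 16) tensor to (None, 1, 1, 16), which a single `keras.layers.Reshape((1, 1, 16))` layer would also produce (the batch dimension is implicit in Reshape's target shape). A minimal NumPy sketch of the equivalence, with the layer names above assumed from this model:

```python
import numpy as np

# Pooled output of global_average_pooling2d_19: one 16-channel vector per sample.
x = np.arange(2 * 16, dtype=np.float32).reshape(2, 16)

# What the two TFOpLambda layers (tf.expand_dims_22 / _23) compute:
expanded = np.expand_dims(np.expand_dims(x, axis=1), axis=1)  # shape (2, 1, 1, 16)

# What a single Reshape((1, 1, 16)) layer would compute (batch dim preserved):
reshaped = x.reshape(x.shape[0], 1, 1, 16)

assert expanded.shape == (2, 1, 1, 16)
assert np.array_equal(expanded, reshaped)
print("equivalent")  # → equivalent
```

So replacing both TFOpLambda layers with one Reshape layer changes nothing numerically; it only makes the graph expressible with layers hls4ml can parse.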

If in the future you actually need a custom layer, you can use the Extension API to define the corresponding HLS for a custom layer: https://fastmachinelearning.org/hls4ml/advanced/extension.html
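As the traceback shows, the parser simply checks each layer's `class_name` against a supported set, so you can scan the architecture JSON yourself before converting. This is a hedged sketch; the toy architecture and the supported-layer set here are illustrative placeholders, not hls4ml's actual list:

```python
import json

def find_unsupported_layers(model_json, supported):
    """Return sorted class names in a Keras architecture JSON not in `supported`."""
    arch = json.loads(model_json)
    layers = arch['config']['layers']
    return sorted({l['class_name'] for l in layers if l['class_name'] not in supported})

# Toy architecture mimicking the model above (not real model.to_json() output).
toy = json.dumps({'config': {'layers': [
    {'class_name': 'InputLayer'},
    {'class_name': 'Conv2D'},
    {'class_name': 'TFOpLambda'},   # the tf.expand_dims wrapper
    {'class_name': 'Multiply'},
]}})

# Hypothetical supported set, for illustration only.
print(find_unsupported_layers(toy, {'InputLayer', 'Conv2D', 'Multiply', 'Add'}))
# → ['TFOpLambda']
```

In practice you would pass `model.to_json()` and the `supported_layers` set from `hls4ml.converters`; this just flags every layer that would raise the exception above in one pass.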

sandeep1404 commented 1 year ago

Thank you @jmduarte for the reply.