fastmachinelearning / hls4ml

Machine learning on FPGAs using HLS
https://fastmachinelearning.org/hls4ml
Apache License 2.0

Cannot unpack non-iterable NoneType object #756

Status: Closed. vandenBergArthur closed this issue 1 year ago.

vandenBergArthur commented 1 year ago

Hi all,

To demonstrate the issue, I created the following model:

from tensorflow.keras.layers import Input, Conv2D, Concatenate
from tensorflow.keras.models import Model

input_shape_x = (9, 25, 64)
input_x = Input(shape=input_shape_x, batch_size=1, name='input_x')

self_conv1 = Conv2D(filters=10, kernel_size=1, data_format='channels_last')(input_x)
self_conv2 = Conv2D(filters=10, kernel_size=1, data_format='channels_last')(input_x)

a = Concatenate(axis=-2)([self_conv1, self_conv2])

model = Model(inputs=input_x, outputs=a)

When running config = hls4ml.utils.config_from_keras_model(model, granularity='model'), I encounter the following error:

Interpreting Model
Topology:
Layer name: input_x, layer type: InputLayer, input shapes: [[1, 9, 25, 64]], output shape: [1, 9, 25, 64]

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[52], line 1
----> 1 config = hls4ml.utils.config_from_keras_model(model, granularity='model')

File ~/anaconda3/envs/hls4ml_thesis/lib/python3.8/site-packages/hls4ml/utils/config.py:137, in config_from_keras_model(model, granularity, backend, default_precision, default_reuse_factor)
    133     model_arch = json.loads(model.to_json())
    135 reader = hls4ml.converters.KerasModelReader(model)
--> 137 layer_list, _, _ = hls4ml.converters.parse_keras_model(model_arch, reader)
    139 def make_layer_config(layer):
    140     cls_name = layer['class_name']

File ~/anaconda3/envs/hls4ml_thesis/lib/python3.8/site-packages/hls4ml/converters/keras_to_hls.py:349, in parse_keras_model(model_arch, reader)
    346 else:
    347     input_names = None
--> 349 layer, output_shape = layer_handlers[keras_class](keras_layer, input_names, input_shapes, reader)
    351 print(
    352     'Layer name: {}, layer type: {}, input shapes: {}, output shape: {}'.format(
    353         layer['name'], layer['class_name'], input_shapes, output_shape
    354     )
    355 )
    356 layer_list.append(layer)

File ~/anaconda3/envs/hls4ml_thesis/lib/python3.8/site-packages/hls4ml/converters/keras/convolution.py:36, in parse_conv2d_layer(keras_layer, input_names, input_shapes, data_reader)
     32 assert 'Conv2D' in keras_layer['class_name']
     34 layer = parse_default_keras_layer(keras_layer, input_names)
---> 36 (layer['in_height'], layer['in_width'], layer['n_chan']) = parse_data_format(input_shapes[0], layer['data_format'])
     38 if 'filters' in keras_layer['config']:
     39     layer['n_filt'] = keras_layer['config']['filters']

TypeError: cannot unpack non-iterable NoneType object

I noticed that if I simply remove the batch_size=1 argument from input_x, the error goes away. But I don't understand where this error comes from, since the batch dimension is usually stripped away anyway (right?).

Any ideas where this problem comes from and how it can be solved?

Kind regards, Arthur

jmduarte commented 1 year ago

@vandenBergArthur thanks for the report.

Just a quick response before I can look into it in more detail later. Keras usually assumes that the first dimension of the model's input (i.e. the batch size) can be variable, and encodes it as None. This of course still supports feeding in inputs with a fixed batch size of 1.

On the hls4ml side, we expect models to be supplied this way (i.e. variable batch size), but then we only create HLS code / synthesize the model with a batch size of 1.
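To illustrate (standard Keras behavior, using the shapes from your example):

from tensorflow.keras.layers import Input

# Default: the batch dimension is left variable and shows up as None
x_var = Input(shape=(9, 25, 64), name='input_x')
print(x_var.shape)    # (None, 9, 25, 64)

# With batch_size=1 the leading dimension becomes a literal 1,
# which is the variant that triggers the error reported above
x_fixed = Input(shape=(9, 25, 64), batch_size=1, name='input_x')
print(x_fixed.shape)  # (1, 9, 25, 64)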

Is there a particular reason you need to specify the fixed batch size of 1 when building the model?

Otherwise, as you point out, the workaround is simply to not supply the batch size when building the model.
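If you need to keep a model that was already built with a fixed batch size, a rough sketch of a user-side workaround (drop_fixed_batch_size is just a hypothetical helper, not something provided by hls4ml or Keras) would be to rewrite the serialized model config so the input's batch dimension becomes None and rebuild the model from it:

import json
from tensorflow.keras.models import model_from_json

def drop_fixed_batch_size(model):
    """Return a copy of `model` whose InputLayer batch dimension is None."""
    cfg = json.loads(model.to_json())
    for layer in cfg['config']['layers']:
        if layer['class_name'] == 'InputLayer':
            shape = layer['config'].get('batch_input_shape')
            if shape is not None:
                layer['config']['batch_input_shape'] = [None] + list(shape[1:])
    new_model = model_from_json(json.dumps(cfg))
    new_model.set_weights(model.get_weights())
    return new_model

# config = hls4ml.utils.config_from_keras_model(drop_fixed_batch_size(model), granularity='model')

The rebuilt model has the same architecture and weights, only a variable batch dimension, so it can be passed to config_from_keras_model as usual.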

Nonetheless, we should be able to fix this in the hls4ml parsing.

vandenBergArthur commented 1 year ago

Hi @jmduarte, thank you for the very swift response!

Is there a particular reason you need to specify the fixed batch size of 1 when building the model?

I was experimenting to see whether I could 'trick' the model into using the batch dimension for something useful. Originally, I added an extra dimension to my tensor and concatenated the other tensors along it, like this:

from tensorflow.keras.layers import Input, Permute, Conv2D, Reshape, Concatenate
from tensorflow.keras.models import Model

input_shape_x = (64, 9, 25)
input_x = Input(shape=input_shape_x, name='input_x')

# Change the dimension order of the original input graph frame: (64, 9, 25) -> (9, 25, 64)
a = Permute((2, 3, 1))(input_x)

self_conv1 = Conv2D(filters=10, kernel_size=1, data_format='channels_last')(a)
self_conv2 = Conv2D(filters=10, kernel_size=1, data_format='channels_last')(a)
self_conv3 = Conv2D(filters=10, kernel_size=1, data_format='channels_last')(a)

# Reshape each 4D tensor to add an extra 5th dimension
# (the channel count must match the Conv2D filters above)
self_conv1 = Reshape(target_shape=(9, 25, 10, 1))(self_conv1)
self_conv2 = Reshape(target_shape=(9, 25, 10, 1))(self_conv2)
self_conv3 = Reshape(target_shape=(9, 25, 10, 1))(self_conv3)

# Concatenate the 3 tensors along the new 5th dimension
b = Concatenate(axis=-1)([self_conv1, self_conv2, self_conv3])

# Change the dimension order for the correct broadcasting of the adjacency matrix
c = Permute((1, 4, 3, 2))(b)

model = Model(inputs=input_x, outputs=c)

Note: this model is part of a bigger model (that's why it starts and ends with a Permute), and in the meantime I have learned that Permute / transposing these kinds of tensors is not supported, as your colleague mentioned in one of my previous posts (see #746).

But I quickly found out that concatenation of tensors with rank > 3 is not yet supported either. Hence, I tried to use the batch dimension to stack the tensors along instead: I specified batch_size=1 and tried to concatenate with a = Concatenate(axis=0)([self_conv1, self_conv2]).

I can conclude that using the batch dimension for something else is not a good solution, so I will close this issue; you do not need to look into it any further.

Thanks again for your input!