rkovachfuentes opened 11 months ago
@jmitrevs roping you in on this one :)
I will take a look, but one possibility is to use the ExtensionAPI (https://fastmachinelearning.org/hls4ml/advanced/extension.html) to add support for the layer.
Since this is a basic operation in Keras (slicing another layer's output), the Extension API looks like a good workaround, but not the correct long-term solution.
Hi @lgray, @rkovachfuentes, I made a simpler minimal example:
```python
from tensorflow.keras.layers import Input, Concatenate
from tensorflow.keras.models import Model

import hls4ml


def createModel(shape=(3, 3, 2)):
    x = x_in = Input(shape)
    # Slice the channel axis, then concatenate the slices along axis 1
    xy = Concatenate(axis=1)([x[..., :1], x[..., 1:]])
    model = Model(inputs=x_in, outputs=xy)
    return model


model = createModel()
model.save('model.h5')

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
```
https://gist.github.com/jmduarte/08eee28b979c60cfac513b0307b41d4d#file-test-py-L1-L13
This simple model gets two `SlicingOpLambda` operators.
Up until now, we haven't tried to handle any `Lambda` or `TFOp` layers in Keras models because, in general, they can be quite arbitrarily defined. That being said, I can imagine supporting a subset of slicing operators, especially since they are frequently needed.
The easiest way I can imagine to implement this is to parse these layers and then insert an HLS function that simply slices the input to produce the required output. This will never be optimal, because it just adds latency to create a new output with fewer values.
Also, if, as in this example, you are actually rearranging the data, that can always be handled more efficiently by rewriting your model or by using a custom layer that does the rearrangement precisely the way you want.
So from a hardware-algorithm codesign perspective there will always be a better way to do it than my simple generic implementation, which makes this more complicated than it looks at first glance.
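To illustrate that kind of rewrite for the minimal example above (a sketch, not hls4ml-specific code): the concatenation of channel slices is just a fixed rearrangement, equivalent to a transpose followed by a reshape, which in Keras would be `Permute((3, 1, 2))` plus `Reshape((6, 3, 1))`, avoiding `SlicingOpLambda` entirely. A numpy check of the equivalence:

```python
import numpy as np

# One input tensor of shape (3, 3, 2), channels-last, with a batch dim.
x = np.arange(18).reshape(1, 3, 3, 2)

# Original model: slice off each channel and concatenate along axis 1.
sliced = np.concatenate([x[..., :1], x[..., 1:]], axis=1)  # shape (1, 6, 3, 1)

# Equivalent rewrite without slicing: move channels to axis 1, then reshape.
# In Keras: Permute((3, 1, 2)) followed by Reshape((6, 3, 1)).
rearranged = x.transpose(0, 3, 1, 2).reshape(1, 6, 3, 1)

assert np.array_equal(sliced, rearranged)
```

Whether this particular rewrite is the right one depends on what the slicing in the real model is meant to achieve, but it shows the general idea of replacing slices with supported reshaping layers.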
I'm happy to chat about it more, especially to find out more about your specific model's needs. I think there are two separate issues with probably two separate solutions.
1. Supporting slicing in general (with potentially suboptimal performance)
2. Supporting and getting good performance for your specific model
Maybe @vloncar has thoughts as well.
Ah, this is already super useful! There's definitely the possibility for remediation on the model design side; this is the first model we got to train well with a rather small number of parameters.
I'll send you an email to set up a chat with @rkovachfuentes and Jennet, and we can walk you through what we're up to.
Quick summary
Unsupported Layer Type error is raised when trying to configure qkeras model (using hls4ml.utils.config_from_keras_model).
Details
Steps to Reproduce
1. Clone the hls4ml repository.
2. Checkout the master branch, at commit hash dd18adb1d3fb1ac3bf18c2b7feb37f44c10b6262.
3. Reload the qkeras model from h5, using:

```python
model = qkeras.utils.load_qmodel('/home/rkovachf/hls4ml-tutorial/hls4mltest.h5', custom_objects)
```
Model architecture is described below:
Error occurs:
```
File ~/.conda/envs/hls4ml-tutorial/lib/python3.10/site-packages/hls4ml/utils/config.py:138, in config_from_keras_model(model, granularity, backend, default_precision, default_reuse_factor)
    134 model_arch = json.loads(model.to_json())
    136 reader = hls4ml.converters.KerasModelReader(model)
--> 138 layer_list, _, _ = hls4ml.converters.parse_keras_model(model_arch, reader)
    140 def make_layer_config(layer):
    141     cls_name = layer['class_name']

File ~/.conda/envs/hls4ml-tutorial/lib/python3.10/site-packages/hls4ml/converters/keras_to_hls.py:226, in parse_keras_model(model_arch, reader)
    224 for keras_layer in layer_config:
    225     if keras_layer['class_name'] not in supported_layers:
--> 226         raise Exception('ERROR: Unsupported layer type: {}'.format(keras_layer['class_name']))
    228 output_shapes = {}
    229 output_shape = None

Exception: ERROR: Unsupported layer type: SlicingOpLambda
```
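For context, the check that raises this exception (visible in the traceback) just compares each layer's `class_name` from the model JSON against the converter's `supported_layers` list. A minimal pre-flight scan of a model architecture can surface offending layers before conversion; here the dict stands in for `json.loads(model.to_json())`, and the supported set is an illustrative subset, not hls4ml's real list:

```python
# Hypothetical pre-check mirroring the layer scan in parse_keras_model.
# model_arch stands in for json.loads(model.to_json()); supported_layers
# is an illustrative subset, not hls4ml's actual supported_layers.
model_arch = {
    'config': {
        'layers': [
            {'class_name': 'InputLayer'},
            {'class_name': 'SlicingOpLambda'},
            {'class_name': 'SlicingOpLambda'},
            {'class_name': 'Concatenate'},
        ]
    }
}
supported_layers = {'InputLayer', 'Concatenate', 'Dense', 'Conv2D'}

unsupported = sorted({layer['class_name']
                      for layer in model_arch['config']['layers']
                      if layer['class_name'] not in supported_layers})
print(unsupported)  # ['SlicingOpLambda']
```

Running a scan like this on the qkeras model above would report `SlicingOpLambda` up front instead of failing inside `config_from_keras_model`.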