fastmachinelearning / hls4ml-tutorial

Tutorial notebooks for hls4ml
http://fastmachinelearning.org/hls4ml-tutorial/

hls4ml doesn't work for me #39

Open zahraaayii opened 1 year ago

zahraaayii commented 1 year ago

my model is:

```python
model = Sequential()
model.add(BatchNormalization(input_shape=(1408, 1)))
model.add(Conv1D(3, kernel_size=(100), strides=2))
model.add(Activation("relu"))
model.add(MaxPooling1D(pool_size=(2), strides=2))
model.add(Conv1D(50, (10)))
model.add(MaxPooling1D(pool_size=(2), strides=2))
model.add(Activation("relu"))
model.add(Conv1D(30, (30)))
model.add(MaxPooling1D(pool_size=(2)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(4, activation='softmax'))
```

and my hls4ml configuration:

```python
config = hls4ml.utils.config_from_keras_model(model, granularity='model')

print("-----------------------------------")
print("Configuration")
print_dict(config)
print("-----------------------------------")

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='model_1/hls4ml_prj_2',
    part='xcvu9p-flgb2104-2-i',
    io_type='io_stream',
)

hls_model.build(csim=False)
```

The build fails with the error `[XFORM 203-504] Stop unrolling loop`.

Why does this happen? Can you help me?

jmduarte commented 1 year ago

hi @zahraaayii. This typically means the layers you're trying to use are too large for the way hls4ml writes the HLS.
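For context, a minimal sketch of one common mitigation, assuming the same `model` and imports as in the question above: switching the model-level `Strategy` to `Resource` and raising the `ReuseFactor` asks hls4ml to share multipliers over several clock cycles instead of fully unrolling each layer. The values and the output directory below are illustrative only, not tuned settings from this thread.

```python
# Sketch only: relax full unrolling with the Resource strategy.
config = hls4ml.utils.config_from_keras_model(model, granularity='model')
config['Model']['Strategy'] = 'Resource'  # avoid fully unrolling large layers
config['Model']['ReuseFactor'] = 64       # trade latency for fewer multipliers (illustrative value)

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='model_1/hls4ml_prj_resource',  # hypothetical output directory
    part='xcvu9p-flgb2104-2-i',
    io_type='io_stream',
)
hls_model.build(csim=False)
```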

I see your first Conv1D layer has a kernel size of 100 which is quite large.

We have improvements for larger layers in more recent versions of hls4ml (like 0.7.0rc1, which was just released), but I doubt it will work even then.

I would consider if you really need such large kernel sizes, or if they can be reduced.
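For illustration, a sketch of the same architecture with smaller kernels; the values 7, 5, and 3 are placeholder choices, not recommendations from this thread, and the model would need to be retrained to check accuracy.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (BatchNormalization, Conv1D, Activation,
                                     MaxPooling1D, Flatten, Dropout, Dense)

# Same overall structure as the model in the question, but with much
# smaller (placeholder) kernel sizes so the per-layer loops stay small.
model = Sequential()
model.add(BatchNormalization(input_shape=(1408, 1)))
model.add(Conv1D(3, kernel_size=7, strides=2))
model.add(Activation("relu"))
model.add(MaxPooling1D(pool_size=2, strides=2))
model.add(Conv1D(50, 5))
model.add(MaxPooling1D(pool_size=2, strides=2))
model.add(Activation("relu"))
model.add(Conv1D(30, 3))
model.add(MaxPooling1D(pool_size=2))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(4, activation='softmax'))
```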

zahraaayii commented 1 year ago


Can you tell me the maximum kernel size that is supported? Would 10 or 30 be okay? Are my filter sizes reasonable?