Closed ghost closed 2 years ago
Are you using the master branch of hls4ml? If so, please try the pip installation.
Thanks for your reply. I've tried both, and the problem persisted.
Hi, how bad was the prediction?
Hi. The hls_model gives random predictions.
With the pip installation (version 0.5.0) I tried to trace the HLS model, but several warnings appeared and I don't know what they mean:

```
Recompiling myproject with tracing
Writing HLS project
Done
WARNING: Hls::stream 'hls::stream<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0> >.625' contains leftover data, which may result in RTL simulation hanging.
WARNING: Hls::stream 'hls::stream<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0> >.624' contains leftover data, which may result in RTL simulation hanging.
WARNING: Hls::stream 'hls::stream<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0> >.623' contains leftover data, which may result in RTL simulation hanging.
WARNING: Hls::stream 'hls::stream<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0> >.622' contains leftover data, which may result in RTL simulation hanging.
```
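For context on these warnings: an hls::stream is a FIFO between layers, and "leftover data" means a producer wrote more words into it than the downstream consumer read, usually because two adjacent layers disagree about the tensor size. A toy Python sketch of the mismatch, using collections.deque to stand in for hls::stream (the word counts here are made up for illustration):

```python
from collections import deque

# Stand-in for hls::stream<ap_fixed<16,6>>: a FIFO of output words.
stream = deque()

# Producer: suppose a conv layer writes 100 output words.
for i in range(100):
    stream.append(i)

# Consumer: the next layer is (mis)configured to read only 96 words,
# e.g. because its expected input shape does not match the producer's.
for _ in range(96):
    stream.popleft()

# The unread words are exactly the "leftover data" the HLS testbench
# warns about; in RTL simulation they can leave the FIFO blocked.
print(f"leftover words: {len(stream)}")
```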
Something went wrong with io_stream. Could you figure out which layer caused the issue? For example, check firmware/my_project.cpp.
Hi, thanks for the reply. I checked firmware/my_project.cpp but could not figure out what went wrong, so I tried the provided CNN tutorial, and it worked fine. I then compared the firmware/my_project.cpp files of the two models; a portion of each is copied below, and the extra lines appear only in the tutorial's file. Do you have any idea what I've been doing wrong? The differences between my model and the tutorial's are that I used larger input images and larger reuse factors. I am quite new to this and would appreciate any help you could provide. Thank you very much.
MY MODEL:

```cpp
// ****
// NETWORK INSTANTIATION
// ****
//hls-fpga-machine-learning insert layers

hls::stream<layer2_t> layer2_out("layer2_out");
#pragma HLS STREAM variable=layer2_out depth=11470
#pragma HLS STABLE variable=layer2_out
nnet::conv_2d_cl<input_t, layer2_t, config2>(Input_layer, layer2_out, w2, b2); // Conv_0
```

CNN TUTORIAL:

```cpp
// ****
// NETWORK INSTANTIATION
// ****
//hls-fpga-machine-learning insert layers

hls::stream<layer2_t> layer2_out("layer2_out");
#pragma HLS STREAM variable=layer2_out depth=900
#pragma HLS STABLE variable=layer2_out
nnet::conv_2d_cl<input_t, layer2_t, config2>(input_1, layer2_out, w2, b2); // conv_0
#ifndef SYNTHESIS              // <-- this guarded block appears only in the tutorial
nnet::save_layer_output
```
Hi, I think the C++ files are correct. Maybe the bit width is not enough for your input? Could you upload your project to a repository so I can take a closer look?
Thank you very much. I sent you an invitation.
Hi, I have tested your project, and here are my conclusions:
1. hls4ml doesn't support the Dropout layer; it has no way to randomly discard neurons. I tried printing the outputs before the Dropout layers, and those predictions looked fine.
2. The way you set up the HLS config causes an issue:

```python
config = hls4ml.utils.config_from_keras_model(model_pruned, granularity='name')
config['Model']['Precision'] = 'ap_fixed<16,6>'
```
Because the config was generated with granularity='name', each layer carries its own precision. I also looked at your input data, and 10 integer bits are not enough. So if you simply change the code to:

```python
config = hls4ml.utils.config_from_keras_model(model_pruned, granularity='name')
config['Model']['Precision'] = 'ap_fixed<32,16>'
```
the precision of each layer will still remain <16,6> and therefore cause the wrong predictions. So maybe you should set granularity = 'model' to modify the precision of the full model:

```
Model
  Precision: ap_fixed<32,16>
  ReuseFactor: 1
  Strategy: Resource
LayerName
  Input_layer
    Precision
      result: ap_fixed<16,6>
  Conv_0
    Precision
      weight: ap_fixed<16,6>
      bias: ap_fixed<16,6>
      result: ap_fixed<16,6>
    ReuseFactor: 1
  Conv_0_linear
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  BN_conv0
    Precision
      scale: ap_fixed<16,6>
      bias: ap_fixed<16,6>
    ReuseFactor: 1
  act_Cov_0
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  MaxPool2D_0
    Precision: ap_fixed<16,6>
  Conv_1
    Precision
      weight: ap_fixed<16,6>
      bias: ap_fixed<16,6>
      result: ap_fixed<16,6>
    ReuseFactor: 1
  Conv_1_linear
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  BN_conv1
    Precision
      scale: ap_fixed<16,6>
      bias: ap_fixed<16,6>
    ReuseFactor: 1
  act_Cov_1
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  MaxPool2D_1
    Precision: ap_fixed<16,6>
  Conv_2
    Precision
      weight: ap_fixed<16,6>
      bias: ap_fixed<16,6>
      result: ap_fixed<16,6>
    ReuseFactor: 1
  Conv_2_linear
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  BN_conv2
    Precision
      scale: ap_fixed<16,6>
      bias: ap_fixed<16,6>
    ReuseFactor: 1
  act_Cov_2
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  MaxPool2D_2
    Precision: ap_fixed<16,6>
  Conv_3
    Precision
      weight: ap_fixed<16,6>
      bias: ap_fixed<16,6>
      result: ap_fixed<16,6>
    ReuseFactor: 1
  Conv_3_linear
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  BN_conv3
    Precision
      scale: ap_fixed<16,6>
      bias: ap_fixed<16,6>
    ReuseFactor: 1
  act_Cov_3
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  MaxPool2D_3
    Precision: ap_fixed<16,6>
  Dense_0
    Precision
      weight: ap_fixed<16,6>
      bias: ap_fixed<16,6>
      result: ap_fixed<16,6>
    ReuseFactor: 1
  Dense_0_linear
    Precision: ap_fixed<16,6>
    ReuseFactor: 1
    table_size: 1024
    table_t: ap_fixed<18,8>
  BN_dense0
    Precision
      scale: ap_fixed<16,6>
      bias: ap_fixed<16,6>
    ReuseFactor: 1
Interpreting Model
```
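On the point that the integer bits are not enough: ap_fixed<W,I> covers roughly [-2^(I-1), 2^(I-1)), so ap_fixed<16,6> cannot represent anything at or above 32, while ap_fixed<32,16> reaches up to about 32768. Here is a toy pure-Python model of round-and-saturate quantization to show the range limit (illustrative only, not the real Vivado HLS ap_fixed semantics):

```python
def ap_fixed_sat(value, total_bits, int_bits):
    """Toy model of an ap_fixed<W,I>-style type with rounding and
    saturation: I integer bits (including sign) and W-I fractional bits.
    Illustrative only -- not the actual ap_fixed implementation."""
    frac_bits = total_bits - int_bits
    lo = -(2 ** (int_bits - 1))                  # most negative value
    hi = 2 ** (int_bits - 1) - 2 ** -frac_bits   # largest representable value
    q = round(value * 2 ** frac_bits) / 2 ** frac_bits  # snap to the grid
    return max(lo, min(hi, q))                   # saturate instead of wrapping

print(ap_fixed_sat(100.0, 16, 6))   # saturates just below 32
print(ap_fixed_sat(100.0, 32, 16))  # fits: 16 integer bits cover 100.0
```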
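To make the granularity point concrete: with granularity='name' the returned config carries a Precision entry per layer, and overwriting config['Model']['Precision'] afterwards does not touch those entries. A toy illustration with plain dicts (heavily abbreviated; real hls4ml configs carry more keys per layer):

```python
# Rough shape of config_from_keras_model(..., granularity='name') output.
config = {
    'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1},
    'LayerName': {
        'Conv_0':  {'Precision': 'ap_fixed<16,6>'},
        'Dense_0': {'Precision': 'ap_fixed<16,6>'},
    },
}

# Overwriting only the model-level default...
config['Model']['Precision'] = 'ap_fixed<32,16>'

# ...leaves every per-layer entry untouched, which matches what was
# observed: the layers still build with ap_fixed<16,6>.
print(config['LayerName']['Conv_0']['Precision'])  # still ap_fixed<16,6>

# With name granularity, the per-layer entries must be updated directly:
for layer_cfg in config['LayerName'].values():
    layer_cfg['Precision'] = 'ap_fixed<32,16>'
```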
Hi. Sorry to bother you again. Did you get accurate predictions when you tested the model? I removed the dropout layer and changed the precision, but the accuracy has not improved. The baseline_model accuracy is around 99%, while the hls_model accuracy is only about 30%.
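One way to quantify a mismatch like this is to measure per-sample agreement between the Keras and hls4ml outputs instead of a single accuracy number. A small pure-Python sketch, where the score arrays are made-up placeholders for model.predict(X_test) and hls_model.predict(X_test):

```python
# Hypothetical per-sample class scores; in practice these would come from
# y_keras = model.predict(X_test) and y_hls = hls_model.predict(X_test).
y_keras = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]]
y_hls   = [[0.2, 0.8], [0.4, 0.6], [0.5, 0.5], [0.7, 0.3]]

def argmax(row):
    """Index of the highest score (first one wins on ties)."""
    return max(range(len(row)), key=row.__getitem__)

# Count how often both models pick the same class; a low ratio points at
# quantization or conversion problems rather than a bad original model.
agree = sum(argmax(a) == argmax(b) for a, b in zip(y_keras, y_hls))
print(f"top-1 agreement: {agree}/{len(y_keras)}")
```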
Could you push your testing code to your repository so I can have a look?
Hi. I updated the testing code.
Hi, could you please tell me how the problem was solved in the end?
How was it solved?
Hello Guys, I've been trying to convert my CNN model with HLS4ML, but I'm facing some issues. After converting the model from Keras, hls_model.predict() gives wrong predictions. I tried to trace the model, but it only returns the outputs of the last two layers. Do you guys have any idea about what I've been doing wrong? I really appreciate any help you can provide.
I've been using HLS4ML 0.5.1 and my config is this:
```python
hls4ml.model.optimizer.OutputRoundingSaturationMode.layers = ['Activation']
hls4ml.model.optimizer.OutputRoundingSaturationMode.rounding_mode = 'AP_RND'
hls4ml.model.optimizer.OutputRoundingSaturationMode.saturation_mode = 'AP_SAT'

config = hls4ml.utils.config_from_keras_model(model, granularity='name')

for layer in config['LayerName'].keys():
    config['LayerName'][layer]['Trace'] = True

config['LayerName']['Conv_1']['ReuseFactor'] = 1
config['LayerName']['Conv_2']['ReuseFactor'] = 1
config['LayerName']['Conv_3']['ReuseFactor'] = 2
config['LayerName']['Conv_4']['ReuseFactor'] = 4
config['LayerName']['Dense_1']['ReuseFactor'] = 20
config['LayerName']['Dense_2']['ReuseFactor'] = 10
config['LayerName']['output_softmax']['ReuseFactor'] = 1
config['LayerName']['output_softmax']['Strategy'] = 'Stable'

config['Model']['ReuseFactor'] = 64
config['Model']['Strategy'] = 'Resource'

print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")

cfg = hls4ml.converters.create_config()
cfg['IOType'] = 'io_stream'  # Must set this if using CNNs!
cfg['HLSConfig'] = config
cfg['KerasModel'] = model
cfg['OutputDir'] = 'model_1/hls4ml_prj'
cfg['XilinxPart'] = 'xczu9eg-ffvb1156-2-e'
cfg['Backend'] = 'VivadoAccelerator'
cfg['Board'] = 'zcu102'

hls_model = hls4ml.converters.keras_to_hls(cfg)
hls_model.compile()
```

My model is very simple and looks like this: