fastmachinelearning / hls4ml-tutorial

Tutorial notebooks for hls4ml
http://fastmachinelearning.org/hls4ml-tutorial/

'xcku115-flvb2104-2-i' #56

Open bubenik98 opened 1 year ago

bubenik98 commented 1 year ago

Hello! I am not an expert, but I am trying to run a notebook inside the Docker container, and I have two issues.

1. When I run the `build` function, I get this output:

```
ERROR: [HLS 200-70] Part 'xcku115-flvb2104-2-i' is not installed.
command 'ap_source' returned error code
    while executing
"source build_prj.tcl"
    ("uplevel" body line 1)
    invoked from within
"uplevel #0 [list source $arg] "
```

This is my code. First the imports:

```python
import os

import hls4ml
from hls4ml.converters import convert_from_keras_model
import plotting
```

Then the QKeras model:

```python
hls4ml.model.optimizer.OutputRoundingSaturationMode.layers = ['Activation']
hls4ml.model.optimizer.OutputRoundingSaturationMode.rounding_mode = 'AP_RND'
hls4ml.model.optimizer.OutputRoundingSaturationMode.saturation_mode = 'AP_SAT'

reuse_model = 256

q_hls_config = hls4ml.utils.config_from_keras_model(qmodel_pruned, granularity='name')
q_hls_config['Model']['ReuseFactor'] = reuse_model
q_hls_config['Model']['Precision'] = 'ap_fixed<16,6>'
q_hls_config['Model']['Strategy'] = 'Resource'

q_hls_config['LayerName']['output_softmax']['Strategy'] = 'Resource'
q_hls_config['LayerName']['Input_layer']['Precision'] = 'ap_fixed<16,16>'

for layer in qmodel_pruned.layers:
    if ('CONV' in layer.name.upper()) or ('DENSE' in layer.name.upper()):
        q_hls_config['LayerName'][layer.name]['ReuseFactor'] = reuse_model
    if 'POOL' in layer.name.upper():
        pass
        # q_hls_config['LayerName'][layer.name]['Precision'] = 'ap_fixed<32,16>'

q_hls_config['LayerName']['output_dense']['ReuseFactor'] = reuse_model
q_hls_config['LayerName']['output_softmax']['ReuseFactor'] = reuse_model
q_hls_config['LayerName']['output_dense_linear']['ReuseFactor'] = reuse_model

q_cfg = hls4ml.converters.create_config(backend='Vivado')
q_cfg['IOType'] = 'io_stream'  # Must set this if using CNNs!
q_cfg['HLSConfig'] = q_hls_config
q_cfg['KerasModel'] = qmodel_pruned
q_cfg['OutputDir'] = 'q_cnn_pruned/'
q_cfg['XilinxPart'] = 'xczu7ev-ffvc1156-2-e'

q_hls_model = hls4ml.converters.keras_to_hls(q_cfg)
q_hls_model.compile()

q_hls_model_test = convert_from_keras_model(
    qmodel_pruned,
    hls_config=q_cfg,
    output_dir='model_final/hls4ml_model',
    part='xcu250-figd2104-2L-e',
)
q_hls_model_test.compile()

os.environ['PATH'] = os.environ['XILINX_VIVADO'] + '/bin:' + os.environ['PATH']
q_hls_model.build(csim=False, synth=True, vsynth=True)
```

2. I saw that in tutorial part 6 you use the `keras_to_hls` function, while all of the other parts use `convert_from_keras_model`. Why? When I tried to convert my CNN model with the second one, hoping it would magically fix the problem, the compilation runs forever. Could anyone help me, please?
isledge commented 1 year ago

If you do a grep, `grep -Ril "xcku115-flvb2104-2-i" ./`, in your top-level working directory, you'll find that two relevant files produced during compilation in your project directory contain this part number: `hls4ml_config.yml` and `project.tcl`.

The easiest way to fix the issue you're having is to modify those two files after you have compiled and before you build your model. Open `hls4ml_config.yml` and change the line `Part: xcku115-flvb2104-2-i` to `Part: xcu250-figd2104-2L-e`. Likewise, open `project.tcl` and change the line `set part "xcku115-flvb2104-2-i"` to `set part "xcu250-figd2104-2L-e"`. Save both files. You should then be able to build your model.
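If you'd rather not edit the files by hand every time, a small script can make the same substitution for you. This is only a minimal sketch: the `swap_part` helper is something I'm making up here, and it assumes the project output directory is `q_cnn_pruned/` as in your configuration above, so adjust the paths and part numbers to your setup.

```python
from pathlib import Path

def swap_part(project_dir,
              old_part='xcku115-flvb2104-2-i',
              new_part='xcu250-figd2104-2L-e'):
    """Hypothetical helper: rewrite the target part in the generated project files."""
    # Run this after hls4ml has written the project, but before calling build().
    for name in ('hls4ml_config.yml', 'project.tcl'):
        path = Path(project_dir) / name
        text = path.read_text()
        path.write_text(text.replace(old_part, new_part))

swap_part('q_cnn_pruned/')
```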

Most likely, what happened is that someone set a default target device inside `hls4ml.converters.keras_to_hls()` and forgot to allow it to be overridden. Alternatively, they may have forced the function to ignore alternate target devices so that you're unlikely to run out of LUTs, DSPs, etc. when synthesizing convolutional networks, which are far more resource intensive than the feed-forward networks considered in the other tutorials.
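For what it's worth, the other tutorials sidestep the default by passing the part directly to `convert_from_keras_model`. Here is a rough sketch of that route, reusing the `qmodel_pruned` and `q_hls_config` names from your code above; I believe `hls_config` expects the dictionary returned by `config_from_keras_model`, not the full converter config you built with `create_config`.

```python
import hls4ml

# Passing `part` here overrides whatever default device the converter
# would otherwise fall back to; io_stream matches your CNN configuration.
hls_model = hls4ml.converters.convert_from_keras_model(
    qmodel_pruned,
    hls_config=q_hls_config,
    io_type='io_stream',
    output_dir='model_final/hls4ml_model',
    part='xcu250-figd2104-2L-e',
)
hls_model.compile()
```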

As an aside, you don't necessarily have to use the xcu250-figd2104-2L-e FPGA. If you run Vivado in Tcl mode from the terminal, `vivado -mode tcl`, and then execute `get_parts`, you'll see all of the FPGAs available to you. I only chose this particular part because it is used in the other tutorials.

If you want to avoid this issue in the future, without having to modify the project files after every compilation, the best solution is to install the additional devices from the command line:

https://support.xilinx.com/s/article/60112?language=en_US

sachinkum0009 commented 1 year ago

@isledge thanks for the answer

> Open `hls4ml_config.yml` and change the line `Part: xcku115-flvb2104-2-i` to `Part: xcu250-figd2104-2L-e`. Likewise, open `project.tcl` and change the line `set part "xcku115-flvb2104-2-i"` to `set part "xcu250-figd2104-2L-e"`. Save both files. You should then be able to build your model.

It solves the problem. :heart: