fastmachinelearning / hls4ml

Machine learning on FPGAs using HLS
https://fastmachinelearning.org/hls4ml
Apache License 2.0

FIFO resource consumption #524

Open ekellim opened 2 years ago

ekellim commented 2 years ago

Hi,

I've been working with hls4ml to synthesize a model for the ZCU104. After quantization-aware training with QKeras (2 bits), I convert the model to hls4ml and run synthesis. However, when I check the resource consumption in Vivado HLS, I notice that the FIFO resource consumption is huge, especially for BRAM (see image).
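For reference, my conversion step looks roughly like this (paths and the FPGA part string are just placeholders; the keyword may be `fpga_part` instead of `part` depending on the hls4ml version):

```python
import hls4ml

# 'model' is the QKeras model after 2-bit quantization-aware training
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_stream',
    output_dir='hls4ml_prj',          # placeholder output directory
    part='xczu7ev-ffvc1156-2-e',      # ZCU104 MPSoC part, to my knowledge
)

# C synthesis only; the Vivado HLS report is where the FIFO BRAM usage shows up
hls_model.build(csim=False, synth=True)
hls4ml.report.read_vivado_report('hls4ml_prj')
```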

I saw that this topic is also addressed in #509. First of all, thanks for the effort, but following the example doesn't seem to work for me. Any advice on how to solve this? Should I try a specific branch or commit? I'm happy to provide more information if necessary.

[Image: Vivado HLS resource utilization report showing high FIFO BRAM usage]

thesps commented 2 years ago

It looks like this does indeed touch the same issue that #509 addresses. There is still some work needed before that PR gets merged; it doesn't fit perfectly into the hls4ml flow right now and will take a little longer to be ready.
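The rough idea is that you would opt into the FIFO depth optimization from the conversion config. A sketch of how that might look once merged (the flow name and config key below are illustrative, not final):

```python
import hls4ml

# Sketch only: assumes the work in #509 ends up exposed as an optimizer
# flow named 'vivado:fifo_depth_optimization', selected via the config.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Flows'] = ['vivado:fifo_depth_optimization']

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_stream',   # FIFO sizing is only relevant for streaming IO
    output_dir='hls4ml_prj_fifo_opt',
)
hls_model.build(csim=False, synth=True)
```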

> I've been working with hls4ml to synthesize a model for the ZCU104

BTW, on this, did you use the zcu104 branch of hls4ml, or something else?

ekellim commented 2 years ago

Ok, thanks for the answer. I used the latest official release (0.6.0). I wasn't aware of this branch and will definitely try it. What's it called?

thesps commented 2 years ago

This is the branch: https://github.com/thesps/hls4ml/tree/zcu104. I haven't made a PR yet since I don't have a ZCU104 to really validate it, but it's close enough to the ZCU102 that I think it works.
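If you want to try it, installing from that branch and pointing the VivadoAccelerator backend at the board should look roughly like this (the `'zcu104'` board name assumes the branch registers that board definition, and it's untested on real hardware as noted above):

```python
# pip install git+https://github.com/thesps/hls4ml.git@zcu104
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type='io_stream',
    backend='VivadoAccelerator',
    board='zcu104',                  # assumes the branch adds this board definition
    output_dir='hls4ml_zcu104_prj',  # placeholder output directory
)

# synth runs HLS; export packages the IP for the accelerator flow
hls_model.build(csim=False, synth=True, export=True)
```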