If you generate pipeline_zynq.c with compile_to_zynq_c(), you will see that it is a series of runtime API calls, which are defined in apps/hls_examples/hls_support/HalideRuntimeZynq.cpp. These calls use a Linux device driver to allocate buffers and activate the DMA; the DMA converts your image in memory into an AXI-Stream and pushes it to your accelerator. See https://github.com/stevenbell/zynqbuilder for the device driver and Vivado design build, or https://github.com/kevinkim06/zynqbuilder, which I updated a little, including https://github.com/kevinkim06/zynqbuilder/blob/master/vivado_project/mkproject_v2017_2_hwacc_only.tcl.
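For orientation, the generated file follows roughly the flow sketched below. This is a paraphrase from memory, not the real code: the halide_zynq_* names, signatures, and device node names are assumptions, so treat your generated pipeline_zynq.c and HalideRuntimeZynq.h as authoritative.

```c
/* Rough sketch of the flow in a generated pipeline_zynq.c.
 * Function names and signatures approximate the halide_zynq_* runtime API;
 * the generated file and HalideRuntimeZynq.h are authoritative. */
#include "HalideRuntimeZynq.h"

int run(struct buffer_t *input, struct buffer_t *output) {
    /* Open the device nodes provided by the zynqbuilder driver
     * (e.g. a CMA buffer device and an accelerator device; the exact
     * node names depend on your driver setup). */
    halide_zynq_init();

    /* Allocate physically contiguous (CMA) buffers so the DMA engine can
     * read and write them, then copy the input image into input->host. */
    halide_zynq_cma_alloc(input);
    halide_zynq_cma_alloc(output);

    /* Kick off the run: one DMA channel reads the image from memory,
     * converts it to an AXI-Stream, and pushes it into the accelerator;
     * another channel writes the accelerator's output stream back to
     * memory. (Launch/sync call names here are my guess.) */
    /* halide_zynq_hwacc_launch(...); */
    /* halide_zynq_hwacc_sync(...);   */

    halide_zynq_cma_free(input);
    halide_zynq_cma_free(output);
    return 0;
}
```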
I've managed to get HLS synthesis working and integrated into a project; however, I'm not quite sure how to format my images so that the accelerator can actually run on them.
In the run.cpp code the BufferMinimal type is used, but I don't believe I can simply serialize that over to the FPGA, since hls_target expects an hls::stream<AXIPackedStencil<T...>> as an argument.
Do you have any examples of apps that format data for use by the HLS block? Or can you recommend how to use the Halide libraries to convert my image into a stream of AXIPackedStencils?
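For concreteness, here is the kind of conversion I imagine is needed (for C simulation at least). This is only a sketch: I've assumed an 8-bit single-channel image, a 1x1x1 stencil, and guessed at the AXIPackedStencil accessors; the real template parameters and methods would come from hls_support/Stencil.h and my generated hls_target.h.

```cpp
#include <cstdint>
#include <hls_stream.h>
#include "Stencil.h"  // from apps/hls_examples/hls_support

// Sketch: feed an 8-bit single-channel image into the accelerator's stream
// argument, one 1x1x1 stencil per pixel. The element accessor and the
// last-flag member below are guesses at the Stencil.h interface.
void image_to_stream(const uint8_t *img, int width, int height,
                     hls::stream<AXIPackedStencil<uint8_t, 1, 1, 1>> &s) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            AXIPackedStencil<uint8_t, 1, 1, 1> stencil;
            stencil(0, 0, 0) = img[y * width + x];  // guessed accessor
            // TLAST must be asserted on the final word so the DMA knows
            // where the frame ends (field name is a guess).
            stencil.last = (x == width - 1 && y == height - 1);
            s.write(stencil);
        }
    }
}
```

Is something along these lines what the generated testbenches do, or is there an existing helper I should be using instead?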
N.B. I'm attempting this with the Gaussian example.
Thanks.