jjsuperpower opened 1 year ago
Hi @jjsuperpower,
I gave it a shot on my side, modified the prj_config in the boards/zcu104 folder as follows:
[clock]
freqHz=300000000:DPUCZDX8G_1.aclk
freqHz=600000000:DPUCZDX8G_1.ap_clk_2
#freqHz=300000000:DPUCZDX8G_2.aclk
#freqHz=600000000:DPUCZDX8G_2.ap_clk_2
[connectivity]
sp=DPUCZDX8G_1.M_AXI_GP0:HPC0
sp=DPUCZDX8G_1.M_AXI_HP0:HP0
sp=DPUCZDX8G_1.M_AXI_HP2:HP1
#sp=DPUCZDX8G_2.M_AXI_GP0:HPC0
#sp=DPUCZDX8G_2.M_AXI_HP0:HP2
#sp=DPUCZDX8G_2.M_AXI_HP2:HP3
nk=DPUCZDX8G:1
[advanced]
misc=:solution_name=link
#param=compiler.addOutputTypes=sd_card
#param=compiler.skipTimingCheckAndFrequencyScaling=1
[vivado]
prop=run.impl_1.strategy=Performance_Explore
#param=place.runPartPlacer=0
Which looks very much like your config. Replacing the dpu.bit, .hwh, and .xclbin files in /usr/local/share/pynq-venv/lib/python3.10/site-packages/pynq_dpu with the new single-core versions, I was able to run the mnist example notebook without issue.
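The file swap described above can be sketched in Python. The directories below are temp stand-ins so the sketch runs anywhere; on the board, build_dir would be your DPU-PYNQ build output and pkg_dir the pynq_dpu package directory quoted above.

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in directories so this sketch is runnable; on the board, pkg_dir would be
# /usr/local/share/pynq-venv/lib/python3.10/site-packages/pynq_dpu.
build_dir = Path(tempfile.mkdtemp(prefix="build_"))
pkg_dir = Path(tempfile.mkdtemp(prefix="pynq_dpu_"))

overlay_files = ("dpu.bit", "dpu.hwh", "dpu.xclbin")

# Fake build artifacts standing in for the freshly built single-core overlay.
for name in overlay_files:
    (build_dir / name).write_bytes(b"single-core " + name.encode())

# Replace the packaged two-core overlay files with the single-core versions.
for name in overlay_files:
    shutil.copy(build_dir / name, pkg_dir / name)
```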
I'm using Vitis 2022.1, however. Maybe switching Vitis versions is an option? I believe 2022.1 is better supported for Vitis AI 2.5.
Thank you for looking into it. I will go ahead and try out your suggestion and see if downgrading Vitis makes a difference.
I ran the DPU build with Vitis 2022.1 and unfortunately I still have the same error as before. Do you have any other ideas why I am having this error?
I'm not sure if this is helpful, but here is the dpu.xclbin.info file that was generated from the build. Also, here is the block design generated by Vitis (located in DPU-PYNQ/boards/zcu104/binary_container_1/link/vivado/vpl/prj).
Hi @jjsuperpower, did you find a solution? If so, please feel free to share what it was and/or close out this issue. Thanks.
Yes, I was able to solve the problem. Apologies for forgetting to reply. The issue I was having was that my .xclbin file was not being copied to /usr/lib/dpu.xclbin.
Here is why I missed this small but very important detail. The download method of DPUOverlay calls the copy_xclbin method to copy a custom_overlay.xclbin to /usr/lib/dpu.xclbin. My assumption was that the download method is meant to be called when swapping out overlays (or reloading an overlay), but is not needed when loading a new overlay. So what I was doing wrong was initializing the DPUOverlay class and then calling the load_model method directly. This meant the .xclbin file was never copied, and the VART/XRT drivers were not able to communicate properly with the hardware, causing the kernel to crash.
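The pitfall can be illustrated with a toy stand-in. The class below is a hypothetical mock, not the real pynq_dpu implementation; it only imitates the behaviour described above, where the xclbin copy happens in download rather than in __init__, using temp files in place of /usr/lib/dpu.xclbin.

```python
import shutil
import tempfile
from pathlib import Path

# Temp stand-ins for the real filesystem locations.
root = Path(tempfile.mkdtemp())
custom_xclbin = root / "custom_overlay.xclbin"
system_xclbin = root / "dpu.xclbin"  # stands in for /usr/lib/dpu.xclbin
custom_xclbin.write_bytes(b"new single-core firmware")
system_xclbin.write_bytes(b"stale two-core firmware")

class ToyDPUOverlay:
    """Hypothetical mock mirroring the behaviour described in this thread."""

    def __init__(self, xclbin: Path):
        self.xclbin = xclbin
        # Note: construction does NOT copy the xclbin into place.

    def copy_xclbin(self):
        shutil.copy(self.xclbin, system_xclbin)

    def download(self):
        # Only the download path puts the firmware where VART/XRT look for it.
        self.copy_xclbin()

overlay = ToyDPUOverlay(custom_xclbin)
# Jumping straight to load_model here (without download) would leave the
# system firmware stale -- the failure mode described above:
assert system_xclbin.read_bytes() == b"stale two-core firmware"

overlay.download()  # now the new firmware is actually in place
assert system_xclbin.read_bytes() == b"new single-core firmware"
```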
Is there a reason the __init__ method of DPUOverlay does not call copy_xclbin? Would it cause problems to add it? To me, the current implementation seems a bit unintuitive, as the __init__ method loads the bitstream but does not copy the .xclbin file. At the very least, could documentation be added to prevent someone else from running into the same issue?
Thanks, jj
@jjsuperpower, thanks a bunch, that's really useful to know. I made a PR that should address this issue in the future: https://github.com/Xilinx/DPU-PYNQ/pull/111
Long story short - I believe your xclbin was getting downloaded to /usr/lib, but it wasn't replacing dpu.xclbin; it was saved as /usr/lib/custom_overlay.xclbin instead. The way VART knows where to look for this firmware is via the /etc/vart.conf file, which is hardcoded to dpu.xclbin (or whatever default the Vitis AI petalinux flow leaves it at). The change in the PR will overwrite that config file as part of the xclbin download.
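Conceptually, the fix amounts to pointing the VART config at the xclbin that was just downloaded. The sketch below uses temp stand-ins for /etc/vart.conf and /usr/lib/dpu.xclbin, and the "firmware: <path>" line format is assumed from typical vart.conf files left by the Vitis AI flow.

```python
import tempfile
from pathlib import Path

# Temp stand-ins; on the board these would be /etc/vart.conf and /usr/lib/dpu.xclbin.
root = Path(tempfile.mkdtemp())
vart_conf = root / "vart.conf"
xclbin_path = root / "dpu.xclbin"
xclbin_path.write_bytes(b"firmware blob")

# The petalinux default may point somewhere else entirely.
vart_conf.write_text("firmware: /run/media/mmcblk0p1/dpu.xclbin\n")

# Overwrite the config so VART loads the firmware that was just downloaded.
# (Assumed "firmware: <path>" format -- check your board's /etc/vart.conf.)
vart_conf.write_text(f"firmware: {xclbin_path}\n")
```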
I am having an issue modifying the ZCU104 example to have only one DPU core instead of two. I am using Vivado and Vitis 2022.2, with XRT build version 2.12.0. For testing, I have been using the code from the dpu_mnist_classifier.ipynb notebook along with the provided xmodel. I am able to successfully compile and run the DPU build example; however, when I modify the prj_config file to reduce the DPU cores from two to one, I get an error at runtime. I have been struggling with this error for a few weeks, so any help or suggestions would be much appreciated.
Modified prj_config
Runtime Error