The tarball used to install the xclbins should create subdirectories named DPUCAHX8H and similar.
What tar.gz file did you use?
@bryanloz-xilinx I simply followed the instructions here (after starting the Docker container): https://github.com/Xilinx/Vitis-AI/tree/v1.4.1/setup/alveo#dpu-ip-selection
cd /workspace/setup/alveo
source setup.sh DPUCAHX8H
That's all. No download needed.
Hi @nhphuong91 ,
The /opt/xilinx/overlaybins directory inside the Vitis AI CPU/GPU Docker is mapped from the host's directory:
Vitis-AI /workspace > tree -L 2 /opt/xilinx/overlaybins/
/opt/xilinx/overlaybins/
├── DPUCAHX8H
│ ├── dpu_DPUCAHX8H_10PE275_xilinx_u50lv_gen3x4_xdma_base_2.xclbin
│ ├── dwc
│ ├── md5sum_DPUCAHX8H.txt
│ └── waa
└── DPUCVDX8H
├── 6pedwc
└── 8pe
6 directories, 2 files
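As a quick sanity check, you can confirm that the variables setup.sh exports actually point at a file that exists. This is only a generic check; the path shown in the listing above is for a U50LV, so substitute the xclbin for your own card:
# Verify that setup.sh pointed the runtime at a real xclbin
echo $XCLBIN_PATH
ls -l "$XLNX_VART_FIRMWARE"   # should list an .xclbin under /opt/xilinx/overlaybins/DPUCAHX8H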
Perhaps you are using an older version of the Alveo setup.sh script?
The v2.0 release script works fine:
(vitis-ai-pytorch) Vitis-AI /workspace/setup/alveo > source setup.sh DPUCAHX8H
------------------
VAI_HOME = /vitis_ai_home
------------------
XILINX_XRT : /opt/xilinx/xrt
PATH : /opt/xilinx/xrt/bin:/opt/xilinx/xrm/bin:/opt/xilinx/xrt/bin:/opt/vitis_ai/conda/envs/vitis-ai-pytorch/bin:/opt/vitis_ai/conda/bin:/opt/vitis_ai/conda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LD_LIBRARY_PATH : /opt/xilinx/xrt/lib:/opt/xilinx/xrm/lib:/opt/xilinx/xrt/lib:/opt/xilinx/xrt/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib
PYTHONPATH : /opt/xilinx/xrt/python:/opt/xilinx/xrt/python:
---------------------
XILINX_XRT = /opt/xilinx/xrt
---------------------
XILINX_XRM : /opt/xilinx/xrm
PATH : /opt/xilinx/xrm/bin:/opt/xilinx/xrt/bin:/opt/xilinx/xrm/bin:/opt/xilinx/xrt/bin:/opt/vitis_ai/conda/envs/vitis-ai-pytorch/bin:/opt/vitis_ai/conda/bin:/opt/vitis_ai/conda/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LD_LIBRARY_PATH : /opt/xilinx/xrm/lib:/opt/xilinx/xrt/lib:/opt/xilinx/xrm/lib:/opt/xilinx/xrt/lib:/opt/xilinx/xrt/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib
---------------------
XILINX_XRM = /opt/xilinx/xrm
---------------------
---------------------
LD_LIBRARY_PATH = /opt/xilinx/xrm/lib:/opt/xilinx/xrt/lib:/opt/xilinx/xrm/lib:/opt/xilinx/xrt/lib:/opt/xilinx/xrt/lib:/usr/lib:/usr/lib/x86_64-linux-gnu:/usr/local/lib:/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib
---------------------
[0000:3b:00.1] : xilinx_u50lv_gen3x4_xdma_base_2 user(inst=128)
u50lv_ card detected
---------------------
XCLBIN_PATH = /opt/xilinx/overlaybins/DPUCAHX8H
XLNX_VART_FIRMWARE = /opt/xilinx/overlaybins/DPUCAHX8H/dpu_DPUCAHX8H_10PE275_xilinx_u50lv_gen3x4_xdma_base_2.xclbin
---------------------
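If no card shows up at the detection step above, one way to double-check which platform is flashed on the host is XRT's xbutil tool. The exact subcommand depends on the XRT version installed, so the lines below are only a hint:
# Newer XRT releases:
xbutil examine
# Older XRT releases:
xbutil scan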
@hanxue, I'm using an Alveo U50 now, but it seems VAI 2.0 doesn't support it anymore. I cannot find any supported DPU for the U50, only the U50LV.
Yes, the U50LV is supported; the U50 is not supported in Vitis AI 2.0.
Is there any plan to support the U50 in a future version of VAI, or are you planning to abandon it?
Sorry, I don't have an answer on U50 support. I will ask colleagues to comment on the VAI product roadmap and platform support.
Hi @nhphuong91 ,
If you are running Vitis AI 2.0 on a U50 card, then you will need to generate the .xclbin file yourself from the .xo file. Therefore, it is recommended that you use Vitis AI 1.4.1 on the U50 card.
@wanghy-xlnx I was able to use VAI 1.4.1 on the U50, and the setup script has an issue for which I suggested a temporary solution; both were mentioned in this ticket. I just want to know whether you plan to support the U50 in future versions of VAI, or whether you are planning to drop support for the U50 in favor of other products such as the U50LV.
Hi @nhphuong91 ,
We will continue to support the U50, as well as the U280 and other cards. However, starting with Vitis AI 2.0, the support method has changed from pre-compiled xclbins to published XO files, which allows more flexible use and integration: users can customize the DPU's resources and integrate it with other IP.
In Vitis AI 2.0, the U50LV, U55C, etc. ship with pre-compiled DPUs and models, which helps users get started quickly and see execution results. However, this configuration is fixed: if you want to customize the DPU resources or integrate with other IP, you still need to use the XO file to regenerate the DPU's xclbin and the corresponding models.
PS: The XO release URL is https://github.com/Xilinx/Vitis-AI/tree/master/dsa/DPUCAHX8H-XO
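For orientation only: generating the xclbin from a released XO means linking it against a platform with the Vitis v++ linker. The sketch below shows just the general shape of that step; the platform, config file, and XO names are placeholders, and the real DPUCAHX8H build flow (connectivity, clocks, etc.) is documented in the XO repository linked above:
# Rough sketch of the v++ linking step; all file names are placeholders
v++ --link --target hw \
    --platform /path/to/target_platform.xpfm \
    --config dpu_connectivity.cfg \
    DPUCAHX8H.xo \
    -o dpu.xclbin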
Hi, I found a bug with setup/alveo/setup.sh. When following the instructions to set up the environment (for an Alveo U50) after entering the Docker container, I noticed that the last two environment variables are wrong: the structure of /opt/xilinx/overlaybins/ has changed. Can anyone check and fix it? Thanks!
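For readers hitting the same issue, one possible workaround (a sketch only, not necessarily the fix the reporter used) is to point the two variables at the new layout by hand. The values below are taken from the U50LV listing earlier in this thread; substitute the directory and xclbin that actually exist for your card:
# Manual workaround: export the last two variables to match the new directory layout
export XCLBIN_PATH=/opt/xilinx/overlaybins/DPUCAHX8H
export XLNX_VART_FIRMWARE=/opt/xilinx/overlaybins/DPUCAHX8H/dpu_DPUCAHX8H_10PE275_xilinx_u50lv_gen3x4_xdma_base_2.xclbin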