balintbarna closed this issue 3 years ago
What I found out about the caffe example is that the target names in arch.json changed in 1.3. I've tried different target names, but this version only compiles to .xmodel, which does not seem to be supported by DPU-PYNQ yet. I am not sure about the tensorflow models. Using v1.2 of the docker image seems to be the way to go: with it I could actually build both a caffe and a tf model, although the model paths needed to be modified in compile.sh.
Hi, may I know how you managed to get it to work on the v1.2 docker image? I'm assuming you ran ./docker_run.sh xilinx/vitis-ai-cpu:1.1.56 instead of ./docker_run.sh xilinx/vitis-ai-cpu:latest? I used this link to check for the image.
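For reference, pinning an explicit tag instead of :latest looks like the sketch below (1.1.56 is simply the tag mentioned above; whether it is the right one for DPU-PYNQ is exactly the open question here):

```shell
# Pin an explicit Vitis AI image tag instead of :latest so the
# toolchain version is reproducible across runs. Substitute whichever
# tag you actually need.
image="xilinx/vitis-ai-cpu:1.1.56"
echo "would run: ./docker_run.sh $image"
# ./docker_run.sh "$image"   # real invocation; needs docker and the script
```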
Subsequently I tried to run ./compile.sh Ultra96 cf_inceptionv1_imagenet_224_224_3.16G but had no luck compiling. :(
I am also getting .xmodel as output for the latest version. ><
@Jefferson111 I used the GPU image xilinx/vitis-ai:1.2.82, so I cannot speak for vitis-ai-cpu:1.1.56, but compile.sh required some modifications. There were two issues: one is the curly braces, the other is that each version of the models has a different folder structure. Check the branches here: https://github.com/balintmaci/DPU-PYNQ. There are separate branches for the v1.0 and v1.2 models.
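The curly-brace issue can be reproduced in isolation (the variable value below is just an illustrative model name from this thread): $(MODEL_NAME) is command substitution, so bash tries to execute a command called MODEL_NAME, which is where the "command not found" error comes from. Variable expansion uses braces instead.

```shell
# Minimal reproduction of the compile.sh curly-brace bug.
MODEL_NAME=cf_inceptionv1_imagenet_224_224_3.16G

# wrong: NAME=$(MODEL_NAME)   # runs MODEL_NAME as a command ->
#                             # "MODEL_NAME: command not found"
# right: parameter expansion with curly braces
NAME=${MODEL_NAME}
echo "$NAME"
```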
resolved by pull request #25
Hello! I'm trying to compare performance between an Ultra96-V2 board and a desktop GPU running the same network and model. To achieve that on the board, I am trying to compile an item from the Vitis AI model zoo to an *.elf file for the DPU.
I followed the guide in host/README.md up until the point where I am supposed to run compile.sh in the docker container:
The first error I got was:
./compile.sh: line 20: MODEL_NAME: command not found
Then I opened the file with vim and found that on lines 20 and 21, () was used instead of {}. I corrected that and the script progressed. The next issue was:
Error: currently only caffe and tensorflow are supported.
so I tried the suggested example instead, and then a tensorflow model from the zoo, but those attempts also ended in errors.
So I guess my question is: am I doing something wrong? Would I have different results with the GPU image, or is it unrelated? How could I generate the .elf files to use with the DPU?
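On the last question, a quick way to see which kind of artifact a given toolchain actually produced is to search the output directory, since a .elf is what this DPU-PYNQ version loads while Vitis AI 1.3+ emits .xmodel. The sketch below uses a placeholder file standing in for the real compiler output, whose path depends on the model:

```shell
# Create a stand-in output directory with a placeholder artifact
# (the real file name depends on the model being compiled).
outdir=$(mktemp -d)
touch "$outdir/dpu_inception_v1.elf"

# Look for either artifact format and classify the first match.
found=$(find "$outdir" \( -name '*.elf' -o -name '*.xmodel' \) | head -n 1)
case "$found" in
  *.elf)    verdict="elf: loadable by this DPU-PYNQ version" ;;
  *.xmodel) verdict="xmodel: not supported by DPU-PYNQ at this point" ;;
  *)        verdict="no DPU artifact found" ;;
esac
echo "$verdict"
rm -r "$outdir"
```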