mutagenspree opened 2 years ago
I have also run into a similar issue. When trying to run compiled models inside a Vitis-AI Docker container (version 2.0 or 2.5), execution gets stuck at the following line of code:
g = xir.Graph.deserialize(model)
(Python API)
auto graph = xir::Graph::deserialize(argv[1]);
(C/C++ API)
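For reference, a generic way to confirm that the call is actually hanging (rather than just loading slowly) is to run the deserialization step in a child process under a timeout. This is only a stdlib sketch; the model path, the 60-second budget, and the assumption that xir is importable are placeholders, not from this thread:

```python
import subprocess
import sys

def finishes_within(cmd, timeout_s):
    """Run cmd as a child process. Return True if it exits within
    timeout_s seconds, False if it is still running (the child is
    killed on timeout by subprocess.run)."""
    try:
        subprocess.run(cmd, timeout=timeout_s, check=False)
        return True
    except subprocess.TimeoutExpired:
        return False

# Hypothetical probe (assumes xir is importable and model.xmodel exists):
# hung = not finishes_within(
#     [sys.executable, "-c",
#      "import xir; xir.Graph.deserialize('model.xmodel')"],
#     timeout_s=60)
```

If the probe reports a hang on every model, the problem is more likely the environment (docker image / xclbin / AMI) than any single xmodel.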
It seems there is a problem deserializing the model's graph.
The xclbin needs an update to run 2.5 models on F1 instance. We are working on it and will release another AMI soon.
Any update on this issue?
It is still going through the release process. I will update you as soon as it is released.
@fanz-xlnx When will the new AMI be released? It has already been more than a month.
By the way, would you recommend purchasing our own FPGA machine? If so, is there a specific system you would recommend?
Sorry for the delay. The legal process takes longer after the AMD acquisition. FPGA selection depends entirely on your application and specific requirements. Please leave your information at https://www.xilinx.com/about/contact/contact-sales.html and our sales colleagues will help you out.
@fanz-xlnx Is there any progress on this issue?
I can successfully run the examples only with version 1.4.1. With version 2.5, I quantized and compiled the model tf_inceptionv1_imagenet_224_224_3G_2.5, built the inception_example binary, and ran:
./inception_example ./out/tf_inception_v1_compiled.xmodel
Execution gets stuck after this log line:
I0729 09:04:18.042336 1411 main.cc:293] create running for subgraph: subgraph_InceptionV1/InceptionV1/Conv2d_1a_7x7/Conv2D
I had activated the vitis-ai-tensorflow environment and run the setup.sh script beforehand. This happens with all the examples. Does Vitis-AI v2.5 work on AWS F1 instances?