mako443 opened 4 months ago
Thank you for reporting this. I can confirm that I see the same behavior, although it does eventually finish for me after about 10 minutes. I do not see this behavior when profiling via AI Hub, and I do not yet know what the difference is. I thought it might be the HTP graph optimization type, but even setting that to the slowest option on AI Hub is still fast.
I'll keep digging.
I have forwarded this to the appropriate internal team at Qualcomm. I will let you know when I hear something.
@gustavla Thank you for following up on this - let me know if you get any feedback or if I can offer any help!
Describe the bug
I have a model containing only layers like Conv2d, LeakyReLU, Sigmoid, and element-wise multiplications. The model converts successfully; here is an example job: https://app.aihub.qualcomm.com/jobs/jegn9nk5o/ However, running the model on device (Vivo X90 Pro+ with the 8550 chipset) through qnn-net-run gets stuck. If I instead run the model through qnn-pytorch-converter, it converts successfully and I can run it on device.
TorchScript can be downloaded here: https://drive.google.com/file/d/19Y2H8TyqEsJ_QbNISDBA98b_6RCYa38k/view?usp=share_link
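For anyone trying to reproduce without downloading the TorchScript, a hypothetical minimal model with the same layer types (Conv2d, LeakyReLU, Sigmoid, element-wise multiplication) might look like the sketch below. This is not the reporter's actual architecture, just an illustration of the described op mix:

```python
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Toy model using only the op types named in the report:
    Conv2d, LeakyReLU, Sigmoid, and element-wise multiplication.
    This is a hypothetical stand-in, not the model from the linked job."""

    def __init__(self, channels=8):
        super().__init__()
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.gate = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # Element-wise multiply of an activated branch and a sigmoid gate
        return self.act(self.conv(x)) * torch.sigmoid(self.gate(x))

model = GatedConv().eval()
x = torch.rand(1, 1, 256, 256)  # matches the 256x256x1 input below
with torch.no_grad():
    y = model(x)
```

A model like this could be traced with torch.jit.trace and submitted to AI Hub or qnn-pytorch-converter to compare the two paths.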
To Reproduce
np.random.rand(256,256,1).astype(np.float32).tofile("input.raw")
and an input list with echo input.raw > input.txt
qnn-net-run --backend libQnnHtp.so --model job_jegn9nk5o_optimized_so_m7n1evpm5.so --input_list input.txt
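The input generation step above can also be written as a self-contained script that sanity-checks the resulting file size (256 * 256 * 1 float32 values at 4 bytes each should give 262144 bytes):

```python
import os
import numpy as np

# Generate the random float32 input tensor in the 256x256x1 layout
# used in the repro steps above and dump it as a raw binary file.
np.random.rand(256, 256, 1).astype(np.float32).tofile("input.raw")

# Sanity-check the file size: 256 * 256 * 1 values * 4 bytes each
size = os.path.getsize("input.raw")
print(size)  # 262144
```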
Expected behavior
The qnn-net-run command should exit after a few seconds, generating an output/Result_0 folder.
Stack trace
Stack trace captured after cancelling the run after a minute:
Host configuration: