What's your machine?
The model doesn't have weights and can't be run. Could you upload a correct one?
@jxt1234 I'm using Xiaomi mi9 for my deployment.
Here's another model (with weights); the architecture is a bit different, but the error/problem is exactly the same: link
@jxt1234 any update?
I've also noticed that `MNN_GPU_MEMORY_BUFFER` gives wrong results on other devices as well. For example, on an RK3399 (Mali GPU), `MNN_GPU_MEMORY_BUFFER` is picked by default and the model output is always wrong, not only with the model I've already shared but also with every other model I've tested. This happens only with `Precision_Low`; if I set `Precision_High`, the results are correct, but inference is very slow.

I hope this information helps in finding the issue. Feel free to let me know if I can help in any other way.
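In case it helps reproduce, here is a minimal sketch of how these options can be set via MNN's `ScheduleConfig`/`BackendConfig` (the model path is a placeholder and this is simplified, not my exact code; field names follow MNN's public headers and may differ slightly between versions):

```cpp
// Sketch: select the OpenCL backend, buffer memory mode, and precision in MNN.
// "test.mnn" is a placeholder path.
#include <memory>
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>

int main() {
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("test.mnn"));  // placeholder model path

    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_OPENCL;
    // Force buffer memory (instead of image) plus normal kernel tuning.
    config.mode = MNN_GPU_MEMORY_BUFFER | MNN_GPU_TUNING_NORMAL;

    MNN::BackendConfig backendConfig;
    // Precision_High avoids the wrong results described above, but is slow;
    // Precision_Low is the fast path that produces the incorrect outputs.
    backendConfig.precision = MNN::BackendConfig::Precision_High;
    config.backendConfig = &backendConfig;

    auto session = net->createSession(config);
    net->runSession(session);
    return 0;
}
```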
Sorry, I can't open your model link. Could you send it to my email?
@jxt1234 the file is too big for email. I've uploaded it to my GitHub: https://github.com/gasgallo/model-zoo/blob/master/MNN/test.mnn. You should be able to download it from there.
Has this issue been resolved? @wangzhaode
I'm using MNN 1.2.2 to run my model on Android. When using the OpenCL backend, I get some errors from OpenCL and the result from the model is incorrect. When using the CPU backend, everything works fine. I get errors like:

Sample output from CPU backend (correct):

And OpenCL backend (wrong):

If I use `MNN_GPU_MEMORY_BUFFER`, there are no OpenCL errors, but the output is all `NaN`s:

This is the model that you can test to reproduce the problem. The same problem happens with other similar models as well.
Thanks for the help!
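In case it's useful for reproducing, here is a minimal sketch of how the CPU and OpenCL outputs can be compared on the same input (it assumes a single-input/single-output float model; the model path and the constant input data are placeholders, not my actual preprocessing):

```cpp
// Sketch: run the same model on CPU and OpenCL backends and compare outputs.
// "test.mnn" and the constant input value are placeholders for illustration.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <memory>
#include <vector>
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>

// Run one forward pass on the given backend and return the first output's values.
static std::vector<float> runOnce(MNN::Interpreter* net, MNNForwardType type) {
    MNN::ScheduleConfig config;
    config.type = type;
    auto session = net->createSession(config);

    // Fill the (single) input with a constant value -- placeholder data.
    auto input = net->getSessionInput(session, nullptr);
    MNN::Tensor inputHost(input, input->getDimensionType());
    for (int i = 0; i < inputHost.elementSize(); ++i) {
        inputHost.host<float>()[i] = 0.5f;
    }
    input->copyFromHostTensor(&inputHost);

    net->runSession(session);

    // Copy the (single) output back to host memory.
    auto output = net->getSessionOutput(session, nullptr);
    MNN::Tensor outputHost(output, output->getDimensionType());
    output->copyToHostTensor(&outputHost);

    std::vector<float> result(outputHost.host<float>(),
                              outputHost.host<float>() + outputHost.elementSize());
    net->releaseSession(session);
    return result;
}

int main() {
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("test.mnn"));  // placeholder path

    auto cpuOut = runOnce(net.get(), MNN_FORWARD_CPU);
    auto clOut  = runOnce(net.get(), MNN_FORWARD_OPENCL);

    // Report NaNs in the OpenCL output and the largest difference vs. CPU.
    float maxDiff = 0.f;
    int nanCount  = 0;
    for (size_t i = 0; i < cpuOut.size() && i < clOut.size(); ++i) {
        if (std::isnan(clOut[i])) ++nanCount;
        maxDiff = std::max(maxDiff, std::fabs(cpuOut[i] - clOut[i]));
    }
    std::printf("NaNs in OpenCL output: %d, max |CPU - OpenCL|: %f\n",
                nanCount, maxDiff);
    return 0;
}
```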