[Open] zhouwg opened this issue 3 months ago
Hello, if you provide more information, I can help. A screenshot of the error would be even better, especially a copy of the CLI output, so I can identify the type of error. If it generates an image but not the one you expected, please attach both the result and the expected image.
Thanks for your quick and warm comment.
The issue is that the SD process crashes on Xiaomi 14: https://github.com/zhouwg/kantv/blob/master/core/ggml/jni/ggml-jni-impl.cpp#L905
This personal AI study project is a turn-key Android project, so you can reproduce the issue very easily on a Xiaomi 14 or any other mainstream Android phone (modify this line accordingly: https://github.com/zhouwg/kantv/blob/master/core/ggml/CMakeLists.txt#L16).
Reviewing your comment again, I suggest you not use the latest version of ggml (there is no significant improvement in any aspect anyway), as it introduces many changes that may break some of the code in sd.cpp. Use the version pinned in the master branch instead.
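If you are vendoring stable-diffusion.cpp, one way to get back to the ggml revision its master branch was tested against is to let git check out the commit recorded in the submodule, instead of pulling the latest upstream ggml. A sketch (assumes a standard checkout of the upstream repo):

```
git clone https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
# Checks out the ggml commit recorded in .gitmodules/.git,
# i.e. the revision sd.cpp's master branch expects.
git submodule update --init --recursive
```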
> this personal AI study project is an Android turn-key project, you can reproduce this issue very easily on Xiaomi 14 or any other mainstream Android phone.
I will try to test it; I hope it compiles on the first try, since my time is very limited. It could also be that the model is very heavy and Android struggles with such large allocations. I suggest you try q4_0 quantization.
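For a sense of why q4_0 helps on a phone, here is a rough back-of-the-envelope sketch. ggml's q4_0 format packs 32 weights into an 18-byte block (one fp16 scale plus 32 four-bit values), i.e. 4.5 bits per weight versus 16 bits for f16. The 1B parameter count below is a placeholder for an SD-1.x-sized checkpoint, not the exact size of any particular model:

```python
# Rough memory math for f16 vs q4_0 weights.
# q4_0 block: 2-byte fp16 scale + 16 bytes of 4-bit quants = 18 bytes / 32 weights.
PARAMS = 1_000_000_000                # placeholder: ~1B weights

f16_bytes = PARAMS * 2                # 2 bytes per weight
q4_0_bytes = PARAMS // 32 * 18        # 18 bytes per 32-weight block

print(f"f16 : {f16_bytes / 2**30:.2f} GiB")   # ~1.86 GiB
print(f"q4_0: {q4_0_bytes / 2**30:.2f} GiB")  # ~0.52 GiB
```

So quantizing to q4_0 cuts the weight footprint to a bit over a quarter of the f16 size, which can make the difference between fitting in a phone's per-process memory budget and being killed by the OS.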
Thanks.
Google's gemma model can run well on Xiaomi 14 using llama.cpp.
I'll try q4_0 quantization later. thanks so much.
Hi,
Thanks for your amazing stable-diffusion.cpp.
I tried to integrate it into my personal study project, but it doesn't work as expected on Xiaomi 14:
https://github.com/zhouwg/kantv/blob/master/core/ggml/jni/ggml-jni-impl.cpp#L905
https://github.com/zhouwg/kantv/blob/master/core/ggml/jni/ggml-jni-impl-external.cpp#L1866
It fails in this function:
https://github.com/zhouwg/kantv/blob/master/core/ggml/stablediffusioncpp/model.cpp#L742
BTW, the latest upstream GGML source code was used during the integration, so the following two lines had to be modified accordingly:
https://github.com/zhouwg/kantv/blob/master/core/ggml/stablediffusioncpp/model.cpp#L568
https://github.com/zhouwg/kantv/blob/master/core/ggml/stablediffusioncpp/model.cpp#L601
The model comes from:
It works well on Ubuntu 20.04.