fszontagh / sd.cpp.gui.wx

Stable Diffusion GUI written in C++
https://stable-diffusion.fsociety.hu/
MIT License

Thank you for this great app. Could you make llama.cpp.gui.wx, please? #15

Open · JohnClaw opened this issue 7 months ago

JohnClaw commented 7 months ago

Most GUIs for llama.cpp are written in slow languages, and their compiled binaries are huge in comparison to executables produced by any C++ compiler.

fszontagh commented 7 months ago

@JohnClaw Thank you!

So, the main problem with the base libraries (ggml.c and sd.cpp) is assertion handling. When an assertion fails, abort() is called, which kills the entire app along with it. I checked out the app called jellybox, which starts llama and sd.cpp in separate processes (I couldn't get it to work on my machine, unfortunately); that is one way to contain the abort() in a GUI app. But currently in sd.cpp.gui.wx it is impossible to handle this differently, so when an assertion fails, the entire app stops without any warning to the user. That is a huge step back and very user-unfriendly. Also, the MS Store doesn't allow applications that call abort() (which is a good thing).
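
To make the failure mode concrete: abort() raises SIGABRT and is not a C++ exception, so try/catch in the GUI cannot contain it; only a process boundary can. Here is a minimal POSIX sketch of the separate-process idea (my own illustration, not code from sd.cpp.gui.wx or jellybox; `backend_inference` is a hypothetical stand-in for a backend call that trips a failed assertion):

```cpp
#include <csignal>
#include <cstdio>
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical stand-in for a backend call that hits a GGML_ASSERT-style
// check; such checks end in abort(), which raises SIGABRT.
void backend_inference() {
    fprintf(stderr, "backend: assertion failed\n");
    abort(); // not a C++ exception: try/catch in the caller cannot stop it
}

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child process: run the risky backend work in isolation.
        backend_inference();
        _exit(0);
    }
    // Parent (the GUI) waits and inspects how the child ended.
    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGABRT) {
        // The GUI process is still alive, so it can warn the user
        // instead of vanishing together with the backend.
        printf("backend crashed (SIGABRT); show an error dialog here\n");
    }
    return 0;
}
```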

Of course, I have thought about a llama.cpp.gui.wx, or about simply integrating llama into the currently existing GUI app (I had an idea to use AI-generated prompts with SD; see the sketch below). But for now I have lost interest in it because of the abort() issue in the backend libraries.
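
As a rough sketch of what the prompt-generation idea could look like, one could shell out to llama.cpp's CLI and hand the result to the image pipeline. The binary name `llama-cli`, the model path, and the flags below are assumptions that vary by llama.cpp version; none of this is project code:

```cpp
#include <array>
#include <cstdio>
#include <memory>
#include <stdexcept>
#include <string>

// Ask a local llama.cpp CLI to expand a short theme into an SD prompt.
// Binary name, model path, and flags are assumptions, not project code.
std::string generate_sd_prompt(const std::string& theme) {
    std::string cmd =
        "llama-cli -m ./models/llama.gguf -p "
        "\"Write a one-line Stable Diffusion prompt about: " + theme + "\"";
    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd.c_str(), "r"),
                                                  pclose);
    if (!pipe) throw std::runtime_error("failed to start llama-cli");
    std::array<char, 256> buf{};
    std::string out;
    while (fgets(buf.data(), buf.size(), pipe.get())) out += buf.data();
    return out; // note: may include the echoed prompt unless filtered by flags
}

int main() {
    std::string prompt = generate_sd_prompt("a foggy mountain village at dawn");
    printf("generated SD prompt: %s\n", prompt.c_str());
    // ...hand `prompt` to the sd.cpp text-to-image call from here...
    return 0;
}
```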

Looking ahead, I will try to upgrade sd.cpp to the latest commit. That will be a big job because of the large number of new features/commits, and some of those new features come with real limitations that are hard to track in the GUI.

At some point I will start doing something with llama, but not yet.

JohnClaw commented 7 months ago

> Of course, I have thought about a llama.cpp.gui.wx, or about simply integrating llama into the currently existing GUI app (I had an idea to use AI-generated prompts with SD). But for now I have lost interest in it because of the abort() issue in the backend libraries.

Chinese coders managed to overcome this problem and made a small C++ GUI that combines features of llama.cpp and stable-diffusion.cpp: https://github.com/ylsdamxssjxxdd/eva/releases/download/b2636/eva-b2636-64bit.exe

Project's code: https://github.com/ylsdamxssjxxdd/eva?tab=readme-ov-file