scrawnyether5669 opened this issue 1 year ago
I have a branch that moves more of the processing into native code; I believe it should bring a noticeable performance improvement. You can also try 3B models with this version, which should also be much faster. Feel free to try it. Note that the new llama.cpp changes model compatibility: models that used to work with Sherpa probably no longer work until they are reconverted. Pull request: https://github.com/Bip-Rep/sherpa/pull/12 apk available: https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd2
Hi dsd, it works with the apk you provided, but I failed to run it from your forked source. Also, when I run it on my Mac, it shows "Library not loaded: @rpath/libllama.dylib".
It's my first time developing Android apps, but feel free to share details about the failure to run from source and I'll let you know if I have any ideas.
I did not do any work to retain Mac compatibility, but I think this is what needs to be done: https://github.com/Bip-Rep/sherpa/pull/12#issuecomment-1621045871
Is this app using both the CPU and GPU of smartphones? Also, is there any chance of making it run with less RAM, like 4 GB?
llama.cpp is used as the backend, so you would need to check if llama.cpp supports your GPU, and if it is usable on 4GB RAM with the model you are interested in.
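As a rough way to sanity-check the 4 GB question, here is a back-of-envelope sketch. The formula, the ~4.5 bits/weight figure for 4-bit-style quantization, and the 0.5 GB runtime overhead are my own assumptions for illustration, not numbers from llama.cpp:

```python
def approx_model_ram_gb(n_params_billion, bits_per_weight, overhead_gb=0.5):
    """Very rough resident-memory estimate: quantized weights plus a
    guessed overhead for the KV cache and runtime buffers."""
    weights_gb = n_params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 7B model at ~4.5 bits/weight (q4_0-style quantization):
print(f"7B: {approx_model_ram_gb(7, 4.5):.1f} GB")  # ~4.4 GB, tight on a 4 GB phone
# A 3B model at the same quantization:
print(f"3B: {approx_model_ram_gb(3, 4.5):.1f} GB")  # ~2.2 GB, more plausible on 4 GB
```

By this estimate, a quantized 3B model is the realistic option on a 4 GB device, which matches the suggestion above to try 3B models with the new build.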
Does it support any mobile GPUs like Mali or Adreno?
I installed the latest version and it's a cool app, but it's so slow. I'm running Vicuna 7B; is there a way to make it faster? I have a phone with 8 GB of RAM. Also, what other models does it support? Please link me to them.