Bip-Rep / sherpa

A mobile implementation of llama.cpp
MIT License
283 stars · 33 forks

how to make it faster #6

Open · scrawnyether5669 opened this issue 1 year ago

scrawnyether5669 commented 1 year ago

I installed the latest version and it's a cool app, but it's so slow. I'm running Vicuna 7B; is there a way to make it faster? I have a phone with 8 GB of RAM. Also, what other models does it support? Please link me to them.

dsd commented 1 year ago

I have a branch that moves more of the processing into native code; I believe it should bring a noticeable performance improvement. You can also try 3B models with this version, which should be much faster still. Feel free to try. Note that the new llama.cpp changes model compatibility: models that used to work with Sherpa probably won't work any more until they are converted.
Pull request: https://github.com/Bip-Rep/sherpa/pull/12
APK available: https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd2
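
For context, here is roughly how the incompatibility shows up at the llama.cpp level: old-format model files simply fail to load until they are converted with the scripts that ship in the llama.cpp repository. A minimal sketch against the llama.cpp C API (the function names are assumptions that have shifted between versions; check the `llama.h` of your checkout):

```cpp
// Minimal model-load check using the llama.cpp C API.
// Old-format (pre-conversion) model files fail at llama_load_model_from_file.
#include <cstdio>
#include "llama.h"

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model-file>\n", argv[0]);
        return 1;
    }

    llama_backend_init();  // older releases take a bool numa argument here

    llama_model_params mparams = llama_model_default_params();
    llama_model *model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s - the file likely needs to be "
                        "converted to the current format\n", argv[1]);
        return 1;
    }

    printf("model loaded OK\n");
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```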

realcarlos commented 1 year ago

> I have a branch that moves more of the processing into native code; I believe it should bring a noticeable performance improvement. […]

Hi dsd, it works with the apk you provided, but I failed to run it from your forked source. When I run it on my Mac, it shows "Library not loaded: @rpath/libllama.dylib".

dsd commented 1 year ago

It's my first time developing Android apps, but feel free to share info about the failure to run from source, and I will let you know if I have any ideas.

I did not do any work to retain Mac compatibility, but I think this is what needs to be done: https://github.com/Bip-Rep/sherpa/pull/12#issuecomment-1621045871

suoko commented 11 months ago

> I have a branch that moves more of the processing into native code; I believe it should bring a noticeable performance improvement. […]

Is this app using both the CPU and GPU of smartphones? Also, is there any chance to make it run with less RAM, like 4 GB?

dsd commented 11 months ago

llama.cpp is used as the backend, so you would need to check whether llama.cpp supports your GPU, and whether it is usable in 4 GB of RAM with the model you are interested in.
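
As a rough illustration of where those two knobs live in the llama.cpp C API: GPU use is controlled by how many layers you offload (which only does anything if llama.cpp was built with a backend that supports your GPU), and RAM use is dominated by the quantized model size plus the context. A sketch with placeholder values, not a tested configuration:

```cpp
// Sketch: GPU offload and context size in the llama.cpp C API
// (assumed recent API; values below are placeholders, not recommendations).
#include "llama.h"

llama_context *open_model(const char *path) {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // Number of transformer layers to run on the GPU; 0 = CPU only.
    // Has no effect unless llama.cpp was compiled with a matching GPU backend.
    mparams.n_gpu_layers = 0;  // placeholder

    llama_model *model = llama_load_model_from_file(path, mparams);
    if (model == NULL) {
        return NULL;
    }

    llama_context_params cparams = llama_context_default_params();
    // A smaller context window lowers memory use; on a 4 GB phone you would
    // also want a small, heavily quantized model (e.g. a 3B at 4-bit).
    cparams.n_ctx = 512;  // placeholder

    return llama_new_context_with_model(model, cparams);
}
```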

suoko commented 11 months ago

> llama.cpp is used as the backend, so you would need to check whether llama.cpp supports your GPU […]

Does it support any mobile GPUs, like Mali or Adreno?