KoljaB / LocalAIVoiceChat

Local AI talk with a custom voice based on Zephyr 7B model. Uses RealtimeSTT with faster_whisper for transcription and RealtimeTTS with Coqui XTTS for synthesis.

use existing llama.cpp install #9

Open scalar27 opened 3 months ago

scalar27 commented 3 months ago

I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie question...

KoljaB commented 3 months ago

Just leave out step 2 of the installation. I think the Coqui engine does not run in realtime on a Mac, though.

scalar27 commented 3 months ago

I did leave out step 2, but then I get an error when I try to run: `ModuleNotFoundError: No module named 'llama_cpp'`

KoljaB commented 3 months ago

If the Python import of llama_cpp fails, it means your environment does not have working Python bindings for llama.cpp. Please look here for Mac bindings, probably Metal (MPS).
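For reference, a minimal sketch of installing the Metal-accelerated bindings on Apple Silicon. The `CMAKE_ARGS` approach follows the llama-cpp-python install docs; the exact flag name has changed across versions (older releases used `-DLLAMA_METAL=on`), so check the version you install:

```shell
# Remove any CPU-only build of the bindings first (assumed package name:
# llama-cpp-python, which provides the llama_cpp module).
pip uninstall -y llama-cpp-python

# Rebuild with the Metal (MPS) backend enabled; --no-cache-dir forces a
# fresh compile instead of reusing a cached CPU wheel.
CMAKE_ARGS="-DGGML_METAL=on" pip install --no-cache-dir llama-cpp-python

# Sanity check: the import that ai_voicetalk_local.py performs.
python -c "import llama_cpp; print(llama_cpp.__version__)"
```

This installs the Python bindings alongside your existing llama.cpp checkout rather than replacing it; the script only needs the `llama_cpp` module to be importable.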

scalar27 commented 3 months ago

Thank you. I did get it to work following your comment. Like the other M1 person, I do get stuttering. It's a shame because the voice quality is excellent and the latency is rather short. Hope a future update might solve this for us!