Open adjei7 opened 1 month ago
Hi @adjei7, thanks for your comment, here is some advice:
3. `ollama = Ollama(model="model_name")`
Use `ollama` as the `llm` property in `ConversationChain`.
Use poetry instead; this project was built and tested under a poetry env.

Thank you @vndee.
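The advice above can be sketched as follows. This is a minimal sketch, assuming the LangChain community `Ollama` wrapper is installed and an Ollama server is already running locally with the chosen model pulled; the model name is a placeholder:

```python
# Minimal sketch, assuming `langchain` / `langchain-community` are installed
# and an Ollama server is running locally (e.g. `ollama serve`) with the
# model already pulled (`ollama pull llama2`).
from langchain_community.llms import Ollama
from langchain.chains import ConversationChain

ollama = Ollama(model="llama2")        # swap in your model name
chain = ConversationChain(llm=ollama)  # use ollama as the llm property

reply = chain.predict(input="Hello, who are you?")
print(reply)
```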
One more question. I have installed this on a Raspberry Pi 5 with 8GB RAM. The final step of the process (TTS) takes a very long time, nearly two hours when I tested it. I know this was not intended for a Pi, but the other stages (STT and LLM answer generation) take only seconds. Is there an alternative TTS engine to Bark that you would recommend? I'm quite new to this, so I'm unfamiliar with the different platforms. Thanks :)
Hey @adjei7 You could try this: https://github.com/huggingface/parler-tts
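For anyone wanting to try it, Parler-TTS can be installed directly from the repository linked above; a setup sketch (install path taken from the repo URL, not from this project's docs):

```shell
# Install Parler-TTS straight from the Hugging Face repo linked above.
pip install git+https://github.com/huggingface/parler-tts.git
```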
First of all, what you have done here is awesome. Thank you. This is by far one of the simplest ways to run ollama with whisper that I have seen. But I have run into a few issues.
When installing the dependencies via requirements.txt, pyaudio failed to install because portaudio was not present. Installing portaudio separately resolved the issue. Is it possible to document that alongside requirements.txt? (As a system library, portaudio can't be installed by pip itself.)
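Since pip cannot install system libraries, the usual fix on a Pi (Debian/Raspberry Pi OS; package name assumes an apt-based system) is to install the portaudio development headers first:

```shell
# Install the portaudio headers first; pyaudio then builds cleanly.
sudo apt-get install -y portaudio19-dev
pip install pyaudio
```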
When running app.py, it complained that punkt (NLTK tokenizer data) was missing. Installing it sorted the issue.
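Since `punkt` is NLTK tokenizer data rather than a pip package, the one-time fix can be scripted; a sketch assuming `nltk` is installed and network access is available:

```python
# One-time download of the NLTK punkt tokenizer data used for
# sentence splitting.
import nltk

nltk.download("punkt", quiet=True)
```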
Is there a way to have it use a different default model instead of llama2, something more lightweight like phi-2? I'm trying to run this on a Pi 5.
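One lightweight pattern for this (a sketch of my own, not something the repo necessarily supports): read the model name from an environment variable with a small default, so a Pi can fall back to something like `phi` after `ollama pull phi`. The helper and the `OLLAMA_MODEL` variable are illustrative names:

```python
import os

# Hypothetical helper: choose the Ollama model from an env var, falling
# back to a lightweight default suitable for a Pi 5. `OLLAMA_MODEL` and
# `get_model_name` are illustrative, not part of the project.
def get_model_name(default: str = "phi") -> str:
    return os.environ.get("OLLAMA_MODEL", default)

# e.g. Ollama(model=get_model_name()) uses "phi" unless OLLAMA_MODEL is set
print(get_model_name())
```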