Closed fakhirali closed 8 months ago
A few setup problems:

- There should be a bash or other script that installs dependencies like llama-cpp with the correct flags.
- The media folder is very large, so `git clone` takes a long time.
- The link to onnxruntime is wrong; it should be https://onnxruntime.ai/docs/install/
- vosk should be added to the requirements.
- There is no test.wav.
- The llama and piper models have to be downloaded manually (https://huggingface.co/rhasspy/piper-voices/tree/v1.0.0/en/en_US and https://huggingface.co/TheBloke/Llama-2-7B-GGUF). Ideally they would download themselves.
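A minimal sketch of what such an install script could look like. The package list and the CMake flag are assumptions (a CUDA build of llama-cpp-python; other backends need different flags), and `run()` only echoes commands unless `RUN=1`, so the script can be previewed before it touches the environment:

```shell
#!/usr/bin/env bash
# Sketch of the requested install script -- package names and the CUDA
# flag are assumptions; adjust for your hardware and backend.
set -euo pipefail

# run() echoes the command unless RUN=1 is set, so the script is safe to preview
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# llama-cpp-python needs its build flags passed at pip-install time
CMAKE_ARGS="-DGGML_CUDA=on" run pip install llama-cpp-python

# remaining speech dependencies mentioned in the issue
run pip install vosk onnxruntime piper-tts
```

Running it once with `RUN=1 bash install.sh` would perform the actual installs.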
Also add a print statement to show that it is listening to speech, here.
- Onnx-runtime link fixed
- vosk added to reqs
- test.wav replaced with my_voice.wav
- print statement added
- auto model downloading being worked on in a branch
- Bash script should be in pip
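For the auto-downloading branch, a download-on-first-use helper is one simple shape for this. A minimal stdlib-only sketch; the exact file paths under the piper-voices and Llama-2-7B-GGUF repos linked above are assumptions, not confirmed names:

```python
# Sketch of "models download themselves": fetch on first use, reuse the
# cached file afterwards. URLs point at the repos linked in the issue,
# but the specific voice/quantization file names are assumptions.
from pathlib import Path
from urllib.request import urlretrieve

MODELS = {
    # assumed voice file under the rhasspy/piper-voices repo
    "piper": "https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/en/en_US/lessac/medium/en_US-lessac-medium.onnx",
    # assumed quantization under the TheBloke/Llama-2-7B-GGUF repo
    "llama": "https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_K_M.gguf",
}

def ensure_model(name: str, cache_dir: str = "models") -> Path:
    """Return the local path for a model, downloading it only if missing."""
    url = MODELS[name]
    dest = Path(cache_dir) / url.rsplit("/", 1)[-1]
    if not dest.exists():                       # skip download if already cached
        dest.parent.mkdir(parents=True, exist_ok=True)
        urlretrieve(url, dest)                  # plain stdlib download
    return dest
```

Callers would then do `llm = Llama(model_path=str(ensure_model("llama")))` instead of documenting a manual download step.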