-
Hi, when I run `npm run start` I get:
`sh: 1: react-scripts: not found`
Then, when I run `npm i`, I get this:
```
angel@PCLX:~/Descargas/whisper-playground-main/interface$ npm i
Debugger attached.
…
-
QuickChat -- UI affordance for quickly 'saying' (yelling, whispering, action-ing) stock phrases. Primary use case is mobile without a keyboard.
Ideas:
Tap button, show fan of at most N items. …
-
Hi!
I have an error:
Error transcribing file: DecodingOptions.__init__() got an unexpected keyword argument 'progress_callback'
when I use `--disable_faster_whisper=true` with all models.
Without --di…
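
In case it helps narrow this down: plain openai-whisper builds a `DecodingOptions` dataclass from the extra transcribe keyword arguments and rejects any field it doesn't know, such as `progress_callback`. A minimal sketch of a workaround, assuming the caller can touch the code that builds the options dict (the dict contents, model size, and file name below are illustrative, not from this report):

```python
# Minimal sketch, not the project's code: drop kwargs that plain openai-whisper's
# DecodingOptions does not accept (e.g. 'progress_callback') before transcribing.
import dataclasses

import whisper

model = whisper.load_model("base")  # illustrative model size
decode_options = {
    "language": "en",
    "progress_callback": lambda done, total: None,  # not a DecodingOptions field
}

# DecodingOptions is a dataclass, so its accepted keyword arguments are its fields.
supported = {f.name for f in dataclasses.fields(whisper.DecodingOptions)}
filtered = {k: v for k, v in decode_options.items() if k in supported}

result = model.transcribe("audio.wav", **filtered)  # illustrative file name
print(result["text"])
```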
-
I want to add a new model the same way as ASR_MODEL=base, but when I try to download Belle-distilwhisper-large-v2 to the model path, it doesn't run correctly.
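
One hedged aside rather than a confirmed fix: Belle-distilwhisper-large-v2 appears to be distributed in Hugging Face transformers format, so it generally can't be loaded the way the built-in `base` checkpoint is. A quick way to check the downloaded model outside the service (the Hub repo id and audio path below are assumptions, and this is not the project's own loading code):

```python
# Hedged sketch: load the distil-whisper model with the transformers ASR pipeline
# to verify the files themselves work, independently of the ASR_MODEL setting.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-distilwhisper-large-v2-zh",  # assumed Hub repo id
)
print(asr("sample.wav")["text"])  # assumed audio file
```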
-
Expose the recognizer via the SpeechRecognizer API, so that other apps could use it directly, not only via the IME API, e.g. I'd like to use it via Kõnele
(http://kaljurand.github.io/K6nele/about/).
…
-
There's a fantastic demo here https://github.com/ggerganov/whisper.cpp/tree/master/examples/talk-llama. I think a good UI could be putting a button in the text input, like iOS Messages. Hitting that wo…
-
I am using Eleven Labs, Text Gen UI and its API, and Whisper Live for transcription, but it seems to be constantly listening to my microphone and it never sends the messages:
```
while True:
user_me…
-
Speaker - Maria_Kasper
Text - "Emoti Voice is a powerful and modern open-source text-to-speech engine. Emoti Voice speaks both English and Chinese, and with over two thousand different voices. The mo…
-
I have 2 questions:
1. Which path should the whisper_stt model file be placed in?
2. On Hugging Face there are many files (e.g. under openai/whisper-tiny/ there are 4 model files: .msgpack, .safetensor, .bi…
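
On question 2, a hedged note: the different extensions are the same weights serialized for different backends (`.bin` for PyTorch pickle, `.safetensors` for safetensors, `.msgpack` for Flax, `.h5` for TensorFlow), so a PyTorch setup only needs one of them plus the config/tokenizer files. A sketch that downloads just that subset (the destination directory is an assumption, not the project's required path):

```python
# Sketch only: fetch the safetensors weights plus config/tokenizer files and skip
# the duplicate .bin/.msgpack/.h5 weights. local_dir is an assumed destination.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openai/whisper-tiny",
    local_dir="./models/whisper-tiny",
    allow_patterns=["*.safetensors", "*.json", "*.txt"],
)
```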
-
https://github.com/mediar-ai/screenpipe/blob/7922064dda8c1882a3e52da0c30430fe4d641170/screenpipe-app-tauri/src-tauri/src/llm_sidecar.rs#L62
Ollama on Windows needs `OLLAMA_ORIGINS=*` because its CO…
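
For reference, the effect being asked for, sketched outside the Rust sidecar linked above (Python purely for illustration, and assuming the sidecar ultimately launches `ollama serve`):

```python
# Illustration only (the actual sidecar above is Rust): start the ollama server with
# OLLAMA_ORIGINS set so webview requests aren't rejected by its CORS check.
import os
import subprocess

env = dict(os.environ, OLLAMA_ORIGINS="*")  # "*" allows any origin; a specific origin also works
subprocess.Popen(["ollama", "serve"], env=env)
```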