Closed prasadchandrasekaran closed 2 months ago
What command-line parameters did you add when running run_server.py?
I am able to do it on CPU successfully and here's what I use:
python run_server.py --port 9090 --backend faster_whisper
and this is what my client request looks like:
from whisper_live.client import TranscriptionClient

client = TranscriptionClient(
    host="localhost",
    port=9090,
    # lang="hi",
    translate=False,
    model="tiny",
    use_vad=True,
    save_output_recording=False,  # Set to True if you want to save audio
)
# To transcribe from a microphone:
client()
I am using real-time microphone transcription, hence use_vad is set to True.
Closed due to inactivity. Feel free to re-open if needed.
I'm running the CPU version of Whisper Live, and here's the client code I'm using:
from whisper_live.client import TranscriptionClient

client = TranscriptionClient(
    "localhost",
    9090,
    lang="en",
    translate=False,  # Only used for microphone input
)
client()
However, I'm encountering a connection failure with the following error:
Client Error:

python3 client.py
[INFO]: * recording
[INFO]: Waiting for server ready ...
False en transcribe
[INFO]: Opened connection
[INFO]: Websocket connection closed: 1000:
Server Error:

INFO:root:Single model mode currently only works with custom models.
INFO:websockets.server:connection open
INFO:root:New client connected
ERROR:root:Error during new connection initialization: 'model'
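That last server line reads like a KeyError: the client's handshake options apparently lack a "model" field, and the server indexes it directly. Note that the working example earlier in this thread passes model="tiny" explicitly, while the failing client omits model entirely. A minimal sketch of that failure mode (the option keys and the fallback default here are assumptions for illustration, not WhisperLive's actual handshake schema):

```python
# Hypothetical options the failing client sends: no "model" key present.
options = {"language": "en", "task": "transcribe"}

# Direct indexing reproduces an error message of the same shape as the
# server log: the KeyError's str() is just the missing key, 'model'.
try:
    model = options["model"]
except KeyError as exc:
    print(f"Error during new connection initialization: {exc}")

# Two possible fixes: pass model explicitly on the client side
# (e.g. model="tiny", as in the working example above), or have the
# server fall back to a default when the key is absent:
model = options.get("model", "small")  # "small" is a hypothetical default
print(model)
```

If the same error persists with model set on the client, the server-side handshake handling would be the next place to look.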