Closed: Bikram9035 closed this issue 3 months ago
Hey, are you setting OPENAI_API_KEY and OPENAI_API_BASE environment variables?
Yeah, I did set up the OpenAI key in a .env file, but I don't know what to put in OPENAI_API_BASE, or where to find that URL on the official OpenAI website. Please let me know.
Please provide the correct link; the one I tried returns a 404 "page not found" error.
Apart from that, I tried it myself. Here's the code:

```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv(".env")

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://api.openai.com/v1/audio/transcriptions",
)

audio_file = open("voice.m4a", "rb")
transcript = client.audio.transcriptions.create(
    model="Systran/faster-distil-whisper-large-v3",
    file=audio_file,
)
print(transcript.text)
```
but now the error is different; here's what it shows:

```
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Invalid URL (POST /v1/audio/transcriptions/audio/transcriptions)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
Please help; this is all new to me. Hope to hear from you soon.
thank you
You'll need to spin up faster-whisper-server using Docker, and your base_url would be http://localhost:8000/v1/. In your code, you are using OpenAI's URL, and you've also put the full endpoint path into base_url, so the SDK appends /audio/transcriptions a second time; that's the doubled path in your 404.
Hey Fedirz, is it also compatible with the OpenAI SDK for Node.js? Also, in your example you pass the API key as a string, which implies it can't be left empty. Does that mean I can put in any string, just to avoid an error?
Yes
Hi @Bikram9035 and @fedirz,
I have implemented a solution for using custom models with the faster-whisper-server. Here’s a brief overview of my setup and workflow:
Frontend:
Backend: audio is processed with ffmpeg and split into chunks using a Python script.

You can find the full implementation and detailed instructions in my repository here.
For a simpler example that connects directly to faster-whisper-server via WebSocket, you can refer to my whisper-html project. It demonstrates a straightforward way to send audio data and receive transcriptions using only HTML and JavaScript. You can find the whisper-html project here.
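For illustration, the ffmpeg chunking step mentioned above could be sketched roughly like this (my own guess at the approach; the chunk length, file names, and output pattern are made up, and the real script lives in the linked repository):

```python
import subprocess

def ffmpeg_split_cmd(src: str, chunk_seconds: int = 30,
                     out_pattern: str = "chunk_%03d.wav") -> list[str]:
    """Build an ffmpeg command that splits `src` into fixed-length chunks.

    Uses ffmpeg's segment muxer; `-c copy` splits the stream
    without re-encoding it.
    """
    return [
        "ffmpeg", "-i", src,
        "-f", "segment",
        "-segment_time", str(chunk_seconds),
        "-c", "copy",
        out_pattern,
    ]

cmd = ffmpeg_split_cmd("recording.wav")
# subprocess.run(cmd, check=True)  # uncomment to run; requires ffmpeg on PATH
print(" ".join(cmd))
```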
Feel free to reach out if you have any questions or need further assistance.
Best regards,
[Gan-Xing]
Hello Fedirz,

```python
audio_file = open("recorded_audio.wav", "rb")
transcription = gpt.audio.transcriptions.create(
    model="Systran/faster-distil-whisper-large-v3",
    file=audio_file,
)
tts = transcription.text
```
I am getting this error when using that model name with the official OpenAI Whisper API:

```
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'invalid model ID', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
I'm new to this API, and there's a lot of overwhelming misinformation online, but I'm super excited to make this work. I hope you understand my situation.
OS: Windows; editor: VS Code; Python 3.12.2
Thank you.