-
Hey @shashikg, great repo, and cheers for the insane effort that went into building it.
I have a fine-tuned Whisper model (in both the original OpenAI and HF formats) which I want to use with the TensorRT backend …
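For reference, a minimal sketch of the loading path based on the README's TensorRT-LLM example — whether `model_identifier` also accepts a local fine-tuned checkpoint directory is exactly the part I am unsure about:

```python
# Sketch based on the WhisperS2T README's TensorRT-LLM example; passing a
# local fine-tuned checkpoint path as model_identifier is an open question.
import whisper_s2t

model = whisper_s2t.load_model(model_identifier="large-v2", backend="TensorRT-LLM")
out = model.transcribe_with_vad(
    ["audio.wav"],
    lang_codes=["en"],
    tasks=["transcribe"],
    initial_prompts=[None],
    batch_size=16,
)
```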
-
What are the main differences between the large-v1, v2, and v3 models? They all seem to be nearly the exact same size, so I am curious how I can see what the differences are.
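One way to inspect this programmatically (a sketch assuming the openai-whisper package): the checkpoints share one architecture, which is why the file sizes match, but the configs differ — for example, large-v3 reports 128 mel bins where v1/v2 report 80.

```python
# A sketch with the openai-whisper package: the dims config exposes the
# per-checkpoint differences (note this downloads each ~3 GB checkpoint).
import whisper

for name in ("large-v1", "large-v2", "large-v3"):
    model = whisper.load_model(name, device="cpu")
    print(name, model.dims)  # large-v3 shows n_mels=128; v1/v2 show n_mels=80
```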
-
### Confirm this is an issue with the Python library and not an underlying OpenAI API
- [X] This is an issue with the Python library
### Describe the bug
I'm making a transcription of audio using t…
-
While trying to fine-tune the openai/whisper-medium model on the google/fleurs dataset, even when using only one language (Greek), I very quickly run out of VRAM on a 20 GB GPU.
Is there some way to r…
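For context, a minimal sketch of the standard memory-reduction knobs in the HF Trainer (parameter names are from `transformers.Seq2SeqTrainingArguments`; the values are illustrative, not a recommendation for this dataset):

```python
# A sketch of common VRAM-saving settings for Seq2Seq fine-tuning;
# values are illustrative.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-fleurs-el",
    per_device_train_batch_size=4,    # smaller per-step batch...
    gradient_accumulation_steps=4,    # ...same effective batch size of 16
    gradient_checkpointing=True,      # recompute activations to save memory
    fp16=True,                        # half-precision training
)
```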
-
Here's what I get
```
Test Case '-[WhisperKitTests.FunctionalTests testRealTimeFactorLarge]' started.
/Users/jrp/Documents/AI/whisperkit/Sources/WhisperKit/Core/Models.swift:34: error: -[WhisperK…
```
-
Whisper may hallucinate text when an audio chunk is silent or contains only noise (see https://github.com/elixir-nx/bumblebee/issues/377#issuecomment-2208521942). The openai-whisper implementation has `no_speech_t…
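For comparison, this is how the openai-whisper side exposes it (a sketch; the threshold value shown is that library's default): `transcribe()` takes a `no_speech_threshold`, and each returned segment carries a `no_speech_prob` that can be filtered on.

```python
# Sketch of openai-whisper's silence handling: no_speech_threshold gates
# decoding, and segments expose no_speech_prob for post-hoc filtering.
import whisper

model = whisper.load_model("base")
result = model.transcribe(
    "audio.wav",
    no_speech_threshold=0.6,   # library default
    logprob_threshold=-1.0,    # both conditions gate the no-speech decision
)
speech_only = [s for s in result["segments"] if s["no_speech_prob"] < 0.6]
```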
-
Whisper is an on-premise, GPU-based, general-purpose multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
https://github.com/opena…
-
### 🥰 Feature Description
Please add Whisper v3 to LobeChat for a better STT service.
Try using the Hugging Face Spaces API (https://huggingface.co/spaces/openai/whisper), (https://huggingface.co/spaces/hf-…
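A hedged sketch of what calling the Space could look like via `gradio_client` — the endpoint name and argument order below are assumptions about how the openai/whisper Space is exposed, not verified:

```python
# Assumption-heavy sketch: the api_name and inputs of the openai/whisper
# Space are guesses; inspect client.view_api() to get the real signature.
from gradio_client import Client

client = Client("openai/whisper")
text = client.predict("sample.mp3", api_name="/predict")
print(text)
```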
-
Currently it is only possible to get segment timestamps, but no other granularities.
It is possible to set other granularities in the client and in the API, but it looks like the form-data request from the client i…
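For reference, the request shape that works against the OpenAI API directly — word-level timestamps require `response_format="verbose_json"` together with `timestamp_granularities`:

```python
# Sketch with the openai Python client; word-level timestamps require
# verbose_json together with timestamp_granularities.
from openai import OpenAI

client = OpenAI()
with open("audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=f,
        response_format="verbose_json",
        timestamp_granularities=["word", "segment"],
    )
print(transcript.words)  # per-word start/end times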
-
**Description**
Add Azure OpenAI as a supported inference provider
**Tasks**
- [ ] Implement inference provider [following these instructions](https://i-am-bee.github.io/bee-agent-framework/#/ll…
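For scoping, a language-neutral sketch of the underlying call the provider would wrap — shown here with the `AzureOpenAI` client from the openai Python package (the framework itself is TypeScript; the endpoint, environment variable names, and deployment name below are placeholders):

```python
# Sketch of the Azure OpenAI call an inference provider would adapt;
# environment variable names and the deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)
response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # Azure uses deployment names, not model ids
    messages=[{"role": "user", "content": "Hello from Bee"}],
)
print(response.choices[0].message.content)
```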