-
Hi. I followed the setup in the README and ran `bash omni_speech/infer/run.sh omni_speech/infer/examples`, but encountered this error:
```
Traceback (most recent call last):
  File "/home/rczheng/LLaM…
```
-
Hi,
I was reading the paper Small-E: Small Language Model with Linear Attention for Efficient Speech Synthesis and wanted to listen to the demo audio. However, when I followed the link provided in …
-
### Discussed in https://github.com/langchain-ai/langchain/discussions/27404
Originally posted by **kodychik** October 16, 2024
### Checked
- [X] I searched existing ideas and did not find …
-
Hello,
I'm a blind person who uses Orca to work with Linux. I'd like to use SpeechNote for work, to create audio files of material for others to use.
Blind people use the keyboard to a…
-
# Feature Request
## Is your feature request related to a problem? Please describe.
The new Eleven Labs API now supports an optional language parameter for the Turbo V2.5 model:
https://elevenlabs.io/docs/ap…
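For illustration, a minimal sketch of how the parameter could be passed, assuming the field is named `language_code` as in the linked docs (the voice ID and API key below are placeholders):

```python
import requests

VOICE_ID = "your-voice-id"   # placeholder
API_KEY = "your-xi-api-key"  # placeholder

# Request Turbo V2.5 speech with an enforced output language (ISO 639-1 code).
response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Guten Tag! Dies ist ein Test.",
        "model_id": "eleven_turbo_v2_5",
        "language_code": "de",  # the optional parameter this request is about
    },
)
response.raise_for_status()

with open("output.mp3", "wb") as f:
    f.write(response.content)
```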
-
### Ticket Contents
Develop speech & translation connectors for AWS and GCP.
### Goals
To provide support for speech and translation of data for cloud providers such as GCP an…
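As a rough sketch of the kinds of calls such connectors would wrap (assuming the official `google-cloud-speech` and `boto3` SDKs with credentials already configured; the bucket URI and text are placeholders):

```python
from google.cloud import speech
import boto3

# GCP Speech-to-Text: transcribe a short audio file stored in Cloud Storage.
speech_client = speech.SpeechClient()
gcp_response = speech_client.recognize(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    ),
    audio=speech.RecognitionAudio(uri="gs://your-bucket/sample.wav"),  # placeholder URI
)
for result in gcp_response.results:
    print(result.alternatives[0].transcript)

# AWS Translate: translate the transcript (or any text) between languages.
translate = boto3.client("translate")
aws_response = translate.translate_text(
    Text="Hello, world",
    SourceLanguageCode="en",
    TargetLanguageCode="de",
)
print(aws_response["TranslatedText"])
```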
-
:red_circle: **Title**: Speech Recognition model
:red_circle: **Aim**: Integrate speech recognition in JARVIS using Python libraries to enhance voice command functionality for seamless
…
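One possible starting point, sketched with the `SpeechRecognition` library (the library choice is an assumption, not part of the original request):

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Capture one voice command from the default microphone.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening for a command...")
    audio = recognizer.listen(source)

try:
    # Google Web Speech API backend; swap in another recognizer if preferred.
    command = recognizer.recognize_google(audio)
    print(f"Heard: {command}")
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as error:
    print(f"Recognition service error: {error}")
```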
-
Original Repository: https://github.com/ml-explore/mlx-examples/
Listing examples from there that would be nice to have. We don't expect the models to work out the moment they are translated to …
-
```python
from RealtimeSTT import AudioToTextRecorder
import pyperclip

def process_text(text):
    pyperclip.copy(text)

if __name__ == '__main__':
    print("Wait until it says 'speak now'")
    r…
```
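The snippet is cut off after `r…`; for reference, the usual RealtimeSTT clipboard example continues roughly like this (a sketch based on the library's documented `AudioToTextRecorder` usage, not the missing lines themselves):

```python
    # Assumed continuation: create a recorder and copy each transcription to the clipboard.
    recorder = AudioToTextRecorder()
    while True:
        recorder.text(process_text)  # blocks until a phrase is transcribed, then invokes the callback
```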
-
Hello! Could you please add the SALMONN series of models?
Title | Venue | Date | Code | Demo
-- | -- | -- | -- | --
[SALMONN: Towards Generic Hearing Abilities for Large Language Models](https://arxiv.o…