-
Did you take it out permanently?
-
Hello!
I've been using the WhisperX large-v2 model in English on a project to transcribe vocals extracted from songs via source separation with Spleeter. If it matters, I've been runn…
-
***BEFORE POSTING A BUG REPORT*** Please look through [existing issues (both open and closed)](https://github.com/nussl/nussl/issues?q=is%3Aissue) to see if it's already been reported or fixed!
*…
-
Great work! The idea is very interesting, and thank you for providing the code.
After running the script `download_models.sh`, I found out that there are several pretrained models in the folde…
-
I ran a Spleeter pre-trained model split on some Hindustani classical music recordings. Unfortunately, the source separation was too aggressive and the audio was suppressed in many cases, but there …
-
The real-time interpretation of underwater sounds can be improved by applying OSS machine learning and time-series techniques to streams of audio and, where available, the real-time output of other en…
-
From your paper, I wasn't sure of the role/purpose of `music_speech_audioset_epoch_15_esc_89.98.pt`.
Are these the saved model weights one should use if one wants to focus on separation of musical ins…
-
At long last, I finally have something that kind of, sort of, works.
Current status with my Oticon More 2s:
- HAs can be discovered and paired
- USB audio works (at least on Linux, not tested…
-
It would be great to get support for Real, Imag, and Complex. Anyone working with audio data has to deal with complex inputs, and this is very difficult without support for these ops.
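As an interim workaround sketch (function names here are hypothetical, not part of any ops spec): a complex array such as a spectrogram can be carried through a real-valued pipeline as two stacked channels, real and imaginary parts, and recombined afterwards.

```python
import numpy as np

def complex_to_channels(x: np.ndarray) -> np.ndarray:
    """Stack real and imaginary parts along a new leading axis,
    yielding a real-valued array of shape (2, *x.shape)."""
    return np.stack([x.real, x.imag], axis=0)

def channels_to_complex(c: np.ndarray) -> np.ndarray:
    """Inverse of complex_to_channels: recombine the two channels."""
    return c[0] + 1j * c[1]

# A tiny complex "spectrogram" round-trips exactly:
spec = np.array([[1 + 2j, 3 - 1j], [0 + 0j, -2 + 0.5j]])
channels = complex_to_channels(spec)      # shape (2, 2, 2), real dtype
restored = channels_to_complex(channels)  # equal to spec
```

This costs one extra axis and some bookkeeping, but keeps every intermediate tensor real-valued until native complex ops land.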
-
I am testing ODAS for possible use in an interactive computer art piece, to isolate participants and performers giving voice commands from the ambient noise/music of the active space.
In my testing,…