-
The current method of preprocessing the music data is to split it into equal-time chunks and convert each chunk into an array of dB values at each frequency at each time step (a ~1024 x 44 array, where the x axis is frequen…
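For concreteness, here is a minimal sketch of that preprocessing, assuming librosa; the chunk length, FFT size, hop, and the filename `song.wav` are guesses on my part, with only the ~1024 x 44 dB-array shape coming from the description above:
```python
# Sketch of the described preprocessing (chunk length, n_fft and hop are
# assumptions; only the target ~1024 x 44 dB-array shape is from the post).
import numpy as np
import librosa

def chunk_to_db(chunk, n_fft=2048):
    """One equal-time chunk -> (frequency x time) array of dB values."""
    hop = max(1, len(chunk) // 44)                   # aim for ~44-45 time steps
    S = np.abs(librosa.stft(chunk, n_fft=n_fft, hop_length=hop))
    return librosa.amplitude_to_db(S, ref=np.max)    # ~ (1025, 45) for n_fft=2048

y, sr = librosa.load("song.wav", sr=None, mono=True)  # hypothetical input file
chunk_len = 2 * sr                                    # 2-second chunks (a guess)
chunks = [y[i:i + chunk_len] for i in range(0, len(y) - chunk_len + 1, chunk_len)]
db_arrays = [chunk_to_db(c) for c in chunks]
```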
-
I am an EWI (electronic wind instrument) player - for us it is very important to map CC#2 to parameters in synths (e.g. to a filter cutoff, to the wavetable position of a generator, etc. in order to c…
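To illustrate the idea, here is a hedged sketch of reading CC#2 (breath control) and mapping it to a cutoff value; the port, the cutoff range, and the exponential curve are my own choices, not tied to any particular synth:
```python
# Sketch: listen for CC#2 (breath) and map it to a filter cutoff.
# Cutoff range and curve are assumptions; an exponential mapping tends
# to feel more natural for frequency-like parameters than a linear one.
import mido

def cc2_to_cutoff(value, lo_hz=200.0, hi_hz=12000.0):
    """Map a 0-127 breath value to a cutoff frequency in Hz."""
    return lo_hz * (hi_hz / lo_hz) ** (value / 127.0)

with mido.open_input() as port:        # opens the default MIDI input port
    for msg in port:                   # blocks, yielding incoming messages
        if msg.type == "control_change" and msg.control == 2:  # CC#2 = breath
            cutoff = cc2_to_cutoff(msg.value)
            print(f"breath {msg.value:3d} -> cutoff {cutoff:7.1f} Hz")
```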
-
During development it's super useful to do something like this:
```clojure
(trace "database state" conn port host)
```
and get back:
```js
console.group("database state")
console.log("conn"…
-
> Each PR can add ONE sound effect (audio file shorter than 10 seconds) and ONE music track (audio file longer than 10 seconds, maximum 10 minutes)
What if one feature (like an item) would require a n…
-
Colouring the audio waveform, using the spectral centroid of a sequence of short-interval samples to map frequencies to colours, can enhance content navigation and aid in understanding how the timbre …
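As a rough illustration of the idea (my own sketch; the frame size, the colormap, and the low-centroid-to-red / high-centroid-to-blue mapping are arbitrary choices):
```python
# Sketch: color each short frame of the waveform by its spectral centroid.
import numpy as np
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load(librosa.ex("trumpet"))  # any mono audio works
hop = 512
centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
# Normalize the centroid to [0, 1] for colormap lookup
norm = (centroid - centroid.min()) / (centroid.max() - centroid.min() + 1e-9)
colors = plt.cm.coolwarm(norm)

fig, ax = plt.subplots(figsize=(10, 3))
for i, c in enumerate(colors):               # one colored segment per frame
    seg = slice(i * hop, min((i + 1) * hop + 1, len(y)))
    t = np.arange(seg.start, seg.stop) / sr
    ax.plot(t, y[seg], color=c, linewidth=0.5)
ax.set(xlabel="time (s)", ylabel="amplitude",
       title="waveform coloured by spectral centroid")
plt.show()
```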
-
Hi, why not add speaker classification to the speaker encoder, or use a Speaker Verification feature? If I only use a speaker encoder, will there be any problems with timbre coupling?
-
Let the AI support timbre recognition, and then sing from the recognized song score.
-
Should allow for per-voice control of
- Pitch
- Timbre
- Pressure
- Volume
- Pan
Compatible hosts (at the moment) would be Cubase and Bitwig. A rough sketch of how these per-voice controls could be sent is below.
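A hedged sketch of sending those five controls per voice via MPE-style per-channel messages with mido; pitch bend, CC74 for timbre, and channel pressure are the usual MPE per-note trio, while mapping volume to CC7 and pan to CC10 on each member channel is my own assumption:
```python
# Sketch: in MPE each voice lives on its own MIDI channel, so pitch bend,
# CC74 (timbre), channel pressure, CC7 (volume) and CC10 (pan) all apply
# per note. Channel 0 is the MPE master; members start at channel 1.
import mido

out = mido.open_output()  # name a real port on an actual system

def play_voice(channel, note, pitch_bend=0, timbre=64, pressure=64,
               volume=100, pan=64):
    """Send one voice's note plus its per-voice expression on its own channel."""
    out.send(mido.Message("note_on", channel=channel, note=note, velocity=90))
    out.send(mido.Message("pitchwheel", channel=channel, pitch=pitch_bend))        # pitch
    out.send(mido.Message("control_change", channel=channel, control=74, value=timbre))  # timbre
    out.send(mido.Message("aftertouch", channel=channel, value=pressure))          # pressure
    out.send(mido.Message("control_change", channel=channel, control=7, value=volume))   # volume
    out.send(mido.Message("control_change", channel=channel, control=10, value=pan))     # pan

# Two simultaneous voices, each independently expressive:
play_voice(channel=1, note=60, pitch_bend=512, timbre=80, pan=20)
play_voice(channel=2, note=64, pressure=110, pan=100)
```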
-
Hi!
I've encountered a problem.
I have a multi-speaker dataset.
If I train a separate model per speaker (single-speaker models), then prosody, speed, intonation, timbre, and identity are all good (for the spe…
-
I have a question about the Timbre Transfer colab, specifically `audio_features = ddsp.training.metrics.compute_audio_features(audio)`
I did not see any API docs on this, hence asking here.
1. …
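For context, this is the minimal way I'm calling it, following the colab; the stand-in audio is my own placeholder, and the keys I mention are only what the colab itself appears to index later, not anything from API docs:
```python
import numpy as np
import ddsp.training

# Stand-in signal; the colab records or uploads 16 kHz mono audio instead.
audio = np.random.randn(16000).astype(np.float32)

audio_features = ddsp.training.metrics.compute_audio_features(audio)
# The colab later reads audio_features['loudness_db'] and audio_features['f0_hz'],
# so the returned dict presumably contains at least those keys.
print(list(audio_features.keys()))
```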