This project (naively) implements sequence modelling for concatenative audio synthesis using RNNs (GRU), largely inspired by techniques such as CataRT.
For more information, refer to the Wiki.
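At its core, the idea is to train a recurrent network to predict the next frame of audio descriptors from the preceding ones. The sketch below is a hypothetical PyTorch version of such a model; the class name, feature count, and layer sizes are illustrative assumptions, not this repo's actual code.

```python
# Hypothetical next-frame predictor: a GRU over sequences of descriptor
# frames, with a linear head mapping the hidden state back to feature space.
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    def __init__(self, n_features=16, hidden_size=128, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_size, n_features)

    def forward(self, x, h=None):
        # x: (batch, time, n_features) sequence of descriptor frames
        y, h = self.gru(x, h)
        return self.out(y), h

model = FramePredictor()
seed = torch.zeros(1, 1, 16)     # a single seed frame
pred, h = model(seed)            # predicted next frame, plus hidden state
```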
After cloning this repo, install the requirements:

```
pip install -r requirements.txt
```
For real-time generation and synthesis with any model, run:

```
python generate.py -p /Users/You/MC-FP/models/modelname
```

and open `synth.maxproj` in Max.
Although the demo is implemented in Max/MSP, models trained with this code can be used in any audio software environment that can parse JSON and communicate via OSC.
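For example, a non-Max environment (or a quick test script) could read the model's JSON output and stream it to a synthesis patch over OSC. This is a minimal sketch assuming the `python-osc` package; the file name, JSON layout, OSC address, and port are illustrative assumptions, not this repo's actual protocol.

```python
# Hypothetical OSC sender: reads descriptor frames from a JSON file and
# sends each one as an OSC message to a listening synthesis environment.
import json
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)      # host/port of your synth

with open("models/modelname/corpus.json") as f:  # hypothetical file name
    corpus = json.load(f)

for frame in corpus["frames"]:                   # hypothetical JSON key
    client.send_message("/frame", frame)         # one message per frame
```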
You can train your own model using `train.py`. The training notebook is yet to be updated to reflect recent changes in the repo.
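Under the hood, training presumably amounts to teacher-forced next-frame prediction over descriptor sequences. The loop below is a self-contained sketch of that setup; the shapes, hyperparameters, and random stand-in data are assumptions for illustration only.

```python
# Hypothetical training loop: predict descriptor frame t+1 from frames <= t.
import torch
import torch.nn as nn

gru = nn.GRU(16, 128, batch_first=True)   # 16 descriptor features per frame
head = nn.Linear(128, 16)                 # hidden state -> next frame
opt = torch.optim.Adam(list(gru.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

sequences = torch.randn(32, 100, 16)      # stand-in for real descriptor data

for epoch in range(10):
    inputs, targets = sequences[:, :-1], sequences[:, 1:]
    hidden, _ = gru(inputs)
    loss = loss_fn(head(hidden), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```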
After that, to use the model with Max, you will have to add the necessary files from the model folder to the Max project so the patch knows their location. There are two ways to do this.
You will also have to add the model name to the `items` attribute of the `umenu` object in the top-left corner of the patch.
To do so:

1. Find the `umenu` object in the patch; it is the drop-down menu in the top-left corner.
2. Click on the `umenu` object; it should now be highlighted.
3. In the object's Inspector, under the `Items` header, find the `Menu Items` attribute and add the name of your model to the list, separated by a comma.