wazenmai / MIDI-BERT

This is the official repository for the paper, MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding.

Inference #2

Closed joanroig closed 3 years ago

joanroig commented 3 years ago

Hello, I wanted to ask if there is a way to use this project for generating new pieces, like in EMOPIA or in the compound-word-transformer project, and if so, whether any source code is available. Thanks

sophia1488 commented 3 years ago

Hi, we currently don't support music generation. But since the model tries to reconstruct masked tokens during pre-training, you can think of reconstruction as a form of generation. The most similar research to this is FELIX (https://arxiv.org/abs/2003.10687) by Google, and there are related projects that mask around 75 tokens and let the model predict the masked ones.
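
As an illustration of that idea (this is not something the repo ships), here is a minimal sketch of "generation via masked-token reconstruction" in PyTorch. The tiny model, vocabulary size, mask id, and masking ratio below are all invented for the example; a real setup would load a pre-trained MidiBERT checkpoint and its tokenizer instead.

```python
# Minimal sketch (NOT MidiBERT's API): mask some positions in a token sequence
# and let a BERT-style encoder fill them in. The model here has random weights
# and a made-up vocabulary, purely to show the mask-and-predict loop.
import torch
import torch.nn as nn

VOCAB_SIZE = 512   # hypothetical symbolic-music vocabulary size
MASK_ID = 0        # hypothetical id of the [MASK] token
SEQ_LEN = 32

class TinyMaskedEncoder(nn.Module):
    """Stand-in for a pre-trained masked-LM encoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 128)
        layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(128, VOCAB_SIZE)  # one token prediction per position

    def forward(self, ids):
        return self.head(self.encoder(self.embed(ids)))  # (batch, seq, vocab)

model = TinyMaskedEncoder().eval()

# Start from an existing token sequence (random ids stand in for real tokens),
# mask a fraction of positions, and ask the model to predict them.
tokens = torch.randint(1, VOCAB_SIZE, (1, SEQ_LEN))
mask_positions = torch.rand(1, SEQ_LEN) < 0.25        # mask ~25% of positions
masked = tokens.masked_fill(mask_positions, MASK_ID)

with torch.no_grad():
    logits = model(masked)

filled = masked.clone()
filled[mask_positions] = logits.argmax(dim=-1)[mask_positions]  # fill in predictions
print(filled)  # the "reconstructed" (loosely, generated) sequence
```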

Also, another interesting project about music generation: https://github.com/YatingMusic/MuseMorphose. Thank you! 😀

joanroig commented 3 years ago

@sophia1488 Thanks for the answer and all the links, Sophia. Amazing work :)