music-x-lab / POP909-Dataset

This is the dataset repository for the paper: POP909: A Pop-song Dataset for Music Arrangement Generation
MIT License

Details on piano accompaniment generation conditioned on the melody task. #2

Closed · blackpaintedman closed this issue 4 years ago

blackpaintedman commented 4 years ago

Greetings.

Great job on the dataset! It's nice to see a high-quality piano performance dataset built from Chinese pop songs.

After reading through the paper, I'm curious about how POP909 and Music Transformer perform together, though I'm kind of a newbie to the TensorFlow/Magenta framework. The score2perf part of the documentation/code looks pretty daunting to me.

Would you kindly release the code for the experiment implementation (as described in Sections 5.1 & 5.2 of the paper), or briefly demonstrate the process?

Any response appreciated!

RetroCirce commented 4 years ago

The whole experiment is set up in this scenario: given a sequence {x_1, x_2, x_3, ..., x_n}, the transformer model is trained to predict the sequence {x_2, x_3, ..., x_{n+1}}. Each x is a token as we define in Section 5, similar to the music representation in Magenta's Performance RNN / Music Transformer.

Since we use the causal mask (as is usually done in the Transformer), the whole model, in one training pass, will do:

- given {x_1}, predict x_2
- given {x_1, x_2}, predict x_3
- ...
- given {x_1, ..., x_n}, predict x_{n+1}

all positions in parallel within a single forward pass (see the sketch below).
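As a minimal sketch of that training step (not our actual training code; the `model` interface with an `attn_mask` argument is just an assumption for illustration):

```python
import torch
import torch.nn.functional as F

def training_step(model, tokens):
    """Teacher-forced next-token prediction: thanks to the causal mask,
    one forward pass yields the prediction at every position at once."""
    inputs, targets = tokens[:, :-1], tokens[:, 1:]      # x_1..x_n / x_2..x_{n+1}
    n = inputs.size(1)

    # Causal mask: position i may only attend to positions <= i.
    causal_mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

    logits = model(inputs, attn_mask=causal_mask)        # (batch, n, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```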

That is the model we define in Sections 5.1 and 5.2. In 5.2, we add another step: we inspect the tokens predicted by the trained model, accept only the accompaniment tokens, and handle the other tokens with actions such as dropping them (details in 5.2). So the model generates only the accompaniment, conditioned on the given melody.
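A rough sketch of that conditional sampling loop, just to illustrate the idea (not the exact procedure in 5.2): `is_accompaniment_token` is a hypothetical helper that checks the token type, and we assume the model applies its causal mask internally at inference time.

```python
import torch

@torch.no_grad()
def generate_accompaniment(model, melody_tokens, max_steps=2048):
    """Sample from the trained model, but only accept accompaniment tokens
    into the output; other predicted tokens are simply dropped."""
    sequence = list(melody_tokens)        # condition on the given melody tokens
    accompaniment = []

    for _ in range(max_steps):
        inputs = torch.tensor(sequence).unsqueeze(0)      # (1, t)
        logits = model(inputs)[0, -1]                     # next-token logits
        token = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()

        if is_accompaniment_token(token):   # hypothetical token-type check
            accompaniment.append(token)
            sequence.append(token)
        # otherwise: drop the prediction, as described in 5.2

    return accompaniment
```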

If you want to implement the experiment or see the specific transformer structure we use, you can refer to this implementation by Jason. We included it in our paper references, and that is the Music Transformer structure we refer to. I have to say it is different from Google's Music Transformer structure (but Google does not provide publicly available self-training code for Music Transformer).

We use PyTorch for model training; you can also find a TensorFlow 2 version at the link above. Hope this helps you get started with it quickly.

blackpaintedman commented 4 years ago

Thanks a lot for the explanation! I'll definitely check them out and let you know if I get any significant results.

Nice work again!