HackerPoet / Composer

Generates video game music using neural networks.
https://youtu.be/UWxfnNXlVy8

(suggestion) Composer for pop music #24

Open ghost opened 5 years ago

ghost commented 5 years ago

First of all, I want to thank you for sharing this code. It's really impressive.

I trained my model using around 60 melodies of pop music and I am able to produce several catchy melodies. There are still some random notes scattered around, but that can be solved with a little polishing and more song samples.

I'm wondering if you could make a pop music generator using the same principle. The song MIDIs could be divided into three sections: verse, pre-chorus, and chorus, with another layer of autoencoder on top.

Model

As for the structure, it could be selected by the user in the live editor.
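A minimal sketch of the two-level idea, with the encoders stubbed out as random projections just to show the data flow (the real model would be trained networks, and the latent size and section names here are arbitrary assumptions):

```python
import numpy as np

# Sketch of the suggested hierarchy: a first-level autoencoder compresses
# each section (verse / pre-chorus / chorus) to a latent vector, and a
# second level arranges section latents into a full song according to a
# user-chosen structure. Encoders/decoders are random-projection stubs.

NOTES = 96            # piano-roll pitch axis, as in Composer's 96x96 input
STEPS = 96            # time steps per section
LATENT = 120          # per-section latent size (arbitrary choice)

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((NOTES * STEPS, LATENT)) * 0.01
W_dec = rng.standard_normal((LATENT, NOTES * STEPS)) * 0.01

def encode_section(piano_roll):
    """First-level encoder stub: 96x96 piano roll -> latent vector."""
    return piano_roll.reshape(-1) @ W_enc

def decode_section(z):
    """First-level decoder stub: latent vector -> 96x96 piano roll."""
    return (z @ W_dec).reshape(STEPS, NOTES)

# Second level: a song is a sequence of section labels; the user picks
# the structure in the live editor and each label maps to one latent.
section_latents = {name: rng.standard_normal(LATENT)
                   for name in ("verse", "pre-chorus", "chorus")}
structure = ["verse", "pre-chorus", "chorus", "verse", "chorus"]

song = np.concatenate([decode_section(section_latents[s]) for s in structure])
print(song.shape)  # (480, 96): five 96-step sections stacked in time
```

Reusing one latent per label is what makes repeated sections actually repeat; varying the latent slightly per repetition would give verse 1 vs. verse 2 variation.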

HackerPoet commented 5 years ago

This would be an excellent way to do it if the dataset conforms to this model. It makes a lot of assumptions such as all songs in the dataset having labeled sections as well as all sections being exactly 4 measures long. I'm not sure if a dataset like that exists, or if you're planning to spend a lot of human-hours labeling one, but I'd love to see the result! You may also want to add intro and outro to make 5 sections if that's an option.

ghost commented 5 years ago

Right now, I manually edit the MIDIs myself. I mostly take them from the internet and just remove the parts I don't need (e.g. the verse, pre-chorus, bassline). There are plenty of Synthesia-like piano tutorials on YouTube. Perhaps you could write some simple code that reads the video and translates it into MIDI; since they're just falling blocks, I think it'd be fairly easy. The tricky part is classifying the sections. I'm not sure if a tool for that exists right now, so I'm going to do it manually for now.
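The falling-blocks idea can be sketched without any video library: in a Synthesia-style video a note sounds when its block reaches the on-screen keyboard, so scanning one pixel row just above the keyboard in each frame tells you which keys are pressed. The scan-line position, key boundaries, and brightness threshold below are assumptions that would need tuning per video:

```python
import numpy as np

N_KEYS = 88
ROW_Y = 400            # y-coordinate of the scan line (assumed)
THRESHOLD = 200        # brightness above which a key counts as lit

def keys_lit(frame, key_edges):
    """Return the set of key indices whose scan-line pixels are lit.

    frame: (H, W) grayscale image as a numpy array.
    key_edges: length N_KEYS+1 array of x-coordinates separating keys.
    """
    row = frame[ROW_Y]
    lit = set()
    for k in range(N_KEYS):
        segment = row[key_edges[k]:key_edges[k + 1]]
        if segment.size and segment.mean() > THRESHOLD:
            lit.add(k)
    return lit

# Tiny synthetic frame: only key 40 is "lit" on the scan line.
width = 880
key_edges = np.linspace(0, width, N_KEYS + 1).astype(int)
frame = np.zeros((480, width), dtype=np.uint8)
frame[ROW_Y, key_edges[40]:key_edges[41]] = 255
print(sorted(keys_lit(frame, key_edges)))  # [40]
```

Diffing the lit set between consecutive frames gives note-on/note-off events, which map directly to a MIDI track once you know the video's frame rate.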

Intro and outro are also good ideas, but they're often just a simple chord or melody that you can derive from the chorus. I think it'd be best to leave them out at first to save time if you're manually classifying the sections. But with the help of a classifier, they'd be a great addition.

I'm a complete amateur at Python and machine learning stuff, but I'll see what I can do. Thanks!

ghost commented 5 years ago

@HackerPoet What does 'O' do when running live_edit? It seems like it's trying to output the model's reconstruction of the training MIDIs. I've noticed that most of the output has a bunch of mistakes, but some of it is so near-perfect that it's probably taken directly from the training MIDI files. If it is indeed outputting reconstructions of the training MIDIs, then I might have a mistake in my dataset.

I trained my model on 8-bar MIDIs, each spanning 2-4 octaves. When writing the code, did you put any limitations on the MIDI range? I saw that you use a 96x96 input, so I think 4 octaves should still be acceptable.
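My rough understanding of the 96x96 representation, sketched below: one axis is 96 time steps per sample and the other is 96 semitones, so 96 pitches cover 8 octaves and a 2-4 octave melody fits comfortably. The base-pitch offset is an assumption, not the repo's actual value:

```python
import numpy as np

STEPS, NOTES = 96, 96
BASE_PITCH = 16        # lowest MIDI pitch mapped to column 0 (assumed)

def to_piano_roll(events):
    """events: list of (step, midi_pitch) note-on pairs -> 96x96 roll."""
    roll = np.zeros((STEPS, NOTES), dtype=np.uint8)
    for step, pitch in events:
        col = pitch - BASE_PITCH
        if 0 <= step < STEPS and 0 <= col < NOTES:
            roll[step, col] = 1     # notes outside the range are dropped
    return roll

# A melody confined to 4 octaves (MIDI 48..95) fits easily:
melody = [(t, 48 + (t * 7) % 48) for t in range(96)]
roll = to_piano_roll(melody)
print(roll.shape, int(roll.sum()))  # (96, 96) 96
```

If any notes get silently dropped by the range check, the model would be training on a corrupted roll, which could explain reconstruction mistakes like the ones described above.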