yoavz / music_rnn

Music Language Modeling with Recurrent Neural Networks
http://yoavz.com/music_rnn
197 stars · 38 forks

How to extract melody and harmony for training from multiple sound tracks #1

Closed wangdelp closed 7 years ago

wangdelp commented 8 years ago

Hi, I have some MIDI files (https://freemidi.org/artist-1870-beyond#) I want to train on. They contain multiple tracks (piano, rock guitar, acoustic guitar, ...). It sounds like the input MIDI should only contain melody and harmony — how do I extract that information and feed the data in for training? Thank you.

wzds2015 commented 8 years ago

Hi wangdelp,

Have you solved this issue? I haven't tried yet, but I am going to do the same thing. My understanding is that we just need to extend the input API to produce a longer sequence. There is certainly more code that needs modifying, e.g. every place with a slice like data[:, :, :r]. Correct me if this is not enough.
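As a rough illustration of the idea above — this is a hedged sketch, not code from this repo: if each track is encoded as its own piano-roll block of width r along the last axis, then extending the input to n tracks means concatenating n blocks, and every slice like data[:, :, :r] generalizes to a per-track slice. The width r = 88 and the helper names here are assumptions for the example.

```python
import numpy as np

# Assumption: each track contributes an 88-pitch piano-roll block,
# stacked along the last axis of the [batch, time, features] tensor.
r = 88

def encode_timesteps(track_rolls):
    """Stack per-track piano rolls (each shaped [batch, time, r])
    into one tensor shaped [batch, time, n_tracks * r]."""
    return np.concatenate(track_rolls, axis=-1)

def track_slice(data, i):
    """Recover track i's block; generalizes slices like data[:, :, :r]
    (which becomes track_slice(data, 0))."""
    return data[:, :, i * r:(i + 1) * r]

# Two dummy tracks: batch of 1, 4 time steps.
melody = np.zeros((1, 4, r)); melody[0, 0, 60] = 1   # middle C at t=0
harmony = np.zeros((1, 4, r)); harmony[0, 0, 48] = 1  # C3 at t=0
data = encode_timesteps([melody, harmony])
print(data.shape)                       # (1, 4, 176)
print(track_slice(data, 0)[0, 0, 60])   # 1.0
print(track_slice(data, 1)[0, 0, 48])   # 1.0
```

Every other place in the training code that assumes two fixed blocks (loss terms, softmax heads) would need the same per-track generalization.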

wangdelp commented 8 years ago

Sorry, I did not try anything further after filing that issue.

yoavz commented 7 years ago

Unfortunately, I only wrote melody and harmony extraction code for the dataset I was experimenting on (Nottingham). It is possible to extend this model to more tracks (use n softmaxes for n tracks), but you'll have to write the code to encode and decode each time-step representation based on your dataset.
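For general multi-track MIDI (unlike Nottingham, where melody and harmony are already separated), you first need to decide which track is which. Here is a minimal sketch of one possible heuristic — this is an assumption for illustration, not the repo's method: treat the track with the highest mean pitch as the melody and the one with the lowest mean pitch as the harmony, ignoring unpitched tracks. The per-track pitch lists would come from a MIDI parser such as mido or pretty_midi; `pick_melody_and_harmony` is a hypothetical helper name.

```python
def mean_pitch(pitches):
    """Average MIDI note number of a track's note-on events."""
    return sum(pitches) / len(pitches)

def pick_melody_and_harmony(tracks):
    """tracks: dict mapping track name -> list of MIDI note numbers
    (one entry per note-on event, as extracted by a MIDI parser).
    Returns (melody_name, harmony_name) using a mean-pitch heuristic:
    highest register -> melody, lowest register -> harmony."""
    pitched = {name: mean_pitch(p) for name, p in tracks.items() if p}
    melody = max(pitched, key=pitched.get)
    harmony = min(pitched, key=pitched.get)
    return melody, harmony

# Hypothetical example: a rock arrangement like the ones in the question.
tracks = {
    "lead guitar": [72, 74, 76, 79],  # high register -> melody candidate
    "piano": [48, 52, 55, 48],        # low chords -> harmony candidate
    "drums": [],                      # unpitched, skipped
}
print(pick_melody_and_harmony(tracks))  # ('lead guitar', 'piano')
```

This breaks down when the melody and accompaniment overlap in register, so a real pipeline might also consider note density or polyphony per track; the chosen tracks would then be encoded as the model's melody and harmony time-step representations.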