This repository implements the single-track method (the 2nd method in the paper).
If you want an implementation of method 1, see here.
The preprocessing code is adapted from the PerformanceRNN re-built repository.
The preprocessing implementation repository is here.
$ git clone https://github.com/jason9693/MusicTransformer-pytorch.git
$ cd MusicTransformer-pytorch
$ git clone https://github.com/jason9693/midi-neural-processor.git
$ mv midi-neural-processor midi_processor
$ sh dataset/script/{ecomp_piano_downloader, midi_world_downloader, ...}.sh
$ python preprocess.py {midi_load_dir} {dataset_save_dir}
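Preprocessing converts each MIDI file into a flat sequence of event tokens in the PerformanceRNN style (note-on, note-off, time-shift, and quantized-velocity events). The sketch below illustrates that token layout; the exact ranges and the `event_to_token` helper are assumptions for illustration, not the repo's actual API — see midi_processor for the real encoding.

```python
# Hedged sketch of a PerformanceRNN-style event vocabulary.
# The range sizes below (128/128/100/32 -> 388 tokens) follow the
# PerformanceRNN convention and are assumptions about this repo's encoding.
RANGE_NOTE_ON = 128     # one token per MIDI pitch turned on
RANGE_NOTE_OFF = 128    # one token per MIDI pitch turned off
RANGE_TIME_SHIFT = 100  # quantized time advances (e.g. ~10 ms steps)
RANGE_VELOCITY = 32     # quantized velocity changes

def event_to_token(kind, value):
    """Map an (event kind, value) pair to a single vocabulary index."""
    if kind == 'note_on':      # value: MIDI pitch 0-127
        return value
    if kind == 'note_off':     # value: MIDI pitch 0-127
        return RANGE_NOTE_ON + value
    if kind == 'time_shift':   # value: 0-99
        return RANGE_NOTE_ON + RANGE_NOTE_OFF + value
    if kind == 'velocity':     # value: 0-31
        return RANGE_NOTE_ON + RANGE_NOTE_OFF + RANGE_TIME_SHIFT + value
    raise ValueError(f'unknown event kind: {kind}')
```

With these ranges the full vocabulary is 388 tokens, so a whole piece becomes one integer sequence suitable for a language-model-style Transformer.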
$ python train.py -c {config yml file 1} {config yml file 2} ... -m {model_dir}
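The key difference between the baseline Transformer and the Music Transformer trained here is memory-efficient relative attention, computed with the "skewing" trick from the Music Transformer paper: the product of the queries with the relative position embeddings is padded, reshaped, and sliced so that entry (i, j) lines up with relative distance j - i. A minimal NumPy sketch of that skew step (not the repo's actual code, which operates on batched PyTorch tensors):

```python
import numpy as np

def skew(qe):
    """Skew trick for relative attention.

    qe: (L, L) matrix Q @ Er.T, where column r corresponds to relative
    position r - (L - 1). Returns S_rel with S_rel[i, j] aligned to
    relative distance j - i (entries above the diagonal are garbage and
    are removed by the causal mask in practice).
    """
    L = qe.shape[0]
    padded = np.pad(qe, ((0, 0), (1, 0)))   # prepend a zero column: (L, L+1)
    reshaped = padded.reshape(L + 1, L)     # reinterpret the buffer: (L+1, L)
    return reshaped[1:]                     # drop the first row: (L, L)
```

This avoids materializing the O(L^2 * d) tensor of per-pair relative embeddings; only the O(L * d) embedding matrix Er is needed.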
Baseline Transformer (green, gray) vs. Music Transformer (blue, red)
Loss
Accuracy
$ python generate.py -c {config yml file 1} {config yml file 2} -m {model_dir}
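Generation is autoregressive: the model repeatedly predicts a distribution over the next event token, samples from it, and feeds the result back in. A hedged sketch of that loop, with `logits_fn` standing in for a forward pass of the trained model (the name and signature are illustrative, not the repo's API):

```python
import numpy as np

def sample_sequence(logits_fn, start_tokens, length, temperature=1.0, rng=None):
    """Illustrative autoregressive sampling loop.

    logits_fn(seq) -> (vocab,) array of next-token logits; stands in for
    the trained model's forward pass. Higher temperature flattens the
    distribution, lower temperature sharpens it.
    """
    rng = rng or np.random.default_rng(0)
    seq = list(start_tokens)
    for _ in range(length):
        logits = np.asarray(logits_fn(seq), dtype=np.float64)
        logits = logits - logits.max()            # stabilize the softmax
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        seq.append(int(rng.choice(len(probs), p=probs)))
    return seq
```

The sampled token sequence is then decoded back to MIDI by reversing the event tokenization from the preprocessing step.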