slSeanWU / MusDr

Evaluation metrics for machine-composed symbolic music. Paper: "The Jazz Transformer on the Front Line: Exploring the Shortcomings of AI-Composed Music through Quantitative Measures", ISMIR 2020
MIT License

Generate symbolic music from audio/midi #4

Open wenshaowu opened 3 years ago

wenshaowu commented 3 years ago

Hello, I ran into an error while trying to run all evaluation metrics on my own audio files: "{myaudio}_remi.csv" did not exist. How should "{myaudio}_remi.csv" (the same symbolic format as in musdr/testdata) be generated?

slSeanWU commented 3 years ago

Hi! You have to convert the pieces to REMI encodings before running the metrics. This will be difficult if you only have audio; in that case, you'll need to do "transcription", "beat detection", and "chord recognition" to convert your songs into beat-aligned MIDI, which can then be translated to REMI.
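Once you have beat-aligned MIDI, the MIDI-to-REMI step above can be sketched roughly as follows. This is a minimal illustration of the REMI event vocabulary (Bar / Position / Note-On / Note-Duration tokens), not the repo's actual converter; it assumes 4/4 time, a fixed MIDI resolution, and notes already given as `(onset_tick, pitch, duration_ticks)` tuples.

```python
from typing import List, Tuple

# Assumed constants, not taken from the repo:
TICKS_PER_BEAT = 480                        # assumed MIDI resolution
POSITIONS_PER_BAR = 16                      # REMI uses 16 positions per 4/4 bar
TICKS_PER_BAR = TICKS_PER_BEAT * 4
TICKS_PER_POS = TICKS_PER_BAR // POSITIONS_PER_BAR

def midi_notes_to_remi(notes: List[Tuple[int, int, int]]) -> List[str]:
    """Convert (onset_tick, pitch, duration_ticks) notes to REMI-like tokens."""
    tokens: List[str] = []
    current_bar = -1
    for onset, pitch, dur in sorted(notes):
        bar = onset // TICKS_PER_BAR
        while current_bar < bar:            # emit one Bar token per new bar
            tokens.append("Bar")
            current_bar += 1
        pos = (onset % TICKS_PER_BAR) // TICKS_PER_POS
        tokens.append(f"Position_{pos + 1}/16")
        tokens.append(f"Note-On_{pitch}")
        # quantize duration to multiples of the position grid
        tokens.append(f"Note-Duration_{max(1, round(dur / TICKS_PER_POS))}")
    return tokens
```

A real pipeline would also emit Tempo and Chord events (which is where the beat-detection and chord-recognition outputs come in) before the metrics can be run.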

Thanks.

wenshaowu commented 3 years ago

Thank you for the reply! While checking how the REMI files of WJazzD were produced, it seemed that mcsv_beat and mcsv_melody were necessary to obtain some of the REMI information, e.g. Chord-Type, Chord-Slash, MLU, etc. But some of that information is unique to the WJazzD dataset, such as phrase_begin, ideas, etc. To check whether I was on the right track, I tried to generate the beat and melody information using the compound-word-transformer repo, but I was unable to reproduce those data. Is there a way to reproduce mcsv_beat and mcsv_melody from a MIDI file (if I have one)?
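For the beat side of this question, part of an mcsv_beat-like table can be derived mechanically from a MIDI file's tempo and meter. The sketch below is a hedged illustration under a fixed-tempo, 4/4 assumption; the column names (`bar`, `beat`, `onset`) are illustrative guesses rather than the exact WJazzD schema, and the real mcsv_beat files also carry chord and form annotations that cannot be recovered from a plain MIDI file.

```python
from typing import List, Dict

def make_beat_rows(n_beats: int, bpm: float = 120.0,
                   beats_per_bar: int = 4) -> List[Dict]:
    """Build minimal beat-table rows from a fixed tempo and meter.

    Assumes constant tempo; a real MIDI file would require walking its
    tempo map to get per-beat onset times.
    """
    rows = []
    for i in range(n_beats):
        rows.append({
            "bar": i // beats_per_bar + 1,      # 1-indexed bar number
            "beat": i % beats_per_bar + 1,      # beat position within the bar
            "onset": round(i * 60.0 / bpm, 4),  # onset time in seconds
        })
    return rows
```

The melody side (mcsv_melody) is harder, since fields like phrase_begin and ideas come from WJazzD's manual annotations and have no counterpart in a MIDI file.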