Hello, and first of all, thank you for your work. I have read the final version of the paper and have been playing around with the model. Since this repository only contains code for unconditional generation, could you provide the code, or some steps to reproduce it, for the conditional model used for benchmarking in the paper? I am trying to compute some objective metrics, and there are not many useful ones for audio-domain music besides FAD, although I would also appreciate any input on that matter. Harmonic and percussive characteristics are not very informative, and other metrics (e.g., those computed on the unconditionally generated samples), such as the ones provided by the muspy library, are better suited to models focused on symbolic music generation. Thank you in advance.
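For reference, here is the rough approach I am using for FAD at the moment: a minimal sketch that computes the Fréchet distance between Gaussians fitted to two sets of precomputed audio embeddings (e.g., VGGish embeddings of the reference set and of the generated samples). The function name and the assumption that embeddings are already extracted are mine, not from the paper or this repo:

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two embedding sets.

    emb_ref, emb_gen: arrays of shape (n_clips, dim), assumed to be
    embeddings (e.g., VGGish) of reference and generated audio clips.
    """
    mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    sigma_r = np.cov(emb_ref, rowvar=False)
    sigma_g = np.cov(emb_gen, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary parts that numerical error can introduce.
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```

If this matches what you did for the paper's benchmarks, great; if you used a different embedding model or tooling, I would be glad to know which.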