Freddd13 closed this issue 1 year ago
Many thanks for asking! It seems to be a problem caused by the pickle library.
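For context, checkpoint-loading failures like this often come from a pickle protocol mismatch between the Python version that saved the file and the one loading it. A minimal standard-library sketch of the round-trip (the toy checkpoint dict is an illustration, not the repository's actual format):

```python
import io
import pickle

# Toy "checkpoint"; in practice this would be a model state dict.
checkpoint = {"epoch": 191, "step": 196992, "weights": [0.1, 0.2, 0.3]}

# Serialize with an explicit protocol. Newer protocols (e.g. protocol 5,
# Python 3.8+) are not readable by older interpreters, which is one
# common source of pickle errors when sharing checkpoints.
buf = io.BytesIO()
pickle.dump(checkpoint, buf, protocol=pickle.HIGHEST_PROTOCOL)

# Loading back in the same interpreter recovers the object unchanged.
buf.seek(0)
restored = pickle.load(buf)
```

Re-saving the checkpoint with a lower protocol (or from a matching Python version) sidesteps the mismatch.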
I've re-uploaded the checkpoint to Google Drive (please check the link and fixed several lines. Please check the latest version.
A sample transcribed score and a generation based on it are provided in the data folder.
@BetsyTang Thank you for your reply. I've tried the updated content and can now run inference. However, my result is strange: the velocity seems somewhat unstable. The sample result you provided has a duration of 1'27 while mine is only 1'14, and it's even worse on my own MIDI. I believe something is wrong, but I don't know why. Would you mind taking another look? Here is my result using the sample midi: processed_midi Here is my cmd:
python .\inference.py --ckpt_path .\models\checkpoint.pt --input_file C:\Users\Fred\Desktop\sample_generation.mid --output_file C:/Users/Fred/Desktop/sample_processed.mid --cuda_devices 0
Thank you very much.
Hi, thanks for the question. The input file should be the sample_transcribed_score.mid instead of the sample_generation.mid. The sample_generation.mid is the generation of the model given the transcribed score.
The provided model has some difficulty generalizing to other types of music, so it might not perform as well as it does on classical piano. Please check the paper for the dataset we used for training. I am currently working on a more generalized model. Hope it helps.
Oh, I made a mistake... Yeah, I also think it's related to the training dataset. In fact, my own MIDI is the ending song of a Japanese anime (something pop?). It's arranged for piano in a somewhat rock style following the original song, and I think there are many rhythm-focused groove parts compared to classical piano. I've uploaded my MIDI. Hope it helps.
My midi
BTW, this work is really interesting. I had a similar idea to do such things just days ago. I major in robotics and am just an amateur at piano playing and arranging, thus not familiar with DL algorithms. But today I found your work; what a coincidence!
Thank you for your reply again. Hope you can bring us more awesome work.
Thank you for your interest, and looking forward to any talk/discussion in the future.
Hi, I'm very interested in your work, but I had a problem running the inference. After installing torch 1.12 and the other requirements, when I run:
python .\inference.py --ckpt_path .\models\epoch=191-step=196992.ckpt --input_file C:/Users/Fred/Desktop/環-cycle-2showtmp.mid --output_file C:/Users/Fred/Desktop/cycle_processed.midi
I got an error. Where is bert_s2p_ioi_2? Besides, I found there's no hs in the inference parser. It seems there's something similar in the training code, so I added a --hs with a value of 128. Is that right? Looking forward to your reply!
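The workaround described above can be sketched with argparse. The parser below is a hypothetical mirror of the repo's inference.py, not its actual code; the --hs name and the default of 128 are taken from this thread, and the other flags from the commands quoted above:

```python
import argparse

# Hypothetical inference parser; the --hs (hidden size) flag is copied
# over from the training argument parser as described in the thread.
parser = argparse.ArgumentParser(description="Run inference on a MIDI file")
parser.add_argument("--ckpt_path", type=str, required=True)
parser.add_argument("--input_file", type=str, required=True)
parser.add_argument("--output_file", type=str, required=True)
parser.add_argument("--hs", type=int, default=128,
                    help="Hidden size; must match the trained checkpoint")

# Parse a sample command line (paths are placeholders).
args = parser.parse_args([
    "--ckpt_path", "models/checkpoint.pt",
    "--input_file", "sample_transcribed_score.mid",
    "--output_file", "sample_processed.mid",
])
```

Note that the value must match the hidden size the checkpoint was trained with, otherwise the state dict will fail to load.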