Open snowmint opened 2 years ago
Hi, sorry for the late response.
In the evaluation script, I only write score outputs for the `audio2score` and `joint` methods to the output folder. For multi-pitch detection (`audio2pr`), I simply ran the test but didn't save the outputs.
I just added you to another repository that provides the inference script for `audio2pr`. Could you check whether you received the invitation?
For running inference on other audio files:
- For `audio2pr`, you can use the `predict.py` script provided in the private repository I just shared.
- For `audio2score` or `joint`, I didn't add a separate script, but you can gather code from the relevant parts (`transcribers.py`, `test.py`, ...). Note that the model needs not only the audio as input but also the downbeat times, since it can only run inference on one bar at a time.

Hope it helps~ If anything is unclear or errors pop up, please do get in touch :D
PS - happy new year!
After reading your paper, I became interested in how this work is achieved.
After training and testing, the outputs folder is empty. Training ended at epoch 56; I then ran test.py and got the evaluation results shown in the attached text below, but the outputs folder is still empty.
I also wonder how to run inference on my own midi file and get the transcribed music score as output.
Thank you for taking the time to read my question.