GeorvityLabs opened this issue 1 year ago (status: open)
Hello @GeorvityLabs Thank you very much for the request. I wish I could, but I'm afraid it is difficult to prepare a Google colab because I am busy with work. I appreciate your understanding.
If you could give some instructions, I can try to make one
Maybe once the model checkpoint is released.
I'm wondering if you have read the README.md (https://github.com/sony/hFT-Transformer/blob/master/README.md)? The evaluation step explains how to transcribe audio. We've already shared the checkpoint, so I think you can try it yourself.
There are a lot of steps in the audio-preprocessing pipeline that exist solely for evaluation (e.g. conv_note2label.py), not for taking a new audio file and generating MIDI.
It would be great if you could include a script that we could run starting with a new dataset.
Or, if it is as simple as excluding one line of EXE-CORPUS-MAESTRO.sh, it would be great if you could let us know that, too. It's just really hard to read through such a large codebase and immediately understand what is going on where.
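For anyone who just wants MIDI from a new recording, the step that replaces the evaluation-only preprocessing is decoding the model's per-frame onset/frame predictions into note events. Below is a minimal, hypothetical sketch of such a decoder. To be clear, this is NOT the repo's actual post-processing: the thresholds, the 100 frames-per-second rate, and the lowest pitch (A0 = MIDI 21) are all assumptions you would need to match to the checkpoint's configuration.

```python
def decode_notes(onset_prob, frame_prob, onset_thresh=0.5, frame_thresh=0.5,
                 frames_per_second=100.0, lowest_pitch=21):
    """Greedy note decoding from piano-roll-style predictions.

    onset_prob / frame_prob: [n_frames][n_pitches] probabilities in [0, 1].
    Returns a list of (midi_pitch, onset_sec, offset_sec) tuples.
    All defaults are assumptions, not hFT-Transformer's actual settings.
    """
    n_frames = len(frame_prob)
    n_pitches = len(frame_prob[0]) if n_frames else 0
    notes = []
    for p in range(n_pitches):
        t = 0
        while t < n_frames:
            if onset_prob[t][p] >= onset_thresh:
                start = t
                t += 1
                # Sustain the note while the frame activation stays high
                # and no new onset fires for the same pitch.
                while (t < n_frames and frame_prob[t][p] >= frame_thresh
                       and onset_prob[t][p] < onset_thresh):
                    t += 1
                notes.append((lowest_pitch + p,
                              start / frames_per_second,
                              t / frames_per_second))
            else:
                t += 1
    notes.sort(key=lambda n: (n[1], n[0]))
    return notes
```

The resulting (pitch, onset, offset) tuples can then be written to a .mid file with a library such as pretty_midi, which is what a Colab notebook would do as its final step.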
@GeorvityLabs Have you made a sample that loads a .wav or .mp3 piano audio file and gets the MIDI output? Many thanks!
@KeisukeToyama Great work! I hope you can create a Google Colab inference notebook where we could use the Colab T4 GPU. You could give the user the option to load a .wav or .mp3 piano audio file from their computer and then get the MIDI output using hFT-Transformer.