cheriell / ICASSP2021-A2S

accompanying code for my ICASSP2021 paper

After training and testing the outputs folder is empty; how to run inference with our own audio? #3

Open snowmint opened 2 years ago

snowmint commented 2 years ago

After reading your paper, I became interested in how this work is achieved.

After training and testing, the outputs folder is empty. Training ended at epoch 56; I then ran test.py and got the evaluation shown in the attached text below, but the outputs folder is empty.

I also wonder how to run inference on our own audio files and get the transcribed music score as output.

```
(base) ilc@ilc:~/Desktop/workplace/ICASSP2021-A2S$ python test.py audio2pr --dataset_folder ./MuseSyn --feature_folder ./MuseSyn/features --model_checkpoint ./tensorboard_logs/audio2pr-VQT-bins_per_octave=60-n_octaves=8-gamma=20/version_0/checkpoints/epoch=56-valid_loss=43.3211.ckpt
Get train metadata, 4 pianos
Get valid metadata, 4 pianos
Get test metadata, 4 pianos
GPU available: True, used: True
TPU available: None, using: 0 TPU cores
Preparing spectrogram 672/672
Preparing pianoroll 672/672
Preparing spectrogram 80/80
Preparing pianoroll 80/80
Preparing spectrogram 84/84
Preparing pianoroll 84/84
The following callbacks returned in LightningModule.configure_callbacks will override existing callbacks passed to Trainer: ModelCheckpoint
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Get test dataloader
Testing: 100%|███████████████████████████████| 458/458 [07:21<00:00, 1.04it/s]
```

```
DATALOADER:0 TEST RESULTS
{'logs': {'test_accuracy': 0.7291517294943333,
          'test_epoch': 0,
          'test_f-score': 0.8169069737195969,
          'test_f-score_n_on': 0.6852234803499391,
          'test_f-score_n_onoff': 0.4228041814737893,
          'test_loss': 89.91925048828125,
          'test_precision': 0.9348128437995911,
          'test_precision_n_on': 0.8518075491759702,
          'test_precision_n_onoff': 0.5432599993745504,
          'test_recall': 0.766946617513895,
          'test_recall_n_on': 0.631439393939394,
          'test_recall_n_onoff': 0.3829577285459639},
 'loss': 89.91925048828125,
 'test_accuracy': 0.8398997187614441,
 'test_epoch': 0.0,
 'test_f-score': 0.9031959772109985,
 'test_f-score_n_on': 0.8432270884513855,
 'test_f-score_n_onoff': 0.664776623249054,
 'test_loss': 45.26031494140625,
 'test_precision': 0.9279804229736328,
 'test_precision_n_on': 0.920455813407898,
 'test_precision_n_onoff': 0.7133415937423706,
 'test_recall': 0.8941190838813782,
 'test_recall_n_on': 0.8031598329544067,
 'test_recall_n_onoff': 0.6364219784736633}
```


Thank you for taking the time to read my question.

cheriell commented 2 years ago

Hi, sorry for the late response.

In the evaluation script, I only write score outputs for the audio2score and joint methods to the outputs folder. For multi-pitch detection (audio2pr), I simply ran the test metrics but didn't save the outputs.
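In the meantime, if you need note-level outputs from audio2pr, a thresholded pianoroll prediction can be turned into note events with a small helper. This is a hypothetical sketch rather than code from this repo; the frame rate and MIDI pitch offset are assumptions you would adjust to the model's actual settings:

```python
def pianoroll_to_notes(pianoroll, frame_rate=100, pitch_offset=21):
    """Convert a binary pianoroll (pitch x time) into
    (midi_pitch, onset_seconds, offset_seconds) tuples."""
    notes = []
    for row, frames in enumerate(pianoroll):
        onset = None
        # Append a sentinel 0 so a note still active at the end gets closed.
        for t, active in enumerate(list(frames) + [0]):
            if active and onset is None:
                onset = t                      # note starts at this frame
            elif not active and onset is not None:
                notes.append((row + pitch_offset,
                              onset / frame_rate,
                              t / frame_rate)) # note ends at this frame
                onset = None
    return notes

# Example: the lowest pitch active for frames 2..4 at 10 frames/second
roll = [[0, 0, 1, 1, 1, 0, 0]]
print(pianoroll_to_notes(roll, frame_rate=10))  # [(21, 0.2, 0.5)]
```

The resulting note list could then be written to MIDI or compared against a reference with standard transcription metrics.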

I just added you to another repository that provides the inference script for audio2pr; could you check whether you have received the invitation?

For running inference on other audio files:
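The general shape is to compute the same input features as in training and batch them for the model. As a hypothetical illustration (not the repo's actual preprocessing script), assume a (bins × frames) spectrogram with 480 VQT bins (60 bins per octave × 8 octaves, matching the checkpoint name) and a made-up segment length of 625 frames:

```python
import numpy as np

def segment_spectrogram(spec, segment_frames=625, hop_frames=625):
    """Split a (bins x frames) spectrogram into fixed-length segments,
    zero-padding the last one. Returns (n_segments, bins, segment_frames)."""
    n_bins, n_frames = spec.shape
    segments = []
    for start in range(0, n_frames, hop_frames):
        chunk = spec[:, start:start + segment_frames]
        if chunk.shape[1] < segment_frames:
            # Pad the trailing segment with zeros on the time axis.
            chunk = np.pad(chunk, ((0, 0), (0, segment_frames - chunk.shape[1])))
        segments.append(chunk)
    return np.stack(segments)

# 480 VQT bins, 1500 frames -> 3 segments, last one zero-padded
spec = np.random.rand(480, 1500).astype(np.float32)
batch = segment_spectrogram(spec, segment_frames=625)
print(batch.shape)  # (3, 480, 625)
```

Each segment can then be fed through the loaded checkpoint, and the per-segment predictions concatenated back along the time axis.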

Hope it helps~ If anything else is unclear or errors pop up, please do get in touch :D

PS - happy new year!