Closed Akai-ai closed 1 year ago
Hi! All the features we have implemented are unit tested.
You can convert MFCCs back to a waveform audio signal using librosa.feature.inverse.mfcc_to_audio. As a naive approach to measuring the reconstruction error, you can calculate the RMSE and/or MAE between the original and reconstructed waveforms.
When I extract the MFCC and LPCC feature matrices, how can I verify that there is no problem with my extraction? Is it possible to invert the features back to the signal and calculate their relative errors?