skewballfox opened this issue 1 year ago
Thanks for the positive feedback!
Yep, I'd definitely welcome that. I included .npy output for the filter banks, but only what's in the Whisper project for their particular parameters (https://github.com/wavey-ai/mel-spec/blob/main/src/lib.rs#L29). I'd welcome expanding these with more .npy test data.
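For context, fixtures like the existing ones can be regenerated with librosa along these lines (a sketch only; sr=16000, n_fft=400, n_mels=80 are Whisper's usual parameters, and the output filename is illustrative):

```python
# Sketch: generate a mel filter bank .npy fixture with librosa.
# sr=16000, n_fft=400, n_mels=80 mirror Whisper's usual parameters;
# the filename is illustrative.
import librosa
import numpy as np

fb = librosa.filters.mel(sr=16000, n_fft=400, n_mels=80)
np.save("mel_filters_16000_400_80.npy", fb)  # shape (80, 201)
```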
Ditto contributions. I have moved things around a bit, but I'll be merging my PR tomorrow, and that will be the end of the initial churn, so feel free to PR anything after those changes are in and things are stable.
So I wrote some Python code to generate the test data and place it in a nested directory structure (tests can vary by either matrix or configuration input), but I haven't written a library for automatically generating the tests.
How much data do you want for each function you're trying to match (mel spec and its dependencies), and how much manual extraction are you comfortable with? I have some ideas for a library for generating and running tests, but I'm still trying to gauge whether this is a common enough problem to warrant one.
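For a rough idea, the generator script looks something like this (directory layout, parameter set, and file names are all illustrative, not what I actually committed):

```python
# Illustrative sketch of the fixture generator: compute librosa reference
# output for one parameter set and store it under a nested directory.
from pathlib import Path

import librosa
import numpy as np

SR, N_FFT, HOP, N_MELS = 16000, 400, 160, 80

out_dir = Path("fixtures") / f"sr{SR}_nfft{N_FFT}_nmels{N_MELS}"
out_dir.mkdir(parents=True, exist_ok=True)

# Deterministic input so the comparison is reproducible on the Rust side.
rng = np.random.default_rng(42)
signal = rng.standard_normal(SR).astype(np.float32)  # 1 second of noise

expected = librosa.feature.melspectrogram(
    y=signal, sr=SR, n_fft=N_FFT, hop_length=HOP, n_mels=N_MELS
)

# Keep each case's input and expected output together in one archive.
np.savez(out_dir / "case0.npz", input=signal, expected=expected)
```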
The current test that compares against Python uses an .npz archive of the Python output and loads it into Rust with ndarray_npy. I think that might be the way to go (or individual .npy files, so that the test fixtures have more visibility).
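If individual .npy files win out for visibility, the Python side can just split the archives, e.g. (paths illustrative):

```python
# Sketch: re-export an .npz test case as separate .npy files.
import numpy as np

case = np.load("fixtures/sr16000_nfft400_nmels80/case0.npz")
np.save("case0_input.npy", case["input"])
np.save("case0_expected.npy", case["expected"])
```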
Awesome project. If I were able to generate data from librosa's Python functions (input and output .npy files) to compare against, would you be interested in including those as a test case?
I started working on a tool for generating files like this for the library I've been working on, after realizing that none of the existing Python functions for computing an MFCC produce anywhere near the same results, and that models may be somewhat overfit to the preprocessing step. But I'm less of a mathematician and more of a code monkey, so I'm just guessing.
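For example, a quick comparison along these lines shows the divergence (librosa vs. python_speech_features, with only nominally aligned parameters, so treat it as illustrative):

```python
# Illustrative check: MFCCs from two Python libraries on the same signal
# with nominally similar settings; the outputs differ substantially.
import numpy as np
import librosa
from python_speech_features import mfcc as psf_mfcc

sr = 16000
rng = np.random.default_rng(0)
y = rng.standard_normal(sr).astype(np.float32)  # 1 second of noise

a = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, frames)
b = psf_mfcc(y, samplerate=sr, numcep=13).T      # shape (13, frames)

# Frame counts differ (librosa centers/pads), so compare the overlap.
frames = min(a.shape[1], b.shape[1])
print("max abs difference:", np.abs(a[:, :frames] - b[:, :frames]).max())
```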
Also, are you interested in contributors?