Jakobovski / free-spoken-digit-dataset

A free audio dataset of spoken digits. An audio version of MNIST.

Normalize the recordings to have the same number of channels #10

Closed · cesarsouza closed this issue 7 years ago

cesarsouza commented 7 years ago

It seems that the recordings done by Jackson have only one audio channel (mono), but the recordings by Nicolas have two audio channels (stereo). I've noticed that the main README.md on the project's front page gives no guidelines about how many channels the recordings should have. As such, I would like to know whether the recordings in this dataset are allowed to have different numbers of channels, or whether there are plans to normalize them so that they all have a single channel. In either case, I suppose the contribution guidelines could be extended with this information.
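For reference, something like the following (a rough, untested sketch using Python's standard `wave` module, and assuming the WAV files sit in the repository's `recordings/` folder) could be used to list the non-mono files:

```python
# List every recording that is not mono.
# Rough sketch: assumes the WAV files are in the repository's "recordings" folder.
import wave
from pathlib import Path

for path in sorted(Path("recordings").glob("*.wav")):
    with wave.open(str(path), "rb") as wav:
        channels = wav.getnchannels()
    if channels != 1:
        print(f"{path.name}: {channels} channels")
```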

Regards, Cesar

Jakobovski commented 7 years ago

I don't see any reason to have stereo samples. I can't imagine stereo being useful for anyone using this dataset. Therefore I think it best for all audio samples to be made mono.

Currently, there are no plans to normalize the samples. If you would like to do it and update the README, I would be happy to merge.
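If someone does pick this up, a rough sketch of the conversion (untested, assuming scipy is available and the files sit in `recordings/`) could look like this:

```python
# Rewrite any stereo recording in place as mono by averaging its channels.
# Rough sketch only: assumes scipy/numpy and the repository's "recordings" folder.
from pathlib import Path
from scipy.io import wavfile

for path in Path("recordings").glob("*.wav"):
    rate, data = wavfile.read(str(path))
    if data.ndim == 2:  # stereo: shape (n_samples, n_channels)
        mono = data.mean(axis=1).astype(data.dtype)
        wavfile.write(str(path), rate, mono)
```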

BTW what are you using this dataset for?

cesarsouza commented 7 years ago

Hi @Jakobovski, thanks for the answer! I am not sure I will have the time to normalize the samples either, but I could try to update the README with instructions for future contributors.

Regarding the dataset use, I am currently using it to create an example of how to do audio classification with Bag-of-Audio-Words and MFCC features for the Accord.NET Framework. The interface I mentioned in issue #9 will be included as part of the project so that users can quickly download the dataset and train and test classification models on it.
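For context, the pipeline is roughly the following (sketched here in Python with librosa and scikit-learn just to illustrate the idea; the actual example is written in C# against Accord.NET, so the names and parameters below are illustrative only):

```python
# Illustrative Bag-of-Audio-Words pipeline over MFCC frames:
# extract MFCCs per recording, quantize frames against a k-means codebook,
# and classify the resulting histograms with an SVM.
# This is NOT the Accord.NET code, just a rough Python equivalent.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def mfcc_frames(path, n_mfcc=13):
    """Per-frame MFCC vectors for one recording, shape (n_frames, n_mfcc)."""
    signal, rate = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=signal, sr=rate, n_mfcc=n_mfcc).T


def bag_of_words(frames, codebook):
    """Quantize frames and return a normalized word histogram."""
    words = codebook.predict(frames)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()


def train(paths, labels, n_words=32):
    """Learn the codebook from all training frames, then fit an SVM."""
    all_frames = np.vstack([mfcc_frames(p) for p in paths])
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(all_frames)
    features = np.array([bag_of_words(mfcc_frames(p), codebook) for p in paths])
    classifier = SVC(kernel="rbf").fit(features, labels)
    return codebook, classifier
```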

By the way, do you have any baseline numbers of test set accuracy that I could compare against?

Regards, Cesar

Jakobovski commented 7 years ago

Nice. It has been merged.

No, I don't have a baseline, although I did some work with FSDD in this repo: https://github.com/Jakobovski/decoupled-multimodal-learning

cesarsouza commented 7 years ago

Hi @Jakobovski,

Just a small update: I've made a new release of the Accord.NET Framework that adds the Free Spoken Digits Dataset to the list of datasets that can be downloaded and used from C#. The documentation page for the Accord.DataSets.FreeSpokenDigitsDataset class can be found here: http://accord-framework.net/docs/html/T_Accord_DataSets_FreeSpokenDigitsDataset.htm.

I've also added an example showing how to use the FSDD to learn audio classification models with BoAW and MFCC (bottom of that page). For anyone who might be interested: without optimizing hyper-parameters, I've been able to achieve 0.97 training accuracy and 0.86 testing accuracy (on the training and testing sets specified by the FSDD).
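For anyone reproducing this, the split is the one provided by the FSDD itself. If I read the README rule correctly (recordings with index 0 through 4 of each digit and speaker form the test set, the rest the training set), applying it amounts to something like:

```python
# Sketch of applying FSDD's provided split, assuming the
# {digit}_{speaker}_{index}.wav naming and the README rule that
# indices 0-4 are the test set and the remaining indices the training set.
from pathlib import Path

train_paths, test_paths = [], []
for path in sorted(Path("recordings").glob("*.wav")):
    index = int(path.stem.split("_")[-1])
    (test_paths if index <= 4 else train_paths).append(path)
```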

I am sure those numbers could easily be improved with a better hyper-parameter search, better normalization options, or a better representation (such as Fisher Vectors instead of Bag-of-Words), or, of course, by using deep learning models instead of plain SVMs.

Regards, Cesar

Jakobovski commented 7 years ago

Very cool. If you want, you can add Accord.NET to the FSDD README.

cesarsouza commented 7 years ago

Thanks @Jakobovski! Since you've allowed it, I've just done that with #13.