TobyRoseman opened 4 years ago
It appears that Core ML's SoundAnalysisPreprocessing is really built to be used as an internal part of a Sound Classifier model generated by the Create ML app. The parameters (input length, window size, step size, etc.) are not configurable at all.
Out of curiosity, is SoundAnalysisPreprocessing built off of TCSoundClassifierPreprocessing? The hardcoded parameters all seem to be exactly the same, so is it actually the same custom layer, now promoted to an official Core ML layer type?
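To make the question concrete, here is a minimal sketch of the fixed windowing step such a preprocessing front end performs. The constants below are assumptions in the style of a VGGish-like front end (16 kHz audio, 25 ms windows, 10 ms hop); they are illustrative only and are not read from the actual layer, whose hardcoded values are not documented.

```python
import numpy as np

# Assumed, VGGish-style constants -- NOT the layer's actual values.
SAMPLE_RATE = 16000   # assumed input sample rate (Hz)
WINDOW_SIZE = 400     # assumed 25 ms analysis window, in samples
HOP_SIZE = 160        # assumed 10 ms step between windows, in samples

def frame_signal(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D signal into overlapping frames of WINDOW_SIZE
    samples, advancing HOP_SIZE samples between consecutive frames."""
    n_frames = 1 + (len(signal) - WINDOW_SIZE) // HOP_SIZE
    idx = (np.arange(WINDOW_SIZE)[None, :]
           + HOP_SIZE * np.arange(n_frames)[:, None])
    return signal[idx]

audio = np.zeros(SAMPLE_RATE)   # one second of silence
frames = frame_signal(audio)
# For a 16000-sample input this yields 98 frames of 400 samples each.
```

Because all three constants are baked in rather than exposed as layer parameters, the only way to change the framing would be to swap in a different preprocessing model entirely.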
Currently the exported Core ML model for a sound classifier uses a custom model to do the preprocessing. Core ML now has this functionality built in, and we should use it. However, we probably also want to keep the current functionality as well, so we can support Core ML version 3.
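A minimal sketch of the version-gating logic this implies, with hypothetical names (this is not Turi Create's actual API): exports targeting a runtime that supports the built-in model type use it, while older targets fall back to the existing custom layer.

```python
def choose_preprocessing(builtin_supported: bool) -> str:
    """Pick the preprocessing path for an exported sound classifier.

    builtin_supported: whether the deployment target's Core ML runtime
    supports the built-in SoundAnalysisPreprocessing model type.
    Returned strings are illustrative labels, not real identifiers.
    """
    if builtin_supported:
        # Prefer the built-in model type on newer runtimes.
        return "builtin_sound_analysis_preprocessing"
    # Fall back to the custom layer so older exports keep working.
    return "custom_preprocessing_layer"
```

Keeping both paths means the exporter only needs a single boolean decision at export time, at the cost of maintaining the custom layer alongside the built-in one.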
In order to support this, we need to be able to use the current version of Core ML (#2860). We will also need to update our User Guide.