C++ library, with Python bindings, for applying similarity measures and classification to the results of audio analysis. Together with Essentia it can be used to compute high-level descriptions of music.
If I take some data files from the music extractor and reduce the precision of the floating point numbers, e.g. from 15 decimal places to 8, some numbers are already so small that they get rounded to 0.
When this happens, the following error intermittently occurs during training.
Is the cleaner transformation automatically applied to all items before they pass through the other filters, or should we define it explicitly in the project file?
ERROR ClassificationTask | While doing evaluation with param = {'kernel': 'RBF', 'C': -3, 'balanceClasses': False, 'preprocessing': 'normalized', 'type': 'C-SVC', 'classifier': 'svm', 'gamma': -7}
evaluation = [{'type': 'nfoldcrossvalidation', 'nfold': 5}]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gaia2/classification/classificationtask.py", line 204, in <module>
task.run(*config)
File "/usr/local/lib/python2.7/dist-packages/gaia2/classification/classificationtask.py", line 187, in run
confusion = evaluateNfold(evalparam['nfold'], ds, gt, trainerFun, seed=seed, **trainingparam)
File "/usr/local/lib/python2.7/dist-packages/gaia2/classification/evaluation.py", line 120, in evaluateNfold
classifier = trainingFunc(trainds, traingt, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/gaia2/classification/classifier_SVM.py", line 36, in train_SVM
ds = transform(ds, 'normalize', { 'independent': True })
File "/usr/local/lib/python2.7/dist-packages/gaia2/__init__.py", line 6151, in transform
return analyzer.analyze(dataset).applyToDataSet(dataset)
File "/usr/local/lib/python2.7/dist-packages/gaia2/__init__.py", line 2912, in analyze
return _gaia2.Analyzer_analyze(self, *args)
Exception: Normalize: Apply "cleaner" transformation before normalization. Division by zero in .lowlevel.melbands.var
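The rounding behaviour itself is easy to reproduce in plain Python. The snippet below is a minimal sketch (the variance value is a made-up example, not taken from a real extractor output) showing how a tiny descriptor value such as .lowlevel.melbands.var collapses to 0 when precision is cut to 8 decimal places, which is what later makes the independent normalization divide by zero:

```python
# Hypothetical tiny variance value, of the kind the music extractor
# writes for .lowlevel.melbands.var with 15 decimal places of precision.
var = 1.234e-9

# Reducing the precision to 8 decimal places rounds it to exactly 0.
rounded = round(var, 8)
print(rounded)  # 0.0

# Normalization with {'independent': True} scales each descriptor by its
# own spread; once that spread is exactly 0, the division by zero above
# is raised unless the zero-variance descriptor was removed first.
print(rounded == 0.0)  # True
```

This is why the exception suggests running the cleaner transformation first: a descriptor whose variance has been rounded to exactly 0 has no spread left to normalize by.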