tuwien-musicir / rp_extract

Rhythm Pattern music feature extractor by IFS @ TU-Vienna
GNU General Public License v3.0
110 stars · 27 forks

Perform feature extraction on a descriptor file instead of an audio file #26

Closed NaturalFigurehead closed 4 years ago

NaturalFigurehead commented 5 years ago

Is there a way to train a model, then perform feature extraction on a descriptor file of an audio file instead of the actual audio file itself?

My problem is that users will submit their audio files to me for analysis. I analyse them and send back the results. Then I delete the audio file because it's not mine and I can't store it. But if I can export some sort of descriptor file for that audio file, then later I'd like to be able to train a new model and extract the features for that descriptor.

Is something like this possible with this library?

slychief commented 5 years ago

Hi,

I'm not quite sure I understand your question. When you say you are analysing tracks, do you mean that you perform a classification or detection task, such as genre classification or mood prediction?

Which modules are you using from this repository?

audiofeature commented 5 years ago

Hi Oliver,

Our library lets you perform these two steps:

1) feature extraction (using rp_extract.py or rp_extract_batch.py)

2) classification (using rp_classify.py)

Yes, you can store the results of step 1 and do or redo classification (step 2) later. rp_extract_batch.py stores the features in a CSV or HDF5 file; you could also build a wrapper around rp_extract.py to store the features in a database.
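The wrapper idea above can be sketched as follows: persist each track's feature vector the moment it is extracted, so the audio file itself can be deleted afterwards. This is a minimal sketch with a plain CSV layout (filename in the first column, values after it); the track name and vector length are made up, and in practice the vector would come from rp_extract rather than random numbers.

```python
import csv
import io

import numpy as np

def save_features(fh, track_id, feature_vector):
    """Append one track's features as a CSV row: id, v1, v2, ..."""
    csv.writer(fh).writerow([track_id] + [float(v) for v in feature_vector])

def load_features(fh):
    """Read rows back into a {track_id: feature vector} dict."""
    fh.seek(0)
    return {row[0]: np.array(row[1:], dtype=float) for row in csv.reader(fh)}

# Placeholder vector; in practice this would be a descriptor computed by
# rp_extract on the user's audio before the audio file is deleted.
feats = np.random.rand(168)

buf = io.StringIO()
save_features(buf, "user_track_001.mp3", feats)
stored = load_features(buf)
```

The same pattern works against a database table instead of a CSV buffer; only the save/load functions change.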


NaturalFigurehead commented 5 years ago

@audiofeature Cool, that's what I was looking for. It looks like this uses the classify function? Where do I get the model object to pass into it? And for the feature, do I just pass a path to one of the three feature files?

I'd read through this tutorial (http://nbviewer.ipython.org/github/tuwien-musicir/rp_extract/blob/master/RP_extract_Tutorial.ipynb) but it doesn't seem to be working.

audiofeature commented 5 years ago

I have pushed a .v4 file of the tutorial, in case a read error with a newer Jupyter version was the problem.

You can follow these steps:

1) Feature extraction:

./rp_extract_batch.sh

It will extract the default features RP, SSD and RH and create three output files with those extensions (use other parameters to get other feature types).

Alternatively, use "from rp_extract import rp_extract" directly in your code.
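Once the three output files exist, one common follow-up is to join them into a single descriptor per track. This sketch assumes a simple layout with the filename in the first column and comma-separated values after it; the file contents and extensions here are made up for illustration, so inspect your actual .rp/.ssd/.rh files before relying on this.

```python
import io

import numpy as np

# Hypothetical contents of two of the output files written by
# rp_extract_batch; the real layout may differ slightly.
files = {
    "rh":  io.StringIO("music/a.mp3,0.1,0.2\nmusic/b.mp3,0.3,0.4\n"),
    "ssd": io.StringIO("music/a.mp3,1.0,1.1\nmusic/b.mp3,1.2,1.3\n"),
}

def read_feature_file(fh):
    """Assume filename in column 0, feature values in the remaining columns."""
    return {line.split(",")[0]: np.array(line.strip().split(",")[1:], dtype=float)
            for line in fh if line.strip()}

# Concatenate the per-file vectors into one combined descriptor per track.
tables = {ext: read_feature_file(fh) for ext, fh in files.items()}
combined = {track: np.concatenate([tables[ext][track] for ext in sorted(tables)])
            for track in tables["rh"]}
```

Sorting the extensions keeps the concatenation order stable across runs, which matters if a model is trained on the combined vectors.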

2) Classification

a) train your own model (refer to README.md -> "Train a model"):

./rp_classify.py --classfile

This will analyze the audio files as in step 1, then create a classifier model using an SVM classifier. For classfile, you have to provide a tab-separated file where each input filename is listed with its relative path, followed by a tab and the genre category label (string).
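The tab-separated class file described above is easy to generate and check programmatically. A small sketch, assuming one "relative/path&lt;TAB&gt;label" entry per line; the paths and genre labels here are made up:

```python
import io

# Hypothetical ground-truth labels: relative path -> genre category.
labels = {
    "music/blues/track01.mp3": "blues",
    "music/rock/track02.mp3": "rock",
}

def write_classfile(fh, labels):
    """One line per track: path, a tab, then the label."""
    for path, genre in labels.items():
        fh.write(f"{path}\t{genre}\n")

def read_classfile(fh):
    """Parse the file back into a {path: label} dict."""
    fh.seek(0)
    return dict(line.rstrip("\n").split("\t") for line in fh if line.strip())

buf = io.StringIO()
write_classfile(buf, labels)
roundtrip = read_classfile(buf)
```

Generating the file from a dict like this avoids the most common mistake: using spaces instead of a real tab character between path and label.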

b) make predictions / classifications:

./rp_classify.py

rp_classify.py takes:

- an individual audio file or a folder
- your own model file, if you created one in step 2.a; if omitted, a pretrained model from the models/GTZAN.* folder will be used (this one was, however, generated with sklearn 0.17, so you'd have to downgrade to that version to use it)
- a file to write the predictions to; if omitted, predictions will be printed on screen

best
Thomas
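The point of the whole workflow is that stored descriptors are enough to train and apply a classifier later, without the original audio. As a self-contained illustration, here is a toy nearest-centroid classifier standing in for the SVM that rp_classify trains; the descriptors, dimensions and genre labels are synthetic placeholders, not output of the real library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in descriptors: in practice these would be RP/SSD/RH vectors
# loaded from the CSV files produced in step 1.
train_X = np.vstack([rng.normal(0.0, 0.3, (10, 8)),    # class "ambient"
                     rng.normal(2.0, 0.3, (10, 8))])   # class "metal"
train_y = np.array(["ambient"] * 10 + ["metal"] * 10)

def fit_centroids(X, y):
    """Toy model: one mean vector per class (rp_classify uses an SVM instead)."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(model, x):
    """Return the label whose centroid is closest to descriptor x."""
    return min(model, key=lambda label: np.linalg.norm(x - model[label]))

model = fit_centroids(train_X, train_y)

# A query descriptor resembling the "metal" training tracks.
prediction = predict(model, np.full(8, 2.0))
```

Because only feature vectors are needed at training time, the audio files can be deleted after step 1, which is exactly the use case raised at the top of this issue.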