fastaudio / fastai_audio

[DEPRECATED] 🔊️ Audio with fastaiv1
MIT License

Inference #54

Closed srevay closed 4 years ago

srevay commented 4 years ago

I know that it’s in the notes that inference doesn’t work in all cases, but is there a reference anywhere about how to even test that? That is, how would you generate the spectrogram for just one audio file and then run it through a learner loaded with learn.load?

tbass134 commented 4 years ago

Here's a function I wrote that accepts a path to a wav file, generates the spectrogram, and performs inference:

from IPython.display import Audio, display

# Config used to build the spectrogram for a single file; you should be able
# to reuse the config from training instead (see note below)
config = AudioConfig()
config.duration = 4000      # clip/pad each file to 4000 ms
config.resample_to = 16000  # resample to 16 kHz
config.cache = False        # no need to cache spectrograms for one-off files

def predict_from_file(wav_file, learner, verbose=True):
    # Wrap the wav file in an AudioItem and open it via an AudioList,
    # so the spectrogram is generated with the config above
    item = AudioItem(path=wav_file)
    if verbose: display(item)
    al = AudioList([item], path=item.path, config=config)
    ai = al.open(item.path)
    # Run inference with the loaded/exported learner
    y, pred, raw_pred = learner.predict(ai)
    if verbose: print(y)
    if verbose: print(pred.item())
    if verbose: print(raw_pred)
    return y, pred, raw_pred

I create a new AudioConfig here, but you should be able to reuse the same config from the learner.
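A hedged sketch of what reusing the training-time config could look like; both attribute paths below are assumptions on my part, not verified against the library:

# Hypothetical: pull the AudioConfig back off the training AudioList (or the
# learner's data) instead of re-declaring it by hand
config = train_audiolist.config                  # 'train_audiolist' = the AudioList used for training
# config = export_learn.data.train_ds.x.config   # may work on a loaded learner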

Before calling this, you'll need to load the exported learner, like so:

export_learn = load_learner({path_to_learner})
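For completeness, the export itself happens beforehand on the trained learner; a minimal fastai v1 sketch:

# on the trained learner, once training is done:
learn.export()   # writes export.pkl under learn.path by default,
                 # which is what load_learner() above reads back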

then call the function:

wav_file = {path_to_wav}
predict_from_file(wav_file, export_learn)

This will print out the predictions from the given file

mogwai commented 4 years ago

@tbass134 Would you consider creating a PR to add this code for other people to use?

@srevay Did this solve your problem?

srevay commented 4 years ago

Yes, thank you @tbass134!

sirius0503 commented 4 years ago

@mogwai: I am new to PyTorch and wanted to know how I can load the model after saving it with learner.save() (I did have a look at the PyTorch docs, but I don't know how to get the model architecture here), and then run inference from the saved model so I can use it separately from the training part. Thanks
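For reference, a minimal fastai v1 sketch of the two usual options; the model name and paths are illustrative, and learn.load() requires first recreating a learner with the same data pipeline and architecture:

# Option 1: save/load just the weights (needs the original learner rebuilt)
learn.save('stage-1')          # writes models/stage-1.pth under learn.path
# ...recreate the same learner (same data, same architecture), then:
learn.load('stage-1')

# Option 2: export the whole learner for standalone inference
learn.export()                           # writes export.pkl under learn.path
export_learn = load_learner(learn.path)  # no training data or architecture needed
y, pred, raw_pred = export_learn.predict(item)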