-
A comparison was made of the MFCC computation between librosa and essentia, using data from [DCASE challenge 2016](http://www.cs.tut.fi/sgn/arg/dcase2016/task-acoustic-scene-classification), using the…
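The post is truncated, but one well-known source of librosa/essentia MFCC differences is the mel-scale variant: librosa defaults to the Slaney-style scale (`htk=False`), whereas HTK-style toolkits use a single logarithmic formula for the whole frequency range, so the filterbank center frequencies differ before any other processing choices come into play. A minimal sketch of the two conversions (hypothetical helper names, numpy only):

```python
import numpy as np

def hz_to_mel_htk(f):
    # HTK formula: mel = 2595 * log10(1 + f / 700), logarithmic everywhere
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def hz_to_mel_slaney(f):
    # Slaney / Auditory Toolbox formula: linear below 1 kHz, logarithmic above
    f = np.asarray(f, dtype=float)
    f_sp = 200.0 / 3.0             # ~66.67 Hz per mel in the linear region
    min_log_hz = 1000.0            # breakpoint between linear and log regions
    min_log_mel = min_log_hz / f_sp
    logstep = np.log(6.4) / 27.0   # log-region step size
    linear = f / f_sp
    log_part = min_log_mel + np.log(np.maximum(f, 1e-12) / min_log_hz) / logstep
    return np.where(f >= min_log_hz, log_part, linear)

freqs = np.array([500.0, 1000.0, 4000.0, 8000.0])
print(np.round(hz_to_mel_htk(freqs), 1))
print(np.round(hz_to_mel_slaney(freqs), 1))
```

The two scales are not even in the same units (HTK mels at 1 kHz are ~1000, Slaney mels are 15), so any comparison script has to pin down which variant each library is configured to use.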
-
```
from __future__ import print_function
from hyperparams import Hyperparams as hp
import numpy as np
from data_load import load_data
import tensorflow as tf
from train import Graph
from utils im…
```
-
Hi James,
I found this repository through your blog posts on LPC implementations in Julia. We are working on a library for DSP in Julia (JuliaDSP/DSP.jl) and I think these functions would be an inter…
-
Is there a function that creates a mel-scaled spectrogram? librosa has `librosa.feature.melspectrogram()`; is there a similar method in .js?
Thank you
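I can't speak for the .js library's API, but `librosa.feature.melspectrogram()` is essentially an STFT power spectrogram multiplied by a bank of triangular mel filters, which is straightforward to port to any language. A rough numpy sketch of that pipeline (simplified: HTK mel formula, no padding or filter normalization, hypothetical helper names):

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels, fmin=0.0, fmax=None):
    # Triangular filters with peaks spaced evenly on the mel scale.
    fmax = fmax or sr / 2.0
    to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    from_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz_pts = from_mel(np.linspace(to_mel(fmin), to_mel(fmax), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):       # rising edge of the triangle
            fb[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):      # falling edge of the triangle
            fb[m - 1, k] = (right - k) / max(right - center, 1)
    return fb

def melspectrogram(y, sr, n_fft=1024, hop=256, n_mels=40):
    # Frame the signal, window, FFT, take power, then apply the filterbank.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    return mel_filterbank(sr, n_fft, n_mels) @ power.T  # (n_mels, n_frames)

sr = 16000
t = np.arange(sr) / sr
S = melspectrogram(np.sin(2 * np.pi * 440 * t), sr)
print(S.shape)
```

librosa additionally centers frames with padding and normalizes the filters by default, so exact values will differ, but the overall structure is the same.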
-
Hi,
Does anyone know how we can use the WaveNet implementation to return the speaker ID when given a wave file as input, instead of generating a wave file for a given speaker?
Thanks,
Arpi…
-
LOG
select -r POLYWINK_Louise ;
import auto_lip_sync
auto_lip_sync.start()
b'Setting up corpus information...\r\n'
b'Number of speakers in corpus: 1, average number of utterances per speaker: 1…
-
I get an error for an 8 kHz WAV file when I run this simple example. It works for 16 kHz recordings.
```
from pyAudioAnalysis import audioBasicIO as aIO
from pyAudioAnalysis import audioSegmentation as aS
[Fs…
```
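If the 8 kHz rate itself is what trips the library up, one workaround (an assumption on my part, not something the library documents) is to resample to 16 kHz before passing the signal in. A minimal linear-interpolation sketch in numpy, with a hypothetical helper name:

```python
import numpy as np

def resample_linear(x, sr_in, sr_out):
    # Naive linear-interpolation resampler; fine for a quick test,
    # but prefer a polyphase filter (e.g. scipy.signal.resample_poly)
    # for real audio work, since this does no anti-alias filtering.
    n_out = int(round(len(x) * sr_out / sr_in))
    t_in = np.arange(len(x)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, x)

x8k = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s at 8 kHz
x16k = resample_linear(x8k, 8000, 16000)
print(len(x16k))  # 16000
```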
-
commit f2945963d
Trying it with a valid file works for the FluidNoveltySlice, but then it aborts:
> Traceback (most recent call last):
> File "clustered_segmentation.py", line 21, in
> w.r…
-
It would be wonderful if DeepSpeech models could be converted to CoreML, for offline use in apps. Here is documentation to do just that. https://developer.apple.com/documentation/coreml/converting_tra…
-
Hi,
running `python bin/sample_eval1.py MFCC feats` gives me the following error.
MFCC here refers to the sample MFCCs provided in this repo.
```
Processing task across_talkers
Preprocessing... Writin…