-
`torchaudio` is an extension library for PyTorch, designed to facilitate audio processing using the same PyTorch paradigms familiar to users of its tensor library. It provides powerful tools for audio…
-
- simple: 1 segmentation, pipo.descr
- full: 3 segmentations, pipo.descr
- mosaicing: MFCC
Extra points: only 1 mubu.process
-
As the title says, MFCC is not listed in Sonic Visualiser.
I am using the pre-compiled library in builds/osx/mir-edu.dylib.
-
Hi @Deeperjia, first of all, this is a great project and it has been really helpful, thank you for your work.
There is one issue: in `test.py`, `SpeechLoader` is initialized without `label_file`:
```python
speech_l…
```
-
```
File "action_detect\reader\audio_reader.py", line 61, in create_reader
    examples = feature_extractor.wav_to_example(audio_data, self.sample_rate)
File "action_detect\mfcc\feature_extractor.py…
```
-
Hi, I am trying to extract MFCC features from a sample .wav audio file, but my output doesn't match the one from the librosa library in Python (this is critical, as my CoreML model is trained with librosa's…
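A frequent cause of mismatches between librosa MFCCs and other implementations is the mel-scale variant: librosa defaults to the Slaney scale (`htk=False`), while many other toolkits use the HTK formula. A minimal numpy sketch of the two Hz-to-mel conversions (function names are mine, for illustration only):

```python
import numpy as np

def hz_to_mel_htk(f):
    # HTK-style mel scale, used by many C/C++/mobile MFCC ports.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def hz_to_mel_slaney(f):
    # Slaney-style mel scale (librosa's default, htk=False):
    # linear below 1 kHz, logarithmic above.
    f = np.asarray(f, dtype=float)
    f_sp = 200.0 / 3.0                # ~66.67 Hz per mel in the linear region
    min_log_hz = 1000.0
    min_log_mel = min_log_hz / f_sp   # = 15.0 mel at 1 kHz
    logstep = np.log(6.4) / 27.0
    linear = f / f_sp
    log_part = min_log_mel + np.log(np.maximum(f, 1e-12) / min_log_hz) / logstep
    return np.where(f >= min_log_hz, log_part, linear)
```

Filterbanks built from these two scales have different center frequencies even with the same number of filters, so the resulting coefficients cannot match unless both sides use the same variant (along with the same `n_fft`, hop length, window, and dB conversion).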
-
Hello,
I have a problem when I run the train.py script with the ASVspoof2017 data, as shown below.
I've edited the extract_mfcc method in extract_feature.py, because
librosa.feature.delta(mfcc) shows
**libro…
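The exact error message is truncated above, but a frequent failure with `librosa.feature.delta` on short utterances is that the default filter width (9 frames) exceeds the number of available frames. As a workaround sketch (this is the classic HTK-style regression delta computed directly in numpy, not librosa's Savitzky-Golay implementation; the function name is mine):

```python
import numpy as np

def regression_delta(feat, N=2):
    # HTK-style delta coefficients:
    #   d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum_{n=1..N} n^2)
    # feat: array of shape (n_coeffs, n_frames); boundary frames are
    # handled by repeating the first and last frame (edge padding).
    denom = 2.0 * sum(n * n for n in range((1), N + 1))
    padded = np.pad(feat, ((0, 0), (N, N)), mode="edge")
    T = feat.shape[1]
    out = np.zeros_like(feat, dtype=float)
    for n in range(1, N + 1):
        out += n * (padded[:, N + n:N + n + T] - padded[:, N - n:N - n + T])
    return out / denom
```

With `N=2` this needs only 1 frame to run, so it avoids the minimum-length constraint; a constant feature track yields zero deltas and a linear ramp yields its slope, which is a quick sanity check.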
-
-
Hello,
I am trying to replicate the MFCC output of [Librosa](https://librosa.org/doc/main/generated/librosa.feature.mfcc.html), which is widely used as the reference library for audio manipulation.…
-
I run make_mfcc.sh by typing
steps/make_mfcc.sh data/train data/train/log data/train/data
and it throws the following error:
![image](https://user-images.githubusercontent.com/57544843/714906…