ambiSk opened this issue 1 month ago
"audio" is an argument only of NeMo 2.0, which is the current main branch, and only it supports tensors.
The old 1.23 Nemo version only supports path2audio_files and does not accept tensors.
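For reference, a minimal sketch of the two calling conventions (assuming `asr_model` is an already-loaded CTC model; exact return types vary between releases):

```python
# NeMo 1.23.x: transcribe() takes a list of file paths
out = asr_model.transcribe(paths2audio_files=["1.wav"], batch_size=1)

# NeMo 2.0 (current main branch): transcribe() also accepts an in-memory tensor
out = asr_model.transcribe(audio=waveform_tensor)
```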
Okay, even after installing from source, it's not able to transcribe the whole tensor as it used to earlier. It's still only transcribing the first 100000 samples of the waveform.
What's the error trace? Or does it just finish after 100K samples? By the way, if you transcribe that much data you risk running out of CPU RAM. You might want to try the new `transcribe_generator()` instead if OOM is what you're facing.
This is my code:

```python
import nemo.collections.asr as nemo_asr
import torch
import torchaudio

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_hi_conformer_ctc_medium", map_location=torch.device("cuda:0")
)

aud, sr = torchaudio.load("1.wav")  # audio is 35 seconds long
aud = torchaudio.functional.resample(aud, sr, 16000)  # resample to 16 kHz
aud = aud.mean(dim=0)  # downmix to mono

# Try 1
out = asr_model.transcribe(audio=aud)
print(out[0])
## output: 'फाइनेैंस मैनेजमेंट का सबसे बेसिक कॉन्सेप्ट है इन्वेस्टि को समझना और सी तरीके से इम्प्लीमेंट करना मतलब है अपने इ प्ान बना कै पे इसतेमाल किए जाएंगे'
### Expected:
## 'फाइनेंस मैनेजमेंट का सबसे बेसिक कॉन्सेप्ट है बजटिंग सेविंग और इन्वेस्टिंग को समझना और सही तरीके से इम्प्लीमेंट करना बजटिंग का मतलब है अपने इनकम और एक्सपेंसेस को ट्रैक करना और एक प्लान बनाना कि कैसे पैसे इस्तेमाल किए जाएंगे सेविंग में हम पैसों का एक हिस्सा अलग करके फ्यूचर के लिए रखते हैं और इन्वेस्टिंग में हम पैसों को ग्रोथ के लिए अलग अलग तरीकों से इस्तेमाल करते हैं जैसे स्टॉक्स बॉन्ड्स या रियल एस्टेट में इन्वेस्ट करके जब हम इन तीनों को सही तरीके से मैनेज करते हैं तब हम अपनी फाइनेंशियल स्टेबिलिटी को इम्प्रूव कर सकते हैं'

# Try 2
config = nemo_asr.parts.mixins.transcription.TranscribeConfig(batch_size=1)
gen_out = asr_model.transcribe_generator(aud, config)
print(next(gen_out))
# output: ['फाइनेैंस मैनेजमेंट का सबसे बेसिक कॉन्सेप्ट है इन्वेस्टि को समझना और सी तरीके से इम्प्लीमेंट करना मतलब है अपने इ प्ान बना कै पे इसतेमाल किए जाएंगे']
```
@titu1994 as we can clearly see, in both outputs it's not transcribing the whole audio.
I see now. In both cases, a dummy data loader is used which has its duration set to 100000. This doesn't matter; the model computes the actual duration on the fly, so ignore the 100000.
Have you listened to the audio file yourself? A 35-second file and that much expected text: the speech is possibly far too fast, or the resample step has a bug that prevents the model from predicting properly. Write the file to disk after resampling and listen to the audio fully to check for issues.
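A minimal sketch of that check (assuming `aud` is the 1-D resampled tensor from the snippet above; `torchaudio.save` expects a 2-D `(channels, samples)` tensor):

```python
import torchaudio

# Re-add the channel dimension and write the resampled audio to disk,
# then listen to the file end to end for truncation or artifacts.
torchaudio.save("resampled_check.wav", aud.unsqueeze(0), sample_rate=16000)
```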
Yes, the audio has continuous speech at a normal rate. I have resampled the audio to 16000 Hz mono using ffmpeg.
@titu1994 can you tell me how I can use the model to transcribe at least a 48-second audio file, given that it is 16 kHz mono?
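One hypothetical workaround (not an official NeMo API) is to split the waveform into fixed-length chunks and transcribe each chunk separately. The sketch below assumes `transcribe(audio=...)` returns a list of strings as in the snippet above; note that naive fixed-length cuts can split words at chunk boundaries.

```python
import torch

def transcribe_long_audio(asr_model, waveform: torch.Tensor,
                          sample_rate: int = 16000, chunk_seconds: int = 20) -> str:
    """Transcribe a long 1-D waveform in fixed-length chunks and join the texts."""
    chunk_len = chunk_seconds * sample_rate
    texts = []
    for start in range(0, waveform.shape[-1], chunk_len):
        chunk = waveform[start:start + chunk_len]
        texts.append(asr_model.transcribe(audio=chunk)[0])
    return " ".join(texts)
```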
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Description: I updated NeMo to 1.23.0 and am trying to use the pretrained `EncDecCTCModel.transcribe`. In the previous version I used to pass in audio tensors loaded with torchaudio, but now it asks for `paths2audio_files`, and when I input a file path it doesn't transcribe the whole file, only the first 100000 datapoints. When I looked into NVIDIA's latest documentation, there was no reference to `paths2audio_files`; instead the argument was `audio`, which took a tensor. How do I get that functionality back so the whole file is transcribed?

Steps/Code to reproduce bug

Expected behavior
We should get the full transcription when we give a path in a list; when giving a tensor, we get an error that the tensor can't be converted to JSON.
Environment overview (please complete the following information)
- Method of NeMo install: `pip install "nemo-toolkit[all]"`
Additional context
GPU Model: Tesla V100