NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

Not able to run basic inference using canary-1b model #8646

Closed. P15V closed this issue 4 months ago.

P15V commented 5 months ago

Hello all, hope all is well with whoever is reading this!!

I'm very excitedly trying to run a simple inference with this new Canary-1b model that advertises a lower WER than the large whisper model.

Unfortunately, not one thing I am trying is working so far.

I'm going off the Nvidia website directions:

https://nvidia.github.io/NeMo/blogs/2024/2024-02-canary/#transcribing-with-canary

My code below:

# Load Canary model
from nemo.collections.asr.models import EncDecMultiTaskModel
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-1b')

# Finally, transcribe
transcript = canary_model.transcribe(paths2audio_files="/home/pjstimac/NvidiaCanaryTest/transcribe_manifest.json", batch_size=16)

My JSON file:

{
  "audio_filepath": "PathRemovedDueToPersonalName",
  "duration": 30.0,
  "taskname": "asr",
  "source_lang": "en",
  "target_lang": "en",
  "pnc": 'yes',
}

Using this JSON, which follows the expected schema according to the NVIDIA page, I get this error:

"json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)"

Removing the JSON indents and trailing lines, I get this error: "json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)"

Fixing that, I then get a different error: "json.decoder.JSONDecodeError: Expecting value: line 1 column 178 (char 177)"

And directly fixing the errors in the JSON file leads to this error:

"...-packages/nemo/collections/common/data/lhotse/nemo_adapters.py", line 84, in __iter__
    text=data[self.text_field],
KeyError: 'answer'
Transcribing: 0it [00:00, ?it/s]

It seems rather wonky, seeing as I'm trying to perform ASR but getting an error that an 'answer' key is expected.

Steps/Code to reproduce bug

Just trying to run a simple inference with a WAV file.

Expected behavior

Getting a transcription output of the WAV file.


Thanks all for reading, and any input. Appreciate the help!! :)

LL-AI-dev commented 5 months ago

@P15V make sure that there are no line breaks between the curly brackets. I believe this will solve your error.

NeMo manifests, despite being saved as .json, are a bit fiddly at times and cannot be loaded directly via the json package. Each line within the manifest should be a complete JSON string.

I think the error is occurring because it is trying to read the 2nd line as a JSON string, but because you have line breaks within the curly brackets, the 2nd line is not a valid JSON string on its own.
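
(For illustration, a minimal sketch of writing such a manifest with one complete JSON object per line; the file name and field values here are placeholders, not from this issue.)

import json

# Each manifest entry is one complete JSON object on its own line (JSONL-style),
# even though the file is conventionally named .json.
entries = [
    {
        "audio_filepath": "/path/to/clip.wav",  # placeholder path
        "duration": 30.0,
        "taskname": "asr",
        "source_lang": "en",
        "target_lang": "en",
        "pnc": "yes",
    }
]

with open("transcribe_manifest.json", "w") as f:
    for entry in entries:
        f.write(json.dumps(entry) + "\n")  # no line breaks inside an entry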

titu1994 commented 5 months ago

This is the right answer: we read the file as JSONL, though we (wrongly) call it a .json file. We'll attempt to make the parsing logic a bit more robust in the future and log an appropriate warning instead of crashing. FYI @stevehuang52

krishnacpuvvada commented 5 months ago

"And directly fixing the errors in the JSON file leads to this error: "-packages/nemo/collections/common/data/lhotse/nemo_adapters.py", line 84, in __iter__ text=data[self.text_field], KeyError: 'answer'"

To fix this, please modify the lines in the input file to add an 'answer' field:

{
  "audio_filepath": "PathRemovedDueToPersonalName",
  "duration": 30.0,
  "taskname": "asr",
  "source_lang": "en",
  "target_lang": "en",
  "pnc": "yes",
  "answer": "na"
}
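
(Note that, combined with the JSONL point above, each such entry ultimately needs to sit on a single line of the manifest; one line would look like the following, with the path still a placeholder.)

{"audio_filepath": "PathRemovedDueToPersonalName", "duration": 30.0, "taskname": "asr", "source_lang": "en", "target_lang": "en", "pnc": "yes", "answer": "na"}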

Also, we recently updated the .transcribe() signature, so if you are using the main branch,

transcript = canary_model.transcribe(paths2audio_files="/home/pjstimac/NvidiaCanaryTest/transcribe_manifest.json", batch_size=16)

should be updated to:

transcript = canary_model.transcribe(audio="/home/pjstimac/NvidiaCanaryTest/transcribe_manifest.json", batch_size=16)

P15V commented 5 months ago

Hello all,

Thanks for your time & replies; I can't express how much I genuinely appreciate it!! :)

So after I made this post, I went home and tried all night on my personal time; still no luck, unfortunately.

I found that "answer": "na" key via the Hugging Face documentation and included it.

@krishnacpuvvada, @LL-AI-dev Running that JSON format, with the updated transcribe call, still prints out the JSON error: "json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 1 (char 2)"

Going off the error and correcting the JSON accordingly, I still get this error (and ran into this last night as well): "assert isinstance(cut, MonoCut), "Expected MonoCut." AssertionError: Expected MonoCut."

Tried both a JSON and a JSONL file, same result: "assert isinstance(cut, MonoCut), "Expected MonoCut." AssertionError: Expected MonoCut."

My updated code with that updated call:

# Load Canary model
from nemo.collections.asr.models import EncDecMultiTaskModel
canary_model = EncDecMultiTaskModel.from_pretrained('nvidia/canary-1b')

transcript = canary_model.transcribe(audio="/home/NameRemoved/NvidiaCanaryTest/transcribed_manifest.jsonl", batch_size=16)

Last night I thought, let's try the tutorial Google Colab notebooks right from the NVIDIA website for any NeMo model... Not even that could run all the way through on Google Colab.

"https://colab.research.google.com/github/NVIDIA/NeMo/blob/stable/tutorials/asr/Offline_ASR.ipynb"

It errored out at this argument: "paths2audio_files=files"

Thanks for everyone's time; much appreciated!!!! :D

titu1994 commented 5 months ago

The notebook above is not the way to do inference for Canary, it's for beam search with CTC models, and is a deprecated notebook in general.

P15V commented 5 months ago

@titu1994 Well that would explain it! After much trial and error, I finally got it running in a notebook & Python shell with 3 lines of code, skipping the JSON/JSONL manifest entirely:

import nemo.collections.asr as nemo_asr
nemoasr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/canary-1b")
nemoasr_model.transcribe(['AudioClipDirectly.wav'])

Once that was working, I just wrote a Python loop to go through an audio directory and output the transcription results to a JSON file for my viewing/model-comparison app.
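
(For anyone following along, a rough sketch of that kind of loop; the directory, output file name, and exact transcribe call here are assumptions, not P15V's actual code.)

import json
from pathlib import Path

import nemo.collections.asr as nemo_asr

# Load the Canary model once, then reuse it for every clip in the directory.
canary_model = nemo_asr.models.EncDecMultiTaskModel.from_pretrained("nvidia/canary-1b")

audio_dir = Path("/path/to/audio_clips")  # hypothetical folder of short 16 kHz WAV clips
results = []
for wav_path in sorted(audio_dir.glob("*.wav")):
    # transcribe() returns one result per input file; here it is the predicted text
    text = canary_model.transcribe([str(wav_path)], batch_size=1)[0]
    results.append({"audio_filepath": str(wav_path), "pred_text": text})

# Write all transcripts to a single JSON file for the viewing/comparison app.
with open("canary_transcripts.json", "w") as f:
    json.dump(results, f, indent=2)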

Thanks for all the attempts at help though, @titu1994 & @krishnacpuvvada. I genuinely appreciate it!! Wish the documentation was better so I would not have had to bother you guys, oh well.

be well!!! :D

titu1994 commented 5 months ago

Glad it worked. We'll iron out these issues in the pre-release. It shouldn't be so difficult to do inference.

P15V commented 5 months ago

@titu1994 That would be so great to see!! I've been playing around with Whisper for the past few months, and this was unexpectedly annoying to get going in comparison. The code is simple enough looking at it; the documentation, though, is a different story. But it's going now on my end, yay!! :) Thanks again for the input and help @krishnacpuvvada & @titu1994, much appreciated!!

github-actions[bot] commented 4 months ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 4 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale.

Suma-Rajashankar commented 3 months ago

import nemo.collections.asr as nemo_asr
nemoasr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("nvidia/canary-1b")
nemoasr_model.transcribe(['AudioClipDirectly.wav'])

I tried running this script, but when I transcribe the audio, the transcription is fine for the first two lines, and after that the same sentence keeps repeating again and again. The entire audio is not transcribed either. Any help on this would be appreciated. Thank you.

P15V commented 3 months ago

@Suma-Rajashankar How long are the audio clips you are inputting, and at what sample rate? I was working with Whisper previously, so I already had all my audio cut into 30-second chunks, capped at 16 kHz. I, too, have experienced repetition with specific clips (and with Whisper, too), but something seems off if it happens every single time. I used to get that with Whisper when feeding it audio clips longer than 30 seconds each; it would constantly repeat after the first or second sentence. But newer updates to Whisper seem to have improved that; I have not tried the same with Canary myself.

Suma-Rajashankar commented 3 months ago

@P15V, thank you for your reply. My audio clips are between 30 and 60 minutes long and are all sampled at 16 kHz. I had no issues with Whisper, but like you said, I will chunk my audio into 30-second segments and work on this using the Canary model. Thanks once again.
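
(For illustration, a rough sketch of that kind of chunking; the library choice, file names, and 30-second chunk length are assumptions, not code from this thread.)

import soundfile as sf

# Split one long 16 kHz WAV into consecutive 30-second chunks.
in_path = "long_recording.wav"   # hypothetical 30-60 minute recording
chunk_seconds = 30

audio, sample_rate = sf.read(in_path)
samples_per_chunk = chunk_seconds * sample_rate

for i, start in enumerate(range(0, len(audio), samples_per_chunk)):
    chunk = audio[start:start + samples_per_chunk]
    sf.write(f"chunk_{i:04d}.wav", chunk, sample_rate)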

stevehuang52 commented 3 months ago

Hi @Suma-Rajashankar @P15V , we have a script to automatically chunk the long audios and perform inference on each chunk. Please feel free to try it and let us know if there's any issue.

Suma-Rajashankar commented 3 months ago

Thanks @stevehuang52 for your reply. Looking into this now. Will keep you posted. Appreciate your help.

Suma-Rajashankar commented 3 months ago

Hi @stevehuang52, I am unable to import 'FrameBatchMultiTaskAED' from 'nemo.collections.asr.parts.utils.streaming_utils'. I have installed nemo_toolkit==1.23.0. Is there some issue with this version?

stevehuang52 commented 3 months ago

@Suma-Rajashankar Sorry, I forgot to mention that you'll need to install the current main branch of NeMo, not 1.23.0.

Suma-Rajashankar commented 3 months ago

@stevehuang52, thanks very much. Will work on this and keep you posted. Appreciate your help.