Open · dineshtripathi30 opened this issue 9 months ago
@dineshtripathi30, sorry to hear that you're experiencing this issue. Just to check: since you are accessing the web UI at <host-ip>:8090 rather than localhost:8090, did you navigate to chrome://flags (or the equivalent in another browser), find "Insecure origins treated as secure," enter <host-ip>:8090 into the appropriate box, and click the "Relaunch" button?

I did set up the RIVA server myself, and yes, ASR is enabled in English (US) and it's selected in the menu. For points 4, 5, and 6 in the above comment, yes, I did.
Also, regarding point 6: I just tried, and transcribe works there.
@dineshtripathi30, OK, good to know that you've covered your bases, and that your microphone is working properly. Let's check your Riva ASR service. Can you do the following?
Record a short .wav file of yourself speaking, then run the following Python script against your Riva server (filling in the path to your .wav file):

```python
import riva.client

def run_asr_streaming_inference(audio_file, output_file, uri='localhost:50051'):
    # Read the file once just to confirm it exists and is readable.
    with open(audio_file, 'rb') as fh:
        data = fh.read()
    auth = riva.client.Auth(uri=uri)
    client = riva.client.ASRService(auth)
    offline_config = riva.client.RecognitionConfig(
        language_code="en-US",  # Change this as appropriate
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )
    streaming_config = riva.client.StreamingRecognitionConfig(config=offline_config, interim_results=False)
    with riva.client.AudioChunkFileIterator(
        audio_file,
        1600,
        delay_callback=riva.client.sleep_audio_length,
    ) as audio_chunk_iterator:
        riva.client.print_streaming(
            responses=client.streaming_response_generator(
                audio_chunks=audio_chunk_iterator,
                streaming_config=streaming_config,
            ),
            output_file=output_file,
            additional_info='no',
            file_mode='w',
            word_time_offsets=False,
        )
    return

audio_file = '<path to your .wav file>'  # replace with the path to your recording
run_asr_streaming_inference(audio_file, None)  # output_file=None prints the transcript to your screen
```
If your Riva server is working properly, it should print a transcription of your audio file to your screen.
I recorded "Tell me about Lenovo SE450 server" in an audio file.
Here is the test result from the notebook: the way it transcribes "450" is not correct, but it still transcribes the rest of it correctly.
But here is what I get when I try from the chatbot web UI: "Tell me about Lenovo as a 50 server."
OK, I just tried asking my chatbot web UI, "Tell me about Lenovo SE450 Server." After several renderings of my query as "Tell me about Lenovo S. Four fifty server," I eventually got "Tell me about Lenovo Se Four Fifty Server." I had thought that saying "SE450" more slowly than the rest of the query would improve the transcription, but it appears I was wrong. I think I need to consult with the Riva ASR engineers about this.
To better scope out the issue, can you ask the chatbot web UI about other technical products and services and compare the ground-truth queries to the generated transcriptions?
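If it helps to make that comparison concrete, here is a minimal sketch for scoring each ground-truth/transcription pair with a simple word-level edit distance (the `word_error_rate` helper below is purely illustrative, not part of the Riva client; the example pairs are taken from this thread):

```python
# Rough word error rate (WER) via word-level Levenshtein distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution / match
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

pairs = [
    ("What do you know about Lenovo", "What about Lenovo"),
    ("Tell me about Lenovo SE450 server", "Tell me about Lenovo as a 50 server"),
]
for truth, transcript in pairs:
    print(f"{truth!r} -> {transcript!r}: WER = {word_error_rate(truth, transcript):.2f}")
```

A WER near zero means the web UI transcription matches what was actually said; consistently high values across different queries would point at the ASR path rather than at any single phrase.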
Hi @dineshtripathi30, the issue is with Inverse Text Normalization. You could generate new tokenizer and verbalizer files from https://github.com/NVIDIA/NeMo-text-processing/tree/en_tech and use them in your Riva server build. This should resolve the issue you are having. You can refer to the documentation at https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/wfst/wfst_text_processing_deployment.html.
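As a quick way to check whether ITN is the culprit before rebuilding the server, a sketch like the following can show what the spoken form gets converted to. This assumes nemo_text_processing (e.g. built from the en_tech branch above) is installed locally and runs the English inverse normalizer directly, rather than going through Riva:

```python
# Local sanity check of English inverse text normalization, independent of the Riva server.
from nemo_text_processing.inverse_text_normalization.inverse_normalize import InverseNormalizer

inverse_normalizer = InverseNormalizer(lang='en')

# Spoken form, roughly as the ASR model would emit it before ITN is applied.
spoken = "tell me about lenovo s e four fifty server"
print(inverse_normalizer.inverse_normalize(spoken, verbose=False))
```

If the written form that comes out here looks right, then baking the regenerated tokenizer/verbalizer files into the Riva server build should fix how "SE450" is rendered.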
Anand, in my case the problem is not only with that; it's with other transcriptions as well.
For example, I said "What do you know about Lenovo"
and it transcribed "What about Lenovo?"
Next I asked "Tell me about Meta Llama 13 billion parameter model" and it transcribed correctly.
Next I asked "What do you think about Generative AI" and it transcribed "Do you think about generative Ai?"
So, accuracy is the issue in my case.
I am trying to use RIVA ASR with the frontend as given in the example, but it fails to transcribe speech to text. Most of the time it fails to catch my voice correctly.