AlessandroLanzoni1991 opened this issue 3 months ago
Thanks @AlessandroLanzoni1991 for your feedback. I was able to verify the issue. However, I noticed that this is expected behavior given the current configuration.
When a new SFSpeechAudioBufferRecognitionRequest is created in the code below, Apple sets ShouldReportPartialResults to true by default. I tested this by running the MCT sample project with that property set to false, and it worked correctly. The only downside is that there is a delay, because the text is no longer shown in real time.
That said, I think it would be ideal to expose a property so that users can set this to true or false based on their requirements.
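For reference, here is a minimal sketch of the change being discussed. The property name `ShouldReportPartialResults` matches the Xamarin.iOS / .NET for iOS binding of Apple's Speech framework; the surrounding setup is illustrative only, not the Toolkit's actual implementation:

```csharp
using Speech; // Xamarin.iOS / .NET for iOS binding of Apple's Speech framework

// Illustrative sketch, not the Toolkit's real code.
var request = new SFSpeechAudioBufferRecognitionRequest
{
    // Apple defaults this to true. Setting it to false makes iOS return
    // only the final, fully processed transcription, which avoids the
    // truncated words reported on iOS 17, at the cost of losing
    // real-time partial results.
    ShouldReportPartialResults = false
};
```

This cannot run outside an iOS app with microphone and speech-recognition permissions, so treat it as a configuration fragment rather than a standalone sample.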
Thanks
I am experiencing the same issue. Is there any resolution yet?
@vhugogarcia I cloned the iOS implementation and changed ShouldReportPartialResults to false, and it works; no more truncation. How do we get this into a release?
Working on it
Is there a workaround for this currently?
Is there an existing issue for this?
Did you read the "Reporting a bug" section on Contributing file?
Current Behavior
I am currently experiencing an issue with SpeechToText on iOS 17 where, during voice recognition, certain words are cut off or not recognized correctly. For example: "Alessandro" becomes "Al" or "Alex", "Planification" becomes "plan", and "Configuration" becomes "config".
Expected Behavior
All spoken words should be transcribed accurately without being cut off or distorted during speech recognition.
Steps To Reproduce
Link to public reproduction project repository
https://github.com/jfversluis/MauiSpeechToTextSample
Environment
Anything else?
This works well on my iPhone 8 with iOS 16.7.6.