Open AppDevGuy opened 4 years ago
Hello, this is probably the wrong place to ask, but I've read the docs and, as I understand it, this is how you start and stop the recording:
```swift
let speechKit = OSSSpeech.shared
let utterance = OSSUtterance(string: "test")

var body: some View {
    PluckPage()
        .onAppear {
            speechKit.voice = OSSVoice(quality: .enhanced, language: .English)
            speechKit.speakText("Pick Pallet")
            speechKit.recordVoice()
        }
        .onDisappear {
            speechKit.endVoiceRecording()
            print(utterance.speechString)
        }
}
```
Can you please point me in the right direction as to why the spoken text is not printed out? This is the error I get:
```
****************** Begin Debug Log ******************
Class: <OSSSpeech.swift>
Function: utteranceIsValid()
Line: #317
Object: <OSSSpeechKit.OSSSpeech: 0x28108e980>
Log Message: No valid utterance.
****************** End Debug Log ******************
2023-02-25 21:03:26.095286+0100 project[1024:40123] [catalog] Unable to list voice folder
2023-02-25 21:03:26.168890+0100 project[1024:40120] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-02-25 21:03:26.219712+0100 project[1024:40151] [AXTTSCommon] Timed out loading Siri voice resource for language en-GB
2023-02-25 21:03:26.274363+0100 project[1024:40123] [catalog] Unable to list voice folder
2023-02-25 21:03:26.279924+0100 project[1024:40123] [catalog] Unable to list voice folder
```
The TL;DR: have a look at the example project to see how the delegate functions are implemented. That's what you're missing.
You're defining an utterance but not setting it on the speech session:

```swift
speechKit.utterance = utterance
```
It also looks like you're telling it to speak and to record at the same time in `onAppear`, which might be an issue, and you're ending the recording in `onDisappear`, which might be another. On top of that you're not calling '.speak()', so you may be recording your voice but never asking it to speak once the recording finishes.
The error message implies that it was left recording for a long time and timed out. Apple imposes a time limit on microphone access for speech recognition, which I think is somewhere between 30 and 60 seconds.
If you're using a real device and have granted access to both the microphone and speech recognition, I'd recommend making a button whose action performs these calls, rather than triggering everything from view appearance.
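Something along these lines (a rough sketch only: the view name, button labels, and `isListening` flag are just for illustration; the only library calls used are the ones from your snippet above):

```swift
import SwiftUI
import OSSSpeechKit

// Illustrative view: speak the prompt and start/stop listening from explicit
// user actions instead of onAppear/onDisappear.
struct RecordingControls: View {
    let speechKit = OSSSpeech.shared
    @State private var isListening = false

    var body: some View {
        VStack(spacing: 16) {
            // Speaking the prompt is its own action...
            Button("Speak Prompt") {
                speechKit.voice = OSSVoice(quality: .enhanced, language: .English)
                speechKit.speakText("Pick Pallet")
            }
            // ...and listening is started/stopped separately, so the two
            // aren't fired at the same moment.
            Button(isListening ? "Stop Listening" : "Start Listening") {
                if isListening {
                    speechKit.endVoiceRecording()
                } else {
                    speechKit.recordVoice()
                }
                isListening.toggle()
            }
        }
    }
}
```

The key point is that the microphone session starts from a deliberate user action, not a navigation event, so it isn't left running until it hits the time limit.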
Finally, I don't see where you're setting the delegate that informs your code of events, so the speech-to-text content isn't being delivered anywhere.
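Roughly, the wiring looks like this (from memory, so treat it as a sketch and copy the exact delegate signatures from the example project; your version may have additional callbacks to implement):

```swift
import Combine
import Foundation
import OSSSpeechKit

// Sketch: an object that owns the speech session and receives its callbacks.
// Delegate method names are recalled from the example project and may not
// match your installed version exactly.
final class SpeechCoordinator: NSObject, ObservableObject, OSSSpeechDelegate {
    let speechKit = OSSSpeech.shared
    @Published var recognizedText = ""

    override init() {
        super.init()
        // Without this line, no speech-to-text results ever reach your code.
        speechKit.delegate = self
    }

    // Fired when a listening session ends and text has been recognised.
    func didFinishListening(withText text: String) {
        DispatchQueue.main.async { self.recognizedText = text }
    }

    // Remaining protocol requirements, left as no-ops for brevity.
    func didCompleteTranslation(withText text: String) {}
    func didFailToCommenceSpeechRecording() {}
    func didFailToProcessRequest(withError error: Error?) {}
    func authorizationToMicrophone(withAuthentication type: OSSSpeechKitAuthorizationStatus) {}
}
```

The recognised text comes back through that callback; `utterance.speechString` only ever holds the string you created the utterance with, which is why the `print` in your `onDisappear` doesn't show what was spoken.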
Given we set the state of the recording to active or inactive, providing an `isRecording` property would be great.
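In the meantime, it can be tracked on the caller's side with a thin wrapper (the type and method names below are just an illustration, not part of the library; only `recordVoice()` and `endVoiceRecording()` are real calls):

```swift
import OSSSpeechKit

// Hypothetical stand-in until the library exposes `isRecording` itself:
// flip the flag wherever recording is started or stopped.
final class TrackedSpeech {
    let speechKit = OSSSpeech.shared
    private(set) var isRecording = false

    func startRecording() {
        speechKit.recordVoice()
        isRecording = true
    }

    func stopRecording() {
        speechKit.endVoiceRecording()
        isRecording = false
    }
}
```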