Closed ArthurDavidson closed 2 years ago
It's an issue on Microsoft's side. Neither aspeak nor the web page is working for me.
The web page no longer contains the token. That's why the functionality is broken now.
Is there another way?
It seems Microsoft's side is back to normal now, but they have changed the place where the token is stored. A fix is on the way.
The good news is that Microsoft's side no longer requires an auth token. But I don't know whether they have downgraded the free service: there is now a TrafficType: AzureDemo header in the request.
I have got it working now. A new version of aspeak will be published soon to address this issue.
Thanks a lot!
I have uploaded a preview version (3.0.0.dev1) to pypi. You can use it if you can't wait. 🎉
pip install aspeak==3.0.0.dev1
I bumped the major version because the change on Microsoft's side introduced some breaking changes to the aspeak API.
I will publish a stable version after some refactoring and testing.
Could you include an example without token being used also?
Currently the old examples still work, since they don't depend on the token retrieval logic directly. Only code that uses Token or get_auth_token directly will break now.
I will write a cleaner API for v3.0 while keeping the old one (marked as obsolete).
The examples work well, but it takes more time to get the audio. Can I instead use speechsdk.SpeechSynthesizer directly rather than SpeechProvider? Could you give me an example?
Yes. SpeechProvider is a useless layer of abstraction now because we no longer need to refresh the token. A new API will arrive in v3.0.
You can read the code yourself and figure out how to do it if you can't wait until v3.0 is released.
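If you want to bypass SpeechProvider before v3.0 lands, here is a minimal sketch. Only the SSML builder below is plain runnable Python; the SDK calls are shown in comments because they need azure-cognitiveservices-speech and working credentials. The voice name and region are example values of mine, not something this thread specifies.

```python
# Hedged sketch: build SSML yourself and hand it straight to the Azure Speech SDK,
# skipping aspeak's SpeechProvider wrapper.
from xml.sax.saxutils import escape


def build_ssml(text: str, voice: str = "en-US-JennyNeural", lang: str = "en-US") -> str:
    """Wrap plain text in the minimal SSML that speak_ssml_async() accepts."""
    return (
        f'<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        f'xml:lang="{lang}">'
        f'<voice name="{voice}">{escape(text)}</voice>'
        "</speak>"
    )


# Untested usage with the official SDK (assumption: you have your own key and
# region, or you adapt aspeak's trial endpoint yourself):
#
#   import azure.cognitiveservices.speech as speechsdk
#   cfg = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
#   synth = speechsdk.SpeechSynthesizer(speech_config=cfg, audio_config=None)
#   result = synth.speak_ssml_async(build_ssml("Hello!")).get()
```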
If the text exceeds 300 words, it will fail
Did you encounter this specific error?
Error: Speech synthesis canceled: CancellationReason.Error
WebSocket operation failed. Internal error: 3. Error details: WS_ERROR_UNDERLYING_IO_ERROR USP state: 4. Received audio size: 2058600 bytes.
It seems that Microsoft has downgraded the free service: they force the websocket to stop once 2,058,600 bytes of audio have been received.
Edit: Microsoft has added a new notice on the web page: "This demo supports a maximum input length of 1000 characters."
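For a sense of scale, some back-of-the-envelope arithmetic on that cutoff, assuming raw 16-bit mono PCM and ignoring container overhead (the actual output format depends on your SDK settings, so these numbers are only illustrative):

```python
# How many seconds of audio the observed 2,058,600-byte cutoff corresponds to,
# assuming uncompressed 16-bit mono PCM (2 bytes per sample).
CUTOFF_BYTES = 2_058_600


def seconds_of_audio(total_bytes: int, sample_rate: int, bytes_per_sample: int = 2) -> float:
    return total_bytes / (sample_rate * bytes_per_sample)


for rate in (16_000, 24_000):
    print(f"{rate} Hz: {seconds_of_audio(CUTOFF_BYTES, rate):.1f} s")
# 16000 Hz: 64.3 s
# 24000 Hz: 42.9 s
```

So the byte cap amounts to roughly 43 to 64 seconds of speech under these assumptions.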
It would be perfect if it could automatically segment the text for users and combine the resulting audio. For example, Chinese text could be segmented every 200 characters.
I don't think segmenting the text is aspeak's responsibility. There are too many languages, and simply cutting the text every 200 characters will produce bad results if the first or last sentence is cut in the middle.
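If you do need segmentation on your side anyway, a hedged sketch of sentence-aware chunking: split after sentence-ending punctuation first, then pack whole sentences into chunks under the limit, so a sentence is only hard-cut when it alone exceeds the limit. The punctuation set and the 200-character limit are illustrative assumptions, not aspeak behavior:

```python
# Sentence-aware chunking sketch: keeps sentences intact where possible.
# Chunks concatenate back to the original text exactly.
import re

SENTENCE_END = r"(?<=[.!?\u3002\uff01\uff1f])"  # ., !, ? and CJK 。！？


def chunk_text(text: str, limit: int = 200) -> list:
    sentences = [s for s in re.split(SENTENCE_END, text) if s]
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) > limit:
            chunks.append(current)
            current = s
        else:
            current += s
        # A single sentence longer than the limit still has to be hard-cut.
        while len(current) > limit:
            chunks.append(current[:limit])
            current = current[limit:]
    if current:
        chunks.append(current)
    return chunks
```

Each chunk could then be synthesized separately and the audio concatenated; pauses at chunk boundaries are the obvious remaining quality risk.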
Closing this because v3.0.0 fixed it.
I ran into this error today.
https://azure.microsoft.com/en-us/services/cognitive-services/text-to-speech/#overview works normally in the browser on the same machine.