w3c / ruby-t2s-req

Text to Speech of Electronic Documents Containing Ruby: User Requirements
https://w3c.github.io/ruby-t2s-req/

Sending both the base text and ruby text to the text-to-speech engine #9

Open murata2makoto opened 2 years ago

murata2makoto commented 2 years ago

@cookiecrook wrote:

I agree there are edge cases, and that the は example is likely to be pronounced better if the base text is sent to the text-to-speech engine. However, once the text-to-speech engines understand ruby context, I think exposing both (to be pronounced as a single instance) is likely to produce better results, not worse. Ruby-unaware speech engines should just attempt to pronounce the base text in those instances of "phonetic-optional."

I also think that sending both the base text and the phonetic ruby text to the text-to-speech engine would be useful. But I am not aware of any text-to-speech APIs that can accept both.

One idea is to use the Unicode interlinear annotation characters shown below. Then, text-only APIs would be good enough.

- U+FFF9 INTERLINEAR ANNOTATION ANCHOR: marks the start of the annotated text
- U+FFFA INTERLINEAR ANNOTATION SEPARATOR: marks the start of the annotating character(s)
- U+FFFB INTERLINEAR ANNOTATION TERMINATOR: marks the end of the annotated text

But most engines would simply ignore these characters and read aloud both the base text and ruby text, which is usually very bad.
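As a rough sketch of what an intermediary would then have to do (assuming plain JavaScript strings and a ruby-unaware engine; this does not reflect any existing API), the annotation text could be stripped out so that only the base text reaches the engine:

```ts
// U+FFF9..U+FFFB: Unicode interlinear annotation characters.
const ANCHOR = "\uFFF9";     // start of the annotated (base) text
const SEPARATOR = "\uFFFA";  // start of the annotating (ruby) text
const TERMINATOR = "\uFFFB"; // end of the annotated run

// Build a single string carrying both base and ruby text,
// e.g. annotate("東京", "とうきょう") + "に行く".
function annotate(base: string, ruby: string): string {
  return ANCHOR + base + SEPARATOR + ruby + TERMINATOR;
}

// For a ruby-unaware engine, an intermediary would have to keep only the
// base text; otherwise the engine reads both runs aloud, as noted above.
function baseTextOnly(text: string): string {
  return text.replace(/\uFFF9([^\uFFFA\uFFFB]*)\uFFFA[^\uFFFB]*\uFFFB/gu, "$1");
}

// baseTextOnly(annotate("東京", "とうきょう") + "に行く") === "東京に行く"
```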

cookiecrook commented 2 years ago

Yes. My suggestion above is out of context, but this possibility would require updates to 1) the Ruby spec, 2) Web Engines, 3) Speech Engines, and possibly 4) Assistive Technology like screen readers that may interface between 2 and 3.

Your suggestion could work, but would require updates to either the Speech Engine or some intermediary service.

aleventhal commented 2 years ago

Are speech engines even the right place to implement heuristics? They can lack context. For example, when the user is navigating by word or character, there is much less context. It's possible that a sentence, paragraph or even the entire document is the most useful context for applying ML.

Also, if the rules are applied at a higher level (in the browser or AT for example), then TTS APIs would not need to change.
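As a rough sketch of that higher-level approach, assuming only standard DOM APIs and the Web Speech API, and a deliberately naive "always speak the base text" rule:

```ts
// Sketch: resolve <ruby> in the browser/AT layer so that the TTS API
// only ever sees plain text and does not need to change.
function speakableText(container: Element): string {
  // Work on a clone so the live DOM is untouched.
  const clone = container.cloneNode(true) as Element;
  // Drop ruby annotations and fallback parentheses; keep the base text.
  // (A real implementation would apply per-ruby heuristics here, e.g.
  // preferring the annotation when the base reading is ambiguous.)
  clone.querySelectorAll("rt, rp").forEach((node) => node.remove());
  return clone.textContent ?? "";
}

// Usage with the standard Web Speech API:
// const text = speakableText(document.querySelector("article")!);
// speechSynthesis.speak(new SpeechSynthesisUtterance(text));
```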

murata2makoto commented 2 years ago

@aleventhal

Are speech engines even the right place to implement heuristics?

It is not clear to me how TTS engines and user agents (or other ATs) interact, and I am not aware of any documents that describe their interactions. In the Japan DAISY Consortium, we tried to create such a document (in Japanese), but I admit that it is still immature, although it may contain some useful information about Japanese TTS.

aleventhal commented 2 years ago
murata2makoto commented 4 months ago

The latest draft has a note:

NOTE This option does not necessarily ignore ruby annotations. Although text-to-speech engines mainly use ruby bases, they may also use ruby annotations as a hint.

Is this good enough to close this issue?
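For concreteness, one hypothetical reading of that note in API terms (the RubyHint and SpeakRequest shapes below are invented for illustration and do not correspond to any existing text-to-speech API):

```ts
// Hypothetical shapes only: no existing TTS API is being described here.
interface RubyHint {
  base: string;       // the ruby base, which the engine mainly reads
  annotation: string; // the ruby annotation, usable as a reading hint
}

interface SpeakRequest {
  text: string;       // full base text of the passage
  hints: RubyHint[];  // per-ruby hints the engine may consult
}

// A ruby-aware engine could consult `hints` when a base reading is
// ambiguous; a ruby-unaware engine would simply speak `text`.
function toSpeakRequest(pairs: RubyHint[]): SpeakRequest {
  return {
    text: pairs.map((p) => p.base).join(""),
    hints: pairs,
  };
}
```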