w3c / wcag21

Repository used during WCAG 2.1 development. New issues, Technique ideas, and comments should be filed at the WCAG repository at https://github.com/w3c/wcag.
https://w3c.github.io/wcag/21/guidelines/

SC 1.2.6 Sign Language: Priority Level and text description #508

Closed edsonrufino closed 6 years ago

edsonrufino commented 6 years ago

In "Success Criterion 1.2.6 Sign Language (Prerecorded)" https://www.w3.org/TR/WCAG21/#sign-language-prerecorded: Proposed Priority Level: AA (Double A) In the description text of "Success Criterion 1.2.6 Sign Language (Prerecorded)", add the sentence "human or automatic", like this: Human or automatic sign language interpretation is provided for all prerecorded audio content in synchronized media.

The advent of assistive technologies for translating text into sign languages solved an old accessibility problem: the difficulty of having all content translated by human sign language interpreters, as @GreggVan explained well in issue #281:

"Level AA was for everything else important, testable, generally applicable to all content Level AAA was for things important, testable, but not generally applicable (could not be applied in all cases.) In one case, it was because there were not enough sign language interpreters in existence to provide sign language interpretation for all web pages. This was for people wanting to go the extra mile."

However, there are currently several applications that use avatars as sign language interpreters to translate text automatically and effectively, such as VLibras (open source, http://vlibras.com.br), HandTalk (commercial software with a free version, https://handtalk.me/) and ProDeaf (commercial software with a free version, http://www.prodeaf.net/en-us/Home).

One can also argue that the use of sign language goes far beyond the "extra mile". Sign languages on the web are necessary for many reasons:

(full explanation below as well as reference links)

Therefore, the following additional improvements are proposed:

proposal A - issue #507: Guideline 1.1 Text Alternatives (https://www.w3.org/TR/WCAG21/#text-alternatives): Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols, human or automatic sign language interpretation, or simpler language. (See the sketch after these proposals.)

proposal C - issue #510: Glossary: https://www.w3.org/TR/WCAG21/#glossary assistive technology (as used in this document) (...) Assistive technologies that are important in the context of this document include the following:

proposal D - issue #511: Glossary (https://www.w3.org/TR/WCAG21/#glossary): human or automatic sign language interpretation: translation of one language, generally a spoken language, into a sign language, made by human interpreters or assistive technologies. True sign languages are independent languages that are unrelated to the spoken language(s) of the same country or region.
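To make proposal A concrete, here is a minimal TypeScript sketch of how an automatic interpreter could consume the text alternatives that SC 1.1.1 already requires. The `SignAvatar` interface and `signAvatar` object are hypothetical stand-ins for tools such as VLibras, HandTalk or ProDeaf; they are not part of any of these proposals.

```typescript
// Hypothetical avatar API, standing in for tools like VLibras or HandTalk.
interface SignAvatar {
  interpret(text: string): void; // renders `text` as animated sign language
}

declare const signAvatar: SignAvatar; // assumed to be provided by the AT

// Feed the avatar the text alternatives that SC 1.1.1 already requires
// pages to expose in a programmatically determinable way.
function signTextAlternatives(root: ParentNode): void {
  root.querySelectorAll("img, area, input[type=image]").forEach((el) => {
    const alt = el.getAttribute("alt") ?? el.getAttribute("aria-label") ?? "";
    if (alt.trim() !== "") {
      signAvatar.interpret(alt);
    }
  });
}

signTextAlternatives(document);
```

The only inputs the sketch needs are the `alt` and `aria-label` attributes that conforming pages already carry, which is the sense in which automatic interpretation builds on existing text alternatives.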

Arguments and motivations for providing sign language video on the Web (DEBEVC, Matjaž; KOSEC, Primož; HOLZINGER, Andreas. Improving multimodal web accessibility for deaf people: sign language interpreter module. https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137845&pCurrPk=47669):

– Demographics data.
– Literacy and access to information.
– Reading ability.
– Navigation ability.
– Multilanguage requirements.

Demographics data

According to the World Health Organization (WHO, 2017, http://www.who.int/mediacentre/factsheets/fs300/en/): “Over 5% of the world’s population – 360 million people – has disabling hearing loss (328 million adults and 32 million children). Disabling hearing loss refers to hearing loss greater than 40 decibels (dB) in the better hearing ear in adults and a hearing loss greater than 30 dB in the better hearing ear in children. The majority of people with disabling hearing loss live in low- and middle-income countries. Approximately one third of people over 65 years of age are affected by disabling hearing loss. The prevalence in this age group is greatest in South Asia, Asia Pacific and sub-Saharan Africa. 'Hard of hearing' refers to people with hearing loss ranging from mild to severe. People who are hard of hearing usually communicate through spoken language and can benefit from hearing aids, cochlear implants, and other assistive devices as well as captioning. People with more significant hearing losses may benefit from cochlear implants. 'Deaf' people mostly have profound hearing loss, which implies very little or no hearing. They often use sign language for communication.”

Literacy and access to information

According to Debevc, Kosec and Holzinger (2010): “Based on data collected by the World Federation of the Deaf (WFD), around 80% of deaf people worldwide have an insufficient education and/or literacy problems, lower verbal skills and mostly chaotic living condition. Other research shows that deaf people are often faced with the problem of acquiring new words and notions [25, 27]. Because of the many grammar differences between their mother tongue and sign language, the deaf person might be fluent in sign language while experiencing problems reading their mother tongue. According to Holt, the majority of deaf 18-year-olds in the United States have poorer reading capabilities of English in comparison to 10-year-old students who are not deaf [18]. Some studies that have examined the reading ability of deaf 16-year-olds have shown that about 50% of the children are illiterate. Of these, 22% displayed a level of knowledge equivalent to that of a 10-year-old child who is not deaf and only 2.5% of participants actually possessed the reading skills expected for their age [14]. Also, other studies in the United States by Farwell have shown that deaf people face difficulties when reading written text [11]. The average literacy rate of a high school graduate deaf student was similar to that of a non-deaf student in the third or fourth grade. Access to information is also important in cases of emergency. Past disasters around the world have shown that, at the time of an accident, people with disabilities did not receive the same assistance and appropriate information as other people did. The United Nation Convention [38] calls upon States to develop measures for emergency services (article 9 (1) (b)). Messages using video for deaf people have rapidly become one of the more popular methods for sending them information but, unfortunately, most countries’ emergency services do not allow for video communications with deaf people. The reason lies in the communication protocols, which are not compatible with each other.”

Reading ability

According to Debevc, Kosec and Holzinger (2010): “It is surprising and disappointing that many deaf and hard-of-hearing people, particularly those for whom sign language is the first language, have reading difficulties [13]. The problem arises because writing was developed to record spoken language and therefore favours those with the ability to speak. Spoken language contains phonemes which can be related to the written word by mind modelling. Due to the lack of audible components and consequently difficulties in understanding written words, this can not be done by deaf people.”

Navigation ability

According to Debevc, Kosec and Holzinger (2010): “Another motivation for the integration of sign language videos into web sites is that sign language improves the taxonomic organization of the mental lexicons of deaf people. The use of sign language on the Web would, in this case, reduce the requirements for the rich knowledge of the words and notions of another language. A knowledge of words and notions is of the utmost importance for navigation in and between web sites and for the use of hyperlinks, such as in online shopping web sites where a lot of different terms and subterms in vertical and horizontal categories appear. Unfortunately, deaf people have problems understanding the meaning of words and notions, especially when it is necessary to understand certain notions in order to correctly classify and thus understand either a word or another notion [25].”

Multilanguage requirements

According to Debevc, Kosec and Holzinger (2010): “One of the important requirements is multilanguage support, especially in Europe. For example, tourist information, governmental and emergency service web designers need to construct web sites in English, German, Italian, French and even in Hungarian. This is particularly true for small countries such as Slovenia, which are surrounded by several countries with rich language backgrounds. In some countries sign language is also recognized as an official national language, and therefore there is a strong need to include sign language translations into web sites.”

References:

DEBEVC, Matjaž; KOSEC, Primož; HOLZINGER, Andreas. Improving multimodal web accessibility for deaf people: sign language interpreter module (2010). https://online.tugraz.at/tug_online/voe_main2.getVollText?pDocumentNr=137845&pCurrPk=47669. Accessed 6 Oct. 2017.

World Health Organization. Deafness and hearing loss (2017). http://www.who.int/mediacentre/factsheets/fs300/en/. Accessed 6 Oct. 2017.

DavidMacDonald commented 6 years ago

My understanding is that avatars have a long way to go before the deaf community accept them in place of live interpreters.

GreggVan commented 6 years ago

correct

and all of the information needed to drive them is already in 1.1 (very deliberately), so that it would be there when avatars became acceptable. The rest would be done on the browser end.

Like screen readers: the goal is to make the content work with them, but not to require that content authors provide the screen readers (or, in this case, the text-to-sign-language avatars).

g

Gregg C Vanderheiden greggvan@umd.edu


edsonrufino commented 6 years ago

Dear @DavidMacDonald and @GreggVan,

I sincerely don't know if it is possible to say that deaf people do or do not accept avatars as sign language interpreters in place of humans. We have to consider that the comparison between deaf and blind people is not appropriate, because blind people typically know the alphabet, words and associated concepts, which is usually not true in the case of deaf people (as the references in my proposal have shown).

It is definitely not a question of acceptance: if there is no human sign language interpretation and you cannot read the written text, it is an insurmountable obstacle to accessing information on the web unless you have an automatic translation resource.

Thus, including an explicit mention of assistive technologies for sign language translation does not mean substituting them for human interpreters.

There are many publications about this category of assistive technology:

https://link.springer.com/article/10.1007/s10111-016-0385-z
https://dl.acm.org/citation.cfm?id=2820460
http://bd.centro.iff.edu.br/xmlui/bitstream/handle/123456789/1109/Vers%c3%a3oDefesa_marco.docx?sequence=1&isAllowed=y (Portuguese)
https://translate.google.com.br/translate?sl=pt&tl=en&js=y&prev=_t&hl=pt-BR&ie=UTF-8&u=http%3A%2F%2Fbr-ie.org%2Fpub%2Findex.php%2Fsbie%2Farticle%2Fview%2F2942&edit-text=&act=url
https://translate.google.com.br/translate?sl=pt&tl=en&js=y&prev=_t&hl=pt-BR&ie=UTF-8&u=http%3A%2F%2Fwww.ufal.br%2Fseer%2Findex.php%2Feaei%2Farticle%2Fview%2F2164&edit-text=&act=url
https://www.technologyreview.com/lists/innovators-under-35/2016/humanitarian/ronaldo-ten-rio/

Three of these applications were developed in Brazil (one public and open source, which can be adapted to national sign languages, and two commercial products with free versions). These apps are no longer a novelty.

These technologies now allow a big upgrade in the level of accessibility for deaf people. I would like to suggest that you consider these aspects.

GreggVan commented 6 years ago

thanks for your note

But I’m not sure I follow your email, so let me summarize what I was trying to say.

I agree with you that not everyone agrees on anything: some people who are deaf currently find avatars acceptable and some do not. I also agree with you that poor interpretation is better than none (which is why people use Google Translate all the time even though it doesn’t work great; it is wonderful compared to nothing), so it doesn’t matter that some do and some don’t. Those who think it is good enough to be useful should be able to have it available to them.

I don’t understand the comment about the equivalency of deaf and blind. I agree they are very different, but also similar in that both need access to text. This seems to be a side topic, though, so I’ll leave it.

I think the underlying message was lost in the above concerns. Its key points are:

1. WCAG 2.0 already requires that all information be in text form and in programmatically determinable form. This is all that is needed for sign language avatars to present all textual information on the page in sign language.

2. The sign language avatar is something that would be installed on the computer (like all AT) so that it could be used to present all text to the user, including text that is not on the internet (else how could the person use the computer if they can’t read the text there?). We also want the avatar to work on all web pages, so it is not up to the web page provider to provide the sign avatar but up to the user or the operating system.

So there didn’t seem to be any new SC required in order to accomplish the vision that you (and I and many others) have for sign language being available wherever text is available, with quality that just keeps getting better as technology improves, and with human interpretation used whenever it is available and avatars used when human interpreters are not. Like in every person’s home, anytime they want to view online (or on-computer) text.
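A minimal sketch of this user-agent approach, assuming a hypothetical `SignAvatar` interface installed on the user's machine (nothing here is required of page authors): it walks the rendered page and signs every visible text node, relying only on what Level A content already guarantees.

```typescript
// Sketch of an avatar running as assistive technology on the user's
// computer, not shipped by any individual website.
interface SignAvatar {
  interpret(text: string): void; // hypothetical sign-language renderer
}
declare const signAvatar: SignAvatar;

function signPageText(root: Node): void {
  // Walk all text nodes under `root`.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node: Node | null;
  while ((node = walker.nextNode())) {
    const text = node.textContent?.trim();
    const parentTag = node.parentElement?.tagName ?? "";
    // Skip whitespace-only nodes and script/style contents.
    if (!text || parentTag === "SCRIPT" || parentTag === "STYLE") continue;
    signAvatar.interpret(text);
  }
}

signPageText(document.body);
```

Because the avatar travels with the user agent, the same interpretation rules follow the user to every page, just as a screen reader does.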

g

Gregg C Vanderheiden greggvan@umd.edu


edsonrufino commented 6 years ago

Thank you for your response, @GreggVan!

I'll try to clarify and defend the importance of upgrading the level of SC 1.2.6 from a different perspective.

Let's consider the "Understanding" description of this SC (https://www.w3.org/TR/UNDERSTANDING-WCAG20/media-equiv-sign.html):

The intent of this Success Criterion is to enable people who are deaf or hard of hearing and who are fluent in a sign language to understand the content of the audio track of synchronized media presentations. Written text, such as that found in captions, is often a second language. Because sign language provides the ability to provide intonation, emotion and other audio information that is reflected in sign language interpretation, but not in captions, sign language interpretation provides richer and more equivalent access to synchronized media. People who communicate extensively in sign language are also faster in sign language and synchronized media is a time-based presentation.

Additionally, we have to consider incidental sounds, like music, noise and others. This is exactly why it is not simply a matter of deaf people installing assistive technology.

Again, as you explained well in issue #281: "Level AA was for everything else important, testable, generally applicable to all content. Level AAA was for things important, testable, but not generally applicable (could not be applied in all cases). In one case, it was because there were not enough sign language interpreters in existence to provide sign language interpretation for all web pages. This was for people wanting to go the extra mile."

Sign language is considered necessary. Moreover, it is now viable with sign language assistive technologies, including free and reliable options.

Conclusion: why not upgrade the level of this success criterion?

GreggVan commented 6 years ago

Because AAA talked about adding live human interpreters to the site. This is not practical for all content; there are not enough interpreters to even do that.

And if you are talking about an avatar turning text into sign language, everything the avatar needs is already at Level A. So if people meet Level A, then all the information will be in text form and/or programmatically determinable, and any sign language avatar that a person has can already provide access to all of the text on the page. It is not practical for every website to provide its own avatar (assistive technology), nor is it a good idea. A person should have the same avatar, using the same rules for interpretation, as they move around the web, just like a blind person brings their own screen reader to the task and does not have every website build a screen reader into its web page.

make sense?

g

Gregg C Vanderheiden greggvan@umd.edu


edsonrufino commented 6 years ago

Thank you one more time, @GreggVan, and I am sorry for the delay in my response.

Continuing our conversation: the proposition that accessible content is sufficient for deaf people to access information with their own sign language interpreter software makes sense only in theory. Unfortunately, this proposition faces many problems in the real world, as described below.

1. Today almost all operating systems (desktop and mobile) have built-in screen readers with voice synthesizers. This is not true for sign language avatar interpreter software. These are affordable technologies, but at this moment they are little known by many of the users who could benefit from installing them on their computers. This shows that today it is important to have websites that can show content in automated sign language.

2. In the real world, millions (probably billions) of people don't have their own computers with programs specifically chosen by them; they access the web from schools, cybercafes, workplaces and other places, many of which do not allow the installation of third-party software. Having interpretation as an option in websites themselves would be a solution (see the sketch after this list).

3. Remembering one more time: we already have a success criterion (1.2.6) that links video content on websites to the necessity of sign language interpretation, and it only lacks weight in terms of priority because it used to be impractical, as you well said. Since today it is possible with different software options, we can agree that there are no more arguments against having this criterion at the A or AA level.
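As an illustration of item 2 above: VLibras publishes an embeddable widget that gives visitors Libras (Brazilian Sign Language) interpretation with no local installation. The sketch below loads it from TypeScript; the markup and URLs follow the publicly documented VLibras embed snippet as best it can be reconstructed, so treat the details as assumptions to verify against https://vlibras.gov.br.

```typescript
// Sketch: load the VLibras widget so visitors get Libras interpretation
// without installing anything locally. Verify markup/URLs against the
// current VLibras documentation before use.
function loadVLibrasWidget(): void {
  // Container markup expected by the plugin.
  const container = document.createElement("div");
  container.setAttribute("vw", "");
  container.className = "enabled";
  container.innerHTML = `
    <div vw-access-button class="active"></div>
    <div vw-plugin-wrapper>
      <div class="vw-plugin-top-wrapper"></div>
    </div>`;
  document.body.appendChild(container);

  // Load the plugin script, then start the widget.
  const script = document.createElement("script");
  script.src = "https://vlibras.gov.br/app/vlibras-plugin.js";
  script.onload = () => {
    // The plugin exposes a global VLibras.Widget constructor.
    new (window as any).VLibras.Widget("https://vlibras.gov.br/app");
  };
  document.body.appendChild(script);
}

loadVLibrasWidget();
```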

Regards!
edson rufino de souza
edson.r.souza@ufes.br
edson.rufino@gmail.com

GreggVan commented 6 years ago

Here you go.

On Dec 8, 2017, at 10:49 PM, edson rufino de souza notifications@github.com wrote:


> Today almost all operating systems (desktop and mobile) have built-in screen readers with voice synthesizers. This is not true for sign language avatar interpreter software. These are affordable technologies, but at this moment they are little known by many of the users who could benefit from installing them on their computers. This shows that today it is important to have websites that can show content in automated sign language.

Well, this was not true until just recently. But even then, this doesn’t seem to change the argument that this function should be done by the user agent; just that it should be in mainstream user agents rather than in add-on user agents like AT.

I suspect that as soon as good sign language interpretation is available, it will be available on every OS just like all the other access features are appearing.

Which brings me to my main question. You keep referring to “automated sign language”. I was not aware that there was good automated sign language, nor that it was available in all languages.

What automated sign language are you referring to that is good enough that consumers will not complain? Is it available in all languages? We can’t create WCAG rules that can only be met in some languages.

> In the real world, millions (probably billions) of people don't have their own computers with programs specifically chosen by them; they access the web from schools, cybercafes, workplaces and other places, many of which do not allow the installation of third-party software. Having interpretation as an option in websites themselves would be a solution.

Those billions are in different languages, even within the same country. So how many sign languages would it need to be provided in?

And again, it seems that it would be more logical for those few computer operating systems to include it than to require every mom-and-pop website to buy automated sign language software or be unable to meet WCAG.

> Remembering one more time: we already have a success criterion (1.2.6) that links video content on websites to the necessity of sign language interpretation, and it only lacks weight in terms of priority because it used to be impractical, as you well said. Since today it is possible with different software options, we can agree that there are no more arguments against having this criterion at the A or AA level.

Yes, but that is at AAA, which is never required. It is advisory and says that this would be a good thing to do if you can. It is not required. 1.2.6 could never be at the A or AA level, for all of the reasons above and more.

Also, the reason it is there for audio/video is that it is hard to pull all the dialog out of a movie and feed it to a sign language avatar, while 1.1.1 at Level A ensures that it is all available when it is text.
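To make that contrast concrete: where a video already carries a caption track, a user-agent avatar could in principle read the cues through the standard TextTrack API and sign each one as it becomes active. A sketch, again with a hypothetical `SignAvatar`:

```typescript
// Sketch: feed a video's caption cues to a hypothetical sign language
// avatar via the standard TextTrack API, so the dialog of synchronized
// media does not have to be re-authored for the avatar.
interface SignAvatar {
  interpret(text: string): void;
}
declare const signAvatar: SignAvatar;

function signCaptions(video: HTMLVideoElement): void {
  for (const track of Array.from(video.textTracks)) {
    if (track.kind !== "captions" && track.kind !== "subtitles") continue;
    track.mode = "hidden"; // load the cues without rendering them on screen
    track.addEventListener("cuechange", () => {
      for (const cue of Array.from(track.activeCues ?? [])) {
        signAvatar.interpret((cue as VTTCue).text); // VTTCue carries the text
      }
    });
  }
}

const video = document.querySelector<HTMLVideoElement>("video");
if (video) signCaptions(video);
```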

I understand the problem, and I have actually provided funding from my grants to advance early sign language avatar work. We also had a joint RERC with Gallaudet University working on issues faced by deaf people, and still do today. So I am very interested in and committed to these problems. It is just that requiring every web page to have a parallel presentation in sign language is the wrong way to solve the problem. Separate but equal is always a bad idea, as we have seen. It is much better to find a way for deaf people to access the original content of the page, and the way to do that is with a user agent approach, not a parallel content approach.

That will work for all pages and for all languages (with the same restriction as with all AT: there needs to be AT in that language). But we cannot have web page author requirements that cannot be met in all languages.

Gregg


awkawk commented 6 years ago

Thank you for your comment. Since we are restricted from changing WCAG 2.0 SC and definitions in WCAG 2.1, I am marking this issue as deferred so we can discuss it at the right time, and closing it.