w3c / wcag3

WCAG 3
https://w3c.github.io/wcag3/

Add Outcome about Marking Up AI #90

Open SuzanneKTaylor opened 1 month ago

SuzanneKTaylor commented 1 month ago

The Introduction to the May 2024 draft of WCAG 3.0 asked "What outcomes needed to make web content accessible are missing?"

The idea of indicating that something is an AI is included in:

Indicate 3rd party content (EXPLORATORY): Third party content (AI, Advertising, etc.) is visually and programmatically indicated.

But AI might not always be a third party. Since an AI can provide unlimited attention to any one user, unlike anything we've seen before, tools should be able to block, flag, or warn users/guardians/educators about AI. An outcome like this might help:

Identify AI: AI (chat, avatar, voice) is programmatically marked as AI.

Another benefit is that unmarked AI can be considered a bad practice regardless of the details of the AI, in some cases eliminating the need to prove that whatever the AI was doing was itself a bad practice, which could be much more difficult to pinpoint and address quickly.
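
For illustration only, here is a minimal sketch of what such programmatic marking could look like in a web page. The `data-agent` attribute, its `"artificial"` value, and the `markAsAI` helper are hypothetical; no standard marker for AI participants exists today, so this only shows the general shape of the idea.

```typescript
// Hypothetical sketch: the attribute name "data-agent" and value "artificial"
// are invented for illustration; there is no standard marker for AI actors.

/**
 * Mark a conversational widget so that user agents and assistive technology
 * could detect that the participant is an AI rather than a human.
 */
function markAsAI(widget: HTMLElement, label = "AI assistant"): void {
  // Machine-readable flag that tools could use to block, filter, or warn.
  widget.setAttribute("data-agent", "artificial");
  // Human-readable indication, exposed to screen readers via the accessible name.
  widget.setAttribute("aria-label", label);
}

// Usage: flag a chat panel whose messages come from an AI model.
const chatPanel = document.querySelector<HTMLElement>("#chat-panel");
if (chatPanel) {
  markAsAI(chatPanel, "AI assistant (automated participant)");
}
```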

GreggVan commented 1 month ago

I think we should think of AI in many ways like we would think of CODE. Code isn't accessible or inaccessible in itself; it is what is created with/by it that is. Accessibility is about interface, not function or origin. So AI is not accessible or inaccessible, and we don't need guidelines about AI (for accessibility), just about what it creates, which is what we are already doing. Other than AI fairness or bias (which are not accessibility issues, but important), what exactly are we seeing as an interface issue with AI?

So, other than a fear of AI (which we should all have a healthy dose of), what exactly is the problem we see it creating for the accessibility of web content?

SuzanneKTaylor commented 1 month ago

This outcome would not be for AI-generated content; it would be for an AI that is acting the way a human acts. For example, it could be a name in IRC talking to you, or it could be an avatar talking in a Zoom meeting. For a while, most people will be able to identify the "bot," but people with disabilities may have fewer hints. For example, the avatar's hands on Zoom would look a bit off at the moment, but a blind user wouldn't have that extra hint. Eventually, no one will be able to consistently tell, and this is a problem because all sorts of pranks, phishing schemes, etc. could take place. (For example, someone could discover that the person they've been talking to, spending countless hours chatting with, and worrying about is not real at all.)