joshueoconnor opened 2 weeks ago
I think GIGO is a really good point. One of the challenges with AI is that it lies with confidence, which makes errors hard to detect. For example, autogenerated alt text will change every time it is created, even for the exact same image. Which one is correct? Perhaps all of them, perhaps one of them. Each time subtle details change, yet all versions may still be helpful. How to consider these issues would be good to add in.
Another example is live captioning: if 9 out of 10 words are accurate and paired with curated content, is this enough for an end user to have reliable information, or are the inaccuracies, which are presented without warning, too much of a problem for the end user?
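For illustration, the "9 out of 10 words" scenario corresponds to a word error rate (WER) of 0.1, the standard metric for caption accuracy. A minimal sketch of the usual WER computation (the example sentences are invented):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over word tokens,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution

    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of ten: WER = 0.1, yet the error ("hazy" for
# "lazy") is presented to the end user without any warning.
reference  = "the quick brown fox jumps over the lazy sleeping dog"
hypothesis = "the quick brown fox jumps over the hazy sleeping dog"
print(wer(reference, hypothesis))  # 0.1
```

The point the metric misses is exactly the concern above: a 0.1 WER says nothing about *which* word was wrong, and a single substituted word can invert the meaning of a caption.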
"As online generative AI platforms such as 'ChatGPT' continue to offer consumers the unrestricted ability to create text and images, including video and audio from a variety of inputs, [it is important to ask the question 'What is the benefit of these advances from an accessibility perspective?' This leads to a follow-on question: 'Are there potential harms or other challenges?' This is because accessibility as a discipline is fundamentally a quality issue, and the principle of GIGO (Garbage In, Garbage Out) applies very much in the context of generative AI.]"