daviddao / awful-ai

😈Awful AI is a curated list to track current scary usages of AI - hoping to raise awareness
https://twitter.com/dwddao

AI chatbot taken down after it gives ‘harmful advice’ to those with eating disorders #65

Open andy-mayne opened 1 year ago

andy-mayne commented 1 year ago

https://www.independent.co.uk/tech/ai-eating-disorder-harmful-advice-b2349499.html

KOLANICH commented 1 year ago

I don't think any malfunctioning AI fits here. Of course, people who want to treat their disorders should consult someone competent in the topic or learn about it themselves, not just rely on a bot, and I am fairly sure that bots giving health advice come with warnings encouraging users to do exactly that. Nor is this a reason not to apply AI to such problems. In fact, the more important a problem is, the more reasonable it is to make consultation on it available to everyone, and applying AI increases that availability.

andy-mayne commented 1 year ago

The angle I was coming from is that the company made all of its employees redundant and replaced them with an AI bot that was evidently poorly trained for the use case. When you contact a dedicated healthcare service, such as the National Eating Disorders Association in this example, you expect it to have appropriate safeguards in place and to give you advice that is at least safe. You can add a disclaimer saying the bot should not be relied on alone, but we are talking about clinically vulnerable people who may not act on that warning.

bnord commented 1 year ago

Using AI tools that are not interpretable or explainable for this kind of health care is super dangerous. While they may expand access to care for some, they increase health risks for others.