borisdayma / dalle-mini

DALL·E Mini - Generate images from a text prompt
https://www.craiyon.com
Apache License 2.0
14.75k stars 1.2k forks

improving biases when browsing internet for images #235

Open michaeljoelt opened 2 years ago

michaeljoelt commented 2 years ago

I searched "doctor" and it returned the following: [screenshot: Screenshot_20220612-104224]

For searches of generic human images, it seems the site should attempt to include more diversity. So instead of only looking up "doctor", it could also look up "Black doctor", "brown doctor", "Asian doctor", "woman doctor", etc.
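One way this suggestion could be implemented is as a prompt-augmentation step before generation. This is purely a hypothetical sketch: dalle-mini has no such feature today, and both the function name `diversify_prompt` and the modifier list are illustrative placeholders that maintainers would need to refine.

```python
import random

# Hypothetical demographic modifiers; the actual list and its wording
# would need careful review before shipping anything like this.
MODIFIERS = ["Black", "brown", "Asian", "woman", ""]  # "" keeps the prompt unchanged

def diversify_prompt(prompt: str) -> str:
    """Randomly prepend one demographic modifier to a generic person prompt."""
    modifier = random.choice(MODIFIERS)
    return f"{modifier} {prompt}" if modifier else prompt

# Each call yields one variant, e.g. "woman doctor" or just "doctor".
variant = diversify_prompt("doctor")
```

Applying this per image in a generated grid would spread the variants across the batch rather than replacing the user's prompt outright.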

davidcollini commented 2 years ago

If the site is changing anything about the input, that should be made clear to the user.

redelman431 commented 2 years ago

Right now there are already complaints from autism rights groups about some of the results that show up when searching for autism. As with any other AI technology, bias will probably be an ongoing issue that needs continuous refinement. Perhaps we could create a separate communication channel on the website for these specific types of complaints, so this GitHub page can stay focused on breaking technical issues.

Perhaps when searching disabilities, we could also tune the algorithm to surface successful people with disabilities who break stereotypes. For example, it might search "famous people with disability x" and shuffle those results into the mix. So when searching autism you won't just get children having meltdowns; you may also get Satoshi Tajiri, Elon Musk, or Jacob Barnett.


michaeljoelt commented 2 years ago

Yes, disability-related biases are another great point. I have also been noticing quite a few folks generating immature and hateful images on the Discord bot, often related to Nazis, or making fun of LGBT people, disabilities, etc. I'm not sure what the route is for reporting that, if there is one? Interesting how obsessed they are with these topics, though.

And yeah, regarding the other comment: "weighted" searches are necessary when the "default" searches are heavily biased. If you want a "white doctor" specifically, you can request it.

Also, I just searched race statistics for doctors and learned that less than 60% of them are white, so that should be reflected in these results for accuracy too, even on just a general "doctor" search, don't you think? Trying to make the AI more accurate, y'know? :P
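Reflecting real-world proportions in the output could be sketched as weighted sampling over prompt variants. This is a hypothetical illustration only: the proportions below are made up to show the mechanism, they are not real workforce statistics, and dalle-mini exposes no such knob.

```python
import random
from collections import Counter

# Illustrative (NOT real) demographic proportions for "doctor";
# actual figures would have to come from published workforce statistics.
DEMOGRAPHICS = {
    "white doctor": 0.56,
    "Asian doctor": 0.20,
    "Black doctor": 0.06,
    "Hispanic doctor": 0.07,
    "doctor": 0.11,  # unmodified prompt for the remainder
}

def sample_prompts(weights_by_variant: dict, n: int, seed: int = 0) -> Counter:
    """Draw n prompt variants with probability proportional to the given weights."""
    rng = random.Random(seed)
    variants = list(weights_by_variant)
    weights = list(weights_by_variant.values())
    return Counter(rng.choices(variants, weights=weights, k=n))

# e.g. choose variants for one 3x3 grid of generated images
counts = sample_prompts(DEMOGRAPHICS, 9)
```

With this approach a batch of generations would, on average, track whatever reference distribution the maintainers chose, which is exactly the policy question debated below.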

tx46 commented 2 years ago

I don't think it's feasible to address this in the scope of generating images using AI.

How can we tell what is a bias, what isn't a bias, what is a correct counter-bias and what is an incorrect counter-bias?

This is a deeply psychological issue, and it cannot be addressed properly by telling the model to generate at least 50% female doctors of color; that does not resolve the underlying claim that the model is biased because it generates white male doctors. Here are a few things to think about:

  1. The claim that generating white male doctors is a bias, when real-life society consists of 90% white male doctors, is false. There might be a bias, but the bias is not in copying reality; it is deep within the structures of society itself, assuming it is even there to begin with (how would we even know!?).

  2. There is no way to verify that introducing a counter-bias to generate 50% female doctors of color is any less biased than 90% white male doctors. What if, without our psychological biases, we would have 100% female doctors of color? What makes you think 50% is the right amount of diversity, other than the societal propaganda of the times we are living in? Maybe, in the future, ALL doctors will be people of color. Why should the model be allowed to generate any white doctors?

  3. Why should the model generate anything other than what it can observe? That gives MORE power to AI to control the direction society is going in, not less. 50% of doctors are not women of color. Why do you want it to generate that? To make a point that women of color can be doctors too? That's called propaganda or brainwashing. You are telling the AI to generate something that you think it should generate, to plant thoughts in the heads of whoever is using the AI. How is that any less evil than being biased or even racist?

  4. Who decides that it should be 50% doctors of color? Why can't we tell it to only generate white male doctors wearing swastikas? Is that more or less truthful than generating 50% female doctors of color? How can we tell, or measure it?

  5. Is it important for the model to be truthful? If not, then we are in essence building a propaganda tool. That gives enormous political power to anyone deciding over what the AI gets to generate. Why do you want the AI to wield that kind of power?

  6. What if we told it to only generate images of a flat earth when generating planets? If that were the consensus political opinion, should the model be told to only generate flat-earth imagery? If science says the planet is a globe, should that rule over political opinion? What if science says that there are differences between races? Should the AI be allowed to reflect that or not?

Those are just some of the questions I came up with on this topic. It's a really difficult topic, and it needs many thousands of hours of rumination by our brightest minds, not political opinions from the masses.

TL;DR: You have no f** idea of what you are getting yourself into.

DISCLAIMER: I do not know about doctor demographics and the 90% white male doctor example is only to make a point, it is not what I believe the doctor demographic to be.
