CompVis / stable-diffusion

A latent text-to-image diffusion model
https://ommer-lab.com/research/latent-diffusion-models/

Ethical Issues #557

Open ghost opened 1 year ago

ghost commented 1 year ago

What is going to happen to art?

You will have heard of the controversy around AI art, with artists claiming that AI is stealing their work.

If you look at the following sources, you can see for yourself that this is true, though you should do your own research as well: techcrunch.com, nbcnews.com

And yet, many people choose to ignore this fact. Why? Because they think it's pretty neat that they can produce art so quickly and easily, with very little effort involved. They can even sell it themselves. And so, it is easy to ignore everything else when art has become so convenient to make.

But why should we care? I'm not an artist, and neither are you (probably), so what difference does it make? Surely this is just a sacrifice that has to be made so that AI and technology can advance, for the sake of mankind?

Wrong.

Let's take another look at the advantages of AI art: it's quick, it's easy, it takes very little effort, and you can even sell the results.

If that hasn't made it obvious, this is what has happened:

We are selling out Art itself

Art is becoming a mass-produced good. It is no longer a reflection of the soul, or a medium to express your emotions, but rather a tool to make money.

Have a read of this article - time.com. Ammaar Reshi 'created' a children's book of his own in one weekend and published it on Amazon. Now, the problem here isn't with Reshi - who, from a brief look at his Twitter, seems to be alright - but rather with the product itself. It is a book generated entirely by AI, with art copied from artists without their consent. And it has been sold, for money.

Whilst you may not think this is such a big problem right now, it will almost certainly be one in the future. What happens when the market is flooded with poorly made, rushed novels, paintings and stories? Things are already bad enough, with the likes of Disney and other corporations churning out movies with only money in mind. They've made it hard enough to get your hands on a good movie. And if all the smaller creators follow suit using AI, we'll have nothing left.

How could we live in such a world?

If you're anything like me, you're interested in AI art because you've always been into books and TV shows, etc., and have always wanted to create something similar yourself, but never had the time or the talent to do so. Remember how those things made you feel, what they inspired in you. Do not let art like that die.

So, to go back to one of the questions we started with, is this a necessary sacrifice for technology and humanity to develop? No, because the technology we create should develop alongside humans, and help them, not smother them. If we allow this to continue, then sure, AI will become more complex and powerful, but will it be beneficial to humanity, or just a few rich people on the planet?

What can we do about it?

If there were an easy answer to this question, then I would have done it already. The best thing you can do is speak out, and raise awareness of the issue. Remind people that AI should be used to aid humanity, not exploit it for money.

Try and protest to get laws passed protecting artists' rights. Tell all your friends about this. Write stories about it. Get AI developers to work on the issue. If you're on this website, you're likely a programmer yourself, so make sure ethics are at the forefront of any programs you make.

I know that this isn't the proper place to post this. But I don't have anywhere else to post this where enough people will see it. And people need to see it.

So, once again, do not let art die. Even though we are not artists, it is a fight we need to fight.

Goodbye.

breadbrowser commented 1 year ago

ai will never be perfect. Great examples are airplanes and cars and every text-to-image model not knowing what a counter-ram is. I have disproved your thought.

ghost commented 1 year ago

Sure, AI isn't perfect and does not know what it is creating. That is why it is up to us humans to control what can be done with AI. But with unmoderated open-source models that everyone can use and make money from, that isn't possible. One thing I want to make clear is that I am not against AI art or AI, but am rather pushing for the responsible use of such technologies.

So actually, unless I've misunderstood what you've been trying to say, your statement reinforces my point. Please feel free to expand or develop your point if you feel like I haven't covered what you were asking about.

Thanks.

slymeasy commented 1 year ago

That is why it is up to us humans to control what can be done with AI.

It's amazing to me that we had people from Baidu expressing the need for this technology to remain open to the public, and it's liberal Americans who are saying it needs to be closed down and controlled by big monopolistic organizations like Google (they literally just wanted to turn it into an overpriced alternative to stock photos). DALL-E 2 has been out for a long time, and there is a reason the people who use it aren't very diverse in terms of demographics and income. It's a bad deal; it doesn't make economic sense for anybody who has to watch what they spend money on.

Anybody recommending that sort of future for generative art just flat out does not care about how the majority of people in the world live, and does not care about increasing accessibility for them.

The open model created competition that kept Midjourney honest, forced them to improve, and provided a lot of cheaper services that were more accessible to the general public.

I'm seeing a lot of people who don't normally get the opportunity to participate in this sort of bleeding edge tech making generative images. I'm seeing a lot of Afrocentric images, sneaker art, graffiti, people who have cultural ties to third world countries looking to express themselves using generative images that reflect their culture, and so on.

These "cryptobros" that I see artists complaining about so much are poor young people. You go to their crypto pages and they're lucky to have sold $25 worth of art in the last 5 months. And then they buy art from other people, so it's like they are doing trading cards or something. You tell them that they won't get rich doing that and they flip out. They are clinging to the idea that that income will eventually make them whole. It's terrible that people have so little hope that they would spend their time on something like that.

The people crafting these false narratives against generative art are influential bluechecks, using their web of influence to push a false narrative about "stealing". If the courts decide that the images can't be trained on, that's fine, but as of right now there is no legal argument for training being the same as copying, and they are cherry-picking and using extreme settings to come up with instances of "overfitting".

AI training on artwork is no different from a video of someone working in a factory being used to train a robot to do their job.

If you want to make a humanistic argument about ai taking jobs I'll accept that, just be consistent. Don't just have a sense of urgency when it's trendy/sexy and bluechecks are pushing it on Twitter.

You want to make a humanistic argument, then protect all workers.

These same people were telling people who lost their jobs to "learn to code" years back, and just last year were flippantly mentioning "low skilled workers" like truck drivers, fast food workers, and mechanics, whose jobs they thought would be threatened by AI much sooner than theirs.

It's very likely that they will not get the precedent they are looking for, in the form of some sort of legal judgment on training. Not that they do or don't have a point. I just believe the courts will not side against Facebook, Google, and the World Economic Forum and do something to hurt the Metaverse.

It's subjective; all you can do is follow the law whatever way it goes. But right now there is no law on training data, because there is no copyrighted material stored in the models, no images. So they have to come up with a new law, but until they do, these moral arguments are pretty ridiculous. I sat there and watched someone from the US Copyright Office at the Concept Art Association town hall tell them that they can't copyright a style.

They thought that Stable Diffusion was creating composite work (cut-and-paste). If Stable Diffusion spits out something that's similar enough to a copyrighted work, then normal copyright would apply, but they are claiming that a photorealistic generative image of a dog is immoral because there is artists' artwork "in the model" that was used to create that dog, even if no artists' names were used in the prompt.

They are calling them tainted models.

AI has been coming for everyone's jobs for a long time now, and these people thought that creatives would be the very last ones to be affected by it.

Make a humanistic push to save jobs, that's fine. You don't have to make AI art unethical to do that, and it doesn't matter whether training should be banned or not. Go after the money: make it so that people can't profit off of generative art by selling crypto. Regulate businesses, tell them they can't use AI in their workflows. But this paternalistic nonsense about protecting the public from themselves is shameful.

ghost commented 1 year ago

Sorry, I thought I had done enough research into this topic, but evidently not. It's hard to find proper information about the topic when many people's first response - like @breadbrowser's above - is to swear at someone rather than provide a proper response.

You've brought up many points I hadn't considered, such as how people from underrepresented places use AI art to express themselves. This was never meant to be directed against them, but rather against the bigger companies that aim only to make profit they don't even need. The main point I wanted to make was that businesses should be regulated - as you mentioned - but I guess that didn't come across as clearly in my original message.

And an additional note - I'm not doing this because it's trending on Twitter. In fact, I don't even have Twitter. In the future, I want to go into AI and CS, and I thought I should speak out about things I care about. The only reason I haven't done this before is that I previously thought I was too young and immature to properly speak out. But looking at this, I guess that hasn't changed. The issue of AI art was relevant at this time, and so I thought it would be appropriate for me to speak out.

You want to make a humanistic argument, then protect all workers.

Of course, I wish I could do this. If I do get into AI in the future, this will be my main goal. But until then, the only thing a student like me can do is speak out where I can. Seeing how this has gone, it'll probably be a while before I write anything like this again. But when I do, I'll make sure to talk about the rights of all workers, though getting anyone to do anything about it will be a very difficult task.

But I'm sure you'll still agree, @slymeasy, that AI art still has some issues that need to be solved, which is why I'll keep this issue open and not delete it, so that other people can still come here and see your response.

Thanks.

BlkLodge commented 1 year ago

Cry about it, you are wrong

slymeasy commented 1 year ago

This was never meant to be directed against them, but rather against the bigger companies that aim only to make profit they don't even need. The main point I wanted to make was that businesses should be regulated - as you mentioned - but I guess that didn't come across as clearly in my original message.

See, that's where they are being misleading, because Google's DALL-E 2 is not trained on any artists' work, just stock images that they paid to train on. And their push against generative images ratcheted up considerably after Stable Diffusion's model was leaked to the public. So they were never going to get anywhere attacking Google. It's only the open/public model that they have any chance of getting banned.

You've brought up many points I hadn't considered, such as how people from underrepresented places use AI art to express themselves.

Somebody asked me about how to do IMG2IMG SD Upscale when she was trying to compare Midjourney to Stable Diffusion and I noticed that she was making all these generative images of black women and flowers. I looked at her profile and she does have it listed that she is a "UX artist" so I know she at least has some actual talent and skill as an artist. I wouldn't be surprised to eventually see her using Invoke.ai to improve hand drawn images. Maybe that is a way that she can supplement her income in the future, I don't know.
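(For anyone following along: the "SD Upscale" flow she was asking about is basically resizing an image up and running it back through img2img at low denoising strength. Below is a minimal sketch of that idea using the Hugging Face diffusers library - the checkpoint name, file paths, prompt, and strength value are illustrative assumptions on my part, and the actual SD Upscale script additionally tiles the image, which this sketch skips.)

```python
# Minimal img2img sketch with diffusers (assumed checkpoint and parameters);
# not the exact "SD Upscale" script, which also splits the image into tiles.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed model ID
    torch_dtype=torch.float16,
).to("cuda")

# Upscale the source image first, then denoise it lightly so the original
# composition is kept while finer detail is re-synthesized.
init = Image.open("input.png").convert("RGB").resize((768, 768))

result = pipe(
    prompt="portrait of a woman surrounded by flowers, highly detailed",
    image=init,
    strength=0.3,        # low strength preserves the original layout
    guidance_scale=7.5,
).images[0]
result.save("upscaled.png")
```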

That's something that would be prohibitively expensive on DALL-E 2, if Google was allowed to remain the only show in town like they were before Stable Diffusion was released out into the wild.

I know someone who uses his pension to fund his dream of running a very niche internet service business. He lost his job during the pandemic lockdown, had to take his pension/SS early, and is desperate to leave some sort of business behind for his family. He doesn't have ANY technical skills himself and he is on an extremely low budget. So he goes to these outsourcing websites and hires a lot of people from third world countries.

So he would give these people FTP details for his site and they would say that the FTP connection info didn't work.

It turns out that these people were doing all this work from some sort of internet cafe, on a locked down computer, in a third world country that didn't allow them to install a program like Filezilla. He eventually came up with a solution and they got the work done.

You've got some of the brightest minds in the world working in third world countries as outsourced labor and virtual assistants, making what seems to us like a very small amount of money, but then you can see how that money allows them to live a comfortable life in their native country (much better than the person who's paying them).

No one could have ever imagined these sorts of scenarios when the web was invented. And so if they destroy the open model, we will never know who would miss out.

I don't know what they could or couldn't do with generative art, but I know that if it's left up to tone-deaf Google, we'll never get a chance to find out. I'm sure someone from Google reached out to TechCrunch to seed the original article (the one you linked to) attacking the release of the open model. They couldn't care less about making DALL-E 2 more affordable for people, but they want to take away the more affordable option.

And an additional note - I'm not doing this because it's trending on Twitter. In fact, I don't even have Twitter.

You are repeating a narrative that originated from people who are a part of the Concept Art Association. That's where those articles you linked to got their talking points. They were able to spread their perspective on the topic because they are influential bluechecks.

That's why all the links you posted promote a lopsided narrative.

I see these exact same talking points coming out of the mouths of people like Luke from LinusTechTips and a lot of other bluechecks. That point of view is not something that someone would come to naturally. It's a political spin created because generative art threatens jobs. It's not a legitimate argument.

In the future, I want to go into AI and CS, and I thought I should speak out about things I care about. The only reason I haven't done this before is that I previously thought I was too young and immature to properly speak out. But looking at this, I guess that hasn't changed. The issue of AI art was relevant at this time, and so I thought it would be appropriate for me to speak out.

Right, so you don't know where you would fall, professionally. You could end up working on a generative 3D algorithm for a movie studio or the Metaverse. They've got a long way to go between generative AI art and generative-AI-created VR experiences and movies. That's a moonshot that will require them to hire a lot of programmers, so it might be what drives them to hire someone like you. You might be the last generation of programmers that actually gets hired somewhere and ends up with actual experience working for a company. The big moonshot before AI takes over.

Of course, I wish I could do this. If I do get into AI in the future, this will be my main goal. But until then, the only thing a student like me can do is speak out against what they can. Seeing how this has gone, it'll probably be a while before I write anything like this again. But when I do, I'll make sure to talk about the rights of all workers. But getting anyone to do anything about it will be a very difficult task.

As someone who has a chance to work in the ai/cs space you will eventually have conflicting interests with those people who want to halt technological advancement for humanistic reasons.

As someone who generally likes GPUs, games, and now generative art, I was taken aback when I heard that, under a communist co-op system, a group of soccer moms would have to vote on whether or not it would make sense for the community to invest the billions of dollars of research and development that it would take to create better GPUs and CPUs in the future (there's no fully private industry under that system).

If you think about how much money it takes for them to develop a gaming console, most people would say that money would be better spent on schools and other resources. Same with professional athletes: I've heard someone say that they should be paid just enough to get up and down the court.

It sounds great to make these sorts of performative statements about protecting people, but when you end up being the one who has to sacrifice a whole life and career as an AI/CS programmer because laws are enacted to protect a person like ME, then you will change your tune, lol.

You could easily end up on the exact opposite side of this issue in 5-10 years when it threatens your ai/cs job.

But I'm sure you'll still agree, @slymeasy, that AI art still has some issues that need to be solved, which is why I'll keep this issue open and not delete it, so that other people can still come here and see your response.

I don't know that it does. "Solving" sounds like making things inaccessible to most people. I definitely wouldn't have expected or wanted you to delete anything.

I'm glad we could have a civil discussion on the topic.

someordinaryidiot commented 1 year ago

A little question: isn't AI art just the continuation of a trend of making art easier? A few decades ago people had to draw on expensive canvas with expensive colours, but it became cheaper, which, it could be argued, pushed out a lot of smaller artists, since pictures could be produced faster. This trend continued with computers and their drawing programs. You could even argue that 3D graphics aren't ethical, since it's faster to create animations with them, which could push out hand-drawn animation.

kuso-ge commented 1 year ago

When it comes to using copyrighted media, we have fair use, parody, and satire. The reason the devs behind Stable Diffusion aren't being sued is partly because a legal framework already exists to protect people whose use of copyrighted work is transformative. It's the same legal framework that allows YouTube creators to use copyrighted content such as movie clips, music, images, titles, etc.

You take that away, and millions of YouTube creators will go under.

The actual AI never contains any of the copyrighted images, but the training entails using some copyrighted content, considering the billions of images it used. Not to mention, there's already precedent in favor of this kind of use of copyrighted content: Authors Guild v. Google.

Styles cannot be copyrighted, because plenty of artists have very similar styles, almost indistinguishable from one another. Because of this, the AI can still be trained on a particular style even if an artist who uses that style opts their work out of the AI training. And people don't have to use that particular artist's style, because they can simply use embeddings (see the sketch below).
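(Side note: "embeddings" here usually means textual inversion - a tiny learned token vector, not a copy of anyone's images. Here is a minimal sketch with the diffusers library, where the checkpoint, the embedding file, and the token are placeholders I made up for illustration.)

```python
# Minimal textual-inversion sketch with diffusers; "style-embedding.bin" and
# "<some-style>" are hypothetical placeholders, not real assets.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed model ID
    torch_dtype=torch.float16,
).to("cuda")

# Attach the learned token to the tokenizer/text encoder. The embedding file
# is only a few kilobytes of vectors, not any training image.
pipe.load_textual_inversion("style-embedding.bin", token="<some-style>")

image = pipe("a city street at dusk, in <some-style> style").images[0]
image.save("styled.png")
```

Embedding files like this are typically only a few kilobytes, which is why they are usually shared separately from the model checkpoint itself.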

Art is becoming a mass-produced good. It is no longer a reflection of the soul, or a medium to express your emotions, but rather a tool to make money.

This is a fallacy: people still make pottery, sculptures, paintings, drawings, etc. for the sake of making them - this is what art should really be. The only effect of AI in the art space isn't taking away the meaning of art but taking away some people's ability to make money from art. It doesn't devalue art; in fact, it makes making art more emotional and spiritual, because you don't attach a monetary value to it. You are just making art for the sake of making your vision tangible, whether to share it or admire it by yourself.

Everyone will be using AI to make their visions appear in image form; it doesn't have to come through another person's interpretation.

The only real reason we use these arguments is that people are afraid the AI might take their livelihood. However, while artists might get displaced, it's not the AI that will replace them but other humans who can use the AI effectively. This is where the outrage comes from. We will say anything to devalue whatever these people make - that the art from people who use AI to make images is fake, unoriginal, ugly, not genuine, or stolen - because we are afraid these people will replace us, not because the accusations are true. You can't prove these accusations at all, but people will still say them.

Whilst you may not think this is such a big problem right now, it will almost certainly be one in the future. What happens when the market is flooded with poorly made, rushed novels, paintings and stories? Things are already bad enough, with the likes of Disney and other corporations churning out movies with only money in mind. They've made it hard enough to get your hands on a good movie. And if all the smaller creators follow suit using AI, we'll have nothing left.

If you've dabbled in the indie gaming space, you'll know that saturation with mass-produced, low-quality projects is the norm. Did that affect the ability of studios to make AAA games? Of course not. Also, why would you have nothing left, when you, the user, are now empowered to realize your own vision? You don't have to force yourself to watch the crap that Disney churns out. When this technology matures enough, you will be able to make your own movies, using AI to make realistic 3D models, AI to write your script and story, AI to animate your characters, etc., and all of these AIs already exist, albeit in their infancy (Point-E, GPT-3/ChatGPT, SD, etc.). The AI simply removes the labor, but the human is still the director.

So, to go back to one of the questions we started with, is this a necessary sacrifice for technology and humanity to develop? No, because the technology we create should develop alongside humans, and help them, not smother them. If we allow this to continue, then sure, AI will become more complex and powerful, but will it be beneficial to humanity, or just a few rich people on the planet?

If legislation to regulate AI passes, it will likely only belong to a few rich people. In fact, we should be making sure it's available to everyone, not pushing regulation or restriction on its use.

hackthedev commented 9 months ago

Post it on Reddit or something; this is GitHub, it's kinda out of place. Thanks.

RosyMapleMoth commented 9 months ago

GitHub is the wrong spot to post this.

But I will add that whether AI art is transformative with regard to fair use is not clear. The assumption of many is that it is in fact transformative, but this has not been tested in court.