openai / dalle-2-preview


Explicit Content context #9

Open JacobKilner opened 2 years ago

JacobKilner commented 2 years ago

I think something that needs to be kept in mind is that not all explicit content is created equal.

Using the AI to create & publish hateful content targeting certain groups, or to fabricate defamatory images of real people, is one thing, but I think consideration needs to be given to contexts where explicit imagery wouldn't be harmful & would ultimately serve a positive, meaningful or functional purpose.

Broadly speaking, violence & sexuality are fundamental themes in art across history. Art often serves as an exploration of human nature, and thus our dichotomous capacity for both love & war is represented in the pieces we create. We wouldn't put pants on Michelangelo's David, nor would we remove the disturbing violence & sexual content from a film like The Exorcist.

Let's look at the realm of concept art, for example.

If someone is creating a horror film, where sex & violence often play a role, the AI would naturally need the ability to represent these graphic scenes accordingly. Typically, a creation of this sort would be kept private until the film's release (or at least until the marketing phase), so there would be little to no harm in art of this sort being created, provided the film itself is made with the cast & crew's safety in mind.

I do see the necessity to clamp down on this content in these earlier research phases, however, while finer control is being developed for the AI's content creation systems. That said, I do believe that as this technology improves & becomes readily accessible, there will come a time when the option for explicit content will become desirable. Not for abusive purposes, but to unlock the full potential of the system as an artistic tool.

I agree that it is beyond necessary to ensure that this system cannot be abused, but I also believe we shouldn't throw the baby out with the bathwater: as this technology becomes more accessible, we should consider non-abusive, artistic uses of explicit content.

It's a balancing act, but I ultimately do not believe that denying users the ability to explore these fundamental aspects of the world & humanity will be beneficial in the long term. The systems need to improve to minimize abuse, but that should be an early-stage safety measure rather than permanent policy.

justaguywhocodes commented 2 years ago

Maybe it is possible to develop a 'reasonable person' standard for the explicit content filter guidelines? Newbie here personally, but I am eager to help find ways to explain decision-making that non-technical people would understand.

Rto12 commented 2 years ago

That realm of "art" shouldn't be considered for this type of A.I., seeing how "that" kind of content is already everywhere, with all-too-similar underlying themes; meaning or no meaning, context or none. The A.I. holds too much potential for abuse, and there will be abuse. I, for one, believe the OpenAI developers exercised discretion on this matter and did not allow exploitable material to taint the prospects of this A.I.'s useful data. I believe that form of explicit "art" should be depicted by the people who seek that desire, rather than depending on this highly advanced A.I. to visualize that repetitive theme.

JacobKilner commented 2 years ago

> That realm of "art" shouldn't be considered for this type of A.I., seeing how "that" kind of content is already everywhere, with all-too-similar underlying themes; meaning or no meaning, context or none. The A.I. holds too much potential for abuse, and there will be abuse. I, for one, believe the OpenAI developers exercised discretion on this matter and did not allow exploitable material to taint the prospects of this A.I.'s useful data. I believe that form of explicit "art" should be depicted by the people who seek that desire, rather than depending on this highly advanced A.I. to visualize that repetitive theme.

I respect your concern about abuse & believe it to be valid. There should be barriers to entry when it comes to explicit content, and plenty of safeguards to ensure that any abusive content can, at the barest of bare minimums, be detected, tracked back to the source & dealt with accordingly.

As I said in my original message, however, we shouldn't ignore the non-malicious utilities this AI could have.

My go-to example is concept art for a horror film. Independent filmmakers with little to no drawing skill would find great utility in this AI, as it would circumvent the need to rely on an expensive artist.

Again, I'm not saying that this program should be allowed to run rampant without safeguards (requirements for automatic submissions to a shared database, paywalls & binding legal consequences for abuse), but we can't deny that there are plenty of uses for explicit material that is non-abusive, non-malicious & may well have genuine artistic merit.

This isn't a black & white issue, ultimately, and I don't think we should treat it as one.

Rto12 commented 2 years ago

It's still a critical matter to even consider favoring. Luckily, for now, their content policy has officially determined the restrictions covering the kinds of content that invite abuse, which I find well justified. It is unclear whether they are working on the aforementioned safeguards, or whether they are even deliberating on explicit content at all, given that they have already deployed the program.

Furthermore, the developers' judgement established these restrictions with adequate awareness of the intent that lies ahead. It would be appalling for some tweakable, systematized setting to be conveniently obtained and exploited in order to construct explicit content, with that exploit then shared across media. Consider the offensive material that lies ahead if those safeguards are corrupted; a breach is always possible.

Granted, this program could make a lot of gifted people obsolete, particularly digital artists, by letting content creators who lack artistic talent easily generate simulated imagery from the program for more appeal on their platforms, instantly accessible. If those content creators depend so heavily on suggestive or gore-ish ideas, then they would need to draw inspiration from other resources, or they can take an anatomy course. It seems that Dall-e 2 is going to be an effective program, particularly if it's used for appeal fixated on suggestive ideas. Given the present desensitization in media, which is why communities have that sort of desire for this A.I. while disregarding the reason for regulating such content, that alone speaks sufficiently to why the restriction should stand.

With all due respect, I reiterate that I am resolutely against utilizing this influence for portraying human nature in this way. However, to each their own.