CompVis / stable-diffusion

A latent text-to-image diffusion model
https://ommer-lab.com/research/latent-diffusion-models/

License doesn't meet the Open Source Definition #110

Open jdoe0000000 opened 2 years ago

jdoe0000000 commented 2 years ago

I saw the announcement on Twitter that Stable Diffusion has been released as open source, but the license in this repo doesn't seem to meet the Open Source Definition. More specifically, the use-based restrictions outlined in paragraph 5 seem to violate section 6 of the Open Source Definition, which requires that open source licenses not discriminate against fields of endeavor.

I think it would be a good idea to add some clarification about this to the README to avoid confusion.

atypicalconsortium commented 2 years ago

I believe this is in relation to the model card:

https://huggingface.co/CompVis/stable-diffusion

I don't have the whole thing, but I did copy/paste some of it for my logs:

Direct Use

The model is intended for research purposes only. Possible research areas and tasks include:

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

Misuse, Malicious Use, and Out-of-Scope Use

Note: This section is taken from the DALLE-MINI model card, but applies in the same way to Stable Diffusion v1.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

Out-of-Scope Use

The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

Misuse and Malicious Use

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

Limitations and Bias

Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to "A red cube on top of a blue sphere".
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on the large-scale dataset LAION-5B, which contains adult material and is not fit for product use without additional safety mechanisms and considerations.

No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at https://rom1504.github.io/clip-retrieval/ to possibly assist in the detection of memorized images.
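
As an aside, that index can also be queried programmatically, which makes spot-checking whether a generated image is a near-duplicate of training data easier. A minimal sketch, assuming the clip-retrieval Python client from the same author; the endpoint URL and index name here are my assumptions and may have changed:

```python
# Sketch: query the LAION index behind https://rom1504.github.io/clip-retrieval/
# Requires: pip install clip-retrieval. Endpoint and index name are assumptions.
from clip_retrieval.clip_client import ClipClient, Modality

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # assumed public backend
    indice_name="laion5B-L-14",              # assumed index name
    modality=Modality.IMAGE,
    num_images=10,
)

# Nearest neighbors for a text prompt; client.query(image="path.png") also works.
results = client.query(text="an astronaut riding a horse")
for r in results:
    print(r["similarity"], r["url"], r["caption"])
```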

We currently provide three checkpoints, sd-v1-1.ckpt, sd-v1-2.ckpt, and sd-v1-3.ckpt, which were trained as follows:

- sd-v1-1.ckpt: 237k steps at resolution 256x256 on laion2B-en. 194k steps at resolution 512x512 on laion-high-resolution (170M examples from LAION-5B with resolution >= 1024x1024).
- sd-v1-2.ckpt: Resumed from sd-v1-1.ckpt. 515k steps at resolution 512x512 on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size >= 512x512, an estimated aesthetics score > 5.0, and an estimated watermark probability < 0.5; the watermark estimate is from the LAION-5B metadata, and the aesthetics score is estimated using an improved aesthetics estimator).
- sd-v1-3.ckpt: Resumed from sd-v1-2.ckpt. 195k steps at resolution 512x512 on "laion-improved-aesthetics", with 10% dropping of the text-conditioning to improve classifier-free guidance sampling.

Hardware: 32 x 8 x A100 GPUs
Optimizer: AdamW
Gradient Accumulations: 2
Batch: 32 x 8 x 2 x 4 = 2048
Learning rate: warmup to 0.0001 for 10,000 steps and then kept constant...

Etcetcetc."

This is probably the reason for the restrictions in the license: they are obligated to carry these terms over.

Technically? You can train your own model. The info is there. It's just going to cost you.

So this release, model included, comes with these restrictive terms.

And yes, I'd have to agree that they are restrictive, and far too vague at that. Nonetheless, they are what they are.