-
Did the author forget to release the subset with 500 benign prompts from LAION for the out-of-prompt CLIP score?
-
transformers 4.41.2
optimum-quanto 0.2.1
torch 2.3.1
Python 3.10.14
I ran this on a recent Google Cloud (GCP) VM with the NVIDIA driver set up and a basic torch sanity test passing.
I tried to quant…
-
accelerate launch diff_train.py \
--pretrained_model_name_or_path stabilityai/stable-diffusion-2-1 \
--instance_data_dir train/images_large \
--resolution=256 --gradient_accumulation_steps=1 …
-
Hi,
Could you provide details on the training procedure of this model?
How many steps were used in stage 1 (the autoencoder)? On which dataset? Is it like V1, where multiple subsets…
-
### Create the basic flow of the auth
1. Install dependencies.
2. Check how to store hashed passwords in the DB.
3. Create the API that returns an access token when the user logs in.
-
Thanks for your wonderful work!
When I tried to train a model myself, I found that the code starts downloading images from LAION. Is this process necessary? Could I use local images instead? In other words…
-
How can I solve the connection-refused problem when downloading images from https://knn5.laion.ai/knn-service?
-
This is a download issue as mentioned [here](https://modelscope.cn/datasets/iic/AnyWord-3M/feedback/issueDetail/9298)
-
Very nice job! If I use it to customize an SD model, do I need to train it on the LAION-2B dataset mentioned in your paper?
-
In the CLAP loss: `logits_per_text = logit_scale_a * text_features @ audio_features.T`
I think `logit_scale_a` should be `logit_scale_t`?
https://github.com/LAION-AI/CLAP/blob/776b7dc95f2dc71775e9dc9804876…
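A small numpy sketch of the symmetric contrastive logits the question is pointing at; the names `logit_scale_a` / `logit_scale_t` follow the snippet above, while the feature matrices and values are made up for illustration. The point is that with two separate learnable scales, `logits_per_text` and `logits_per_audio` are transposes of each other only when the same scale is applied on both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy batch of 4 paired text/audio embeddings of dimension 8
text_features = rng.normal(size=(4, 8))
audio_features = rng.normal(size=(4, 8))

# L2-normalize, as CLIP/CLAP-style contrastive losses assume
text_features /= np.linalg.norm(text_features, axis=1, keepdims=True)
audio_features /= np.linalg.norm(audio_features, axis=1, keepdims=True)

logit_scale_a = 100.0  # scale intended for the audio-side logits
logit_scale_t = 100.0  # scale intended for the text-side logits

logits_per_audio = logit_scale_a * audio_features @ text_features.T
# The line in question: the issue suggests this should use logit_scale_t
logits_per_text = logit_scale_t * text_features @ audio_features.T
```

If `logit_scale_a != logit_scale_t`, using `logit_scale_a` in both lines makes the per-text logits inconsistent with the text-side scale the model learned, which is the asymmetry the question flags.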