-
Hi, thanks for presenting this interesting paper.
Table 2 shows that Show-o achieves impressive generation ability (better than SDv1.5) with a much smaller training scale.
Could you provide some…
-
The LAION website provides embeddings and parquets which tie an embedding at an index in the array to its associated metadata. In theory, the CLIP output and the LAION embedding for the same image shou…
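The index alignment described above can be sketched roughly as follows. The array shape, metadata fields, and the cosine comparison are illustrative assumptions, not the actual contents of a LAION shard:

```python
import numpy as np

def cosine_sim(a, b):
    # LAION embeddings are typically L2-normalized; normalize defensively anyway
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Hypothetical stand-ins for one shard: an (N, D) embedding array and a
# metadata table whose row i describes the image behind embedding i.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((4, 8)).astype(np.float32)
metadata = [{"url": f"http://example.com/{i}.jpg"} for i in range(4)]

# A freshly computed CLIP embedding for image i should point in the same
# direction as row i of the shard (scale may differ before normalization).
i = 2
clip_output = embeddings[i] * 3.0
print(metadata[i]["url"], round(cosine_sim(clip_output, embeddings[i]), 4))
```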
-
### Description
I am encountering a `RuntimeError` when trying to load a checkpoint using the CLAP model on a GPU cluster. The error message indicates that there are unexpected key(s) in the `state_d…
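One frequent cause of unexpected `state_dict` keys (a general PyTorch pattern, not something confirmed for this CLAP checkpoint) is a model saved while wrapped in `DataParallel`/DDP, which prefixes every key with `module.`. A minimal sketch of stripping that prefix before loading; the key names are hypothetical:

```python
# Remap checkpoint keys by dropping a wrapper prefix such as "module.".
# After remapping, model.load_state_dict(clean_sd) (optionally with
# strict=False) often succeeds where the raw checkpoint fails.
def strip_prefix(state_dict, prefix="module."):
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

ckpt = {"module.text_branch.weight": 1, "module.audio_branch.weight": 2}
print(strip_prefix(ckpt))
```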
-
Suggestion: provide direct download links for the required online files, plus a link to the model list, to make downloading easier.
Error: Could not load the stable-diffusion model! Reason: Can't load tokenizer for 'laion/C…
-
First, thanks for sharing this interesting model. I have some issues: when I run finetune_real.sh, it seems that the connection to https://knn5.laion.ai/knn-service does not work right now. I…
-
### Question
Hi,
Which version of LAION is used for pre-training of the MLP/projection layers? Is it 400M, 2B or 5B?
Thank you.
Regards,
Yash Patel.
-
Hello, I'm considering training the AudioLDM model using my own dataset, and I'm curious about the necessity of training the CLAP component along with the LDM and VAE.
From my understanding of the …
-
Hi,
I am having trouble deriving synonyms with the prompt "What are some common ways of referring to {concept}?" from the paper; the synonyms generated with GPT-4 are different from the .json in ana…
-
Using the default workflow with default parameters, an error occurs at runtime:
Error occurred when executing UltraPixelProcess:
Error(s) in loading state_dict for CLIPTextModelWithProjection:
siz…
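Size-mismatch errors like the one above usually mean the checkpoint was produced for a different text-encoder variant than the one being instantiated. A hedged diagnostic sketch that compares expected vs. checkpoint tensor shapes to locate the offending keys; the key names and shapes below are illustrative, not the real CLIPTextModelWithProjection dimensions:

```python
# Report every key present in both dicts whose shape differs.
# Shapes are plain tuples here; with real tensors, compare t.shape instead.
def find_shape_mismatches(expected, checkpoint):
    return {
        k: (expected[k], checkpoint[k])
        for k in expected.keys() & checkpoint.keys()
        if expected[k] != checkpoint[k]
    }

expected = {"text_projection.weight": (1280, 1280), "token_embedding": (77, 1280)}
ckpt = {"text_projection.weight": (768, 768), "token_embedding": (77, 1280)}
print(find_shape_mismatches(expected, ckpt))
```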
-
### Question
Thanks for your work. Where can I download blip_laion_cc_sbu_558k?