Closed yanhuifair closed 1 year ago
It looks to me like you're using two different architectures. The VQ-VAE from this repo has a different architecture to the AutoencoderKL in your image. You need to use the same one in both cases.
What should I do if I want to use your research with Stable Diffusion? I'm an artist who doesn't know much about machine learning. Thank you very much for your guidance, thank you Akash Saravanan!
Unfortunately I haven't really had the chance to go through the Stable Diffusion code to properly understand things there, so I don't really know. My best guess at this point would be that you would have to replace AutoencoderKL with the VQ-VAE defined in this repo.
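To make that guess a little more concrete, here is a minimal sketch of what "replacing AutoencoderKL with the VQ-VAE" could look like: a thin adapter that gives a VQ-VAE the `encode`/`decode` call pattern a diffusion pipeline expects from its `vae`. Note that `TinyVQVAE`, `VQVAEAdapter`, and the layer shapes are all illustrative assumptions, not the actual classes from this repo or from diffusers; you would need to match the real signatures and latent shapes yourself.

```python
# Hedged sketch: adapting a VQ-VAE-style model to the
# vae.encode(...)/vae.decode(...) interface a Stable Diffusion
# pipeline uses. All class and method names are assumptions;
# check them against the actual repo and your diffusers version.
import torch
import torch.nn as nn

class TinyVQVAE(nn.Module):
    """Stand-in for the repo's VQ-VAE (illustrative only)."""
    def __init__(self, latent_channels=4):
        super().__init__()
        # 8x spatial downsampling, mirroring typical SD latents
        self.enc = nn.Conv2d(3, latent_channels, kernel_size=8, stride=8)
        self.dec = nn.ConvTranspose2d(latent_channels, 3, kernel_size=8, stride=8)

    def encode(self, x):
        return self.enc(x)

    def decode(self, z):
        return self.dec(z)

class VQVAEAdapter(nn.Module):
    """Exposes the VQ-VAE through the encode/decode call pattern
    a diffusion pipeline expects from its `vae` attribute."""
    def __init__(self, vqvae):
        super().__init__()
        self.vqvae = vqvae

    def encode(self, pixels):
        return self.vqvae.encode(pixels)

    def decode(self, latents):
        return self.vqvae.decode(latents)

vae = VQVAEAdapter(TinyVQVAE())
# vae.load_state_dict(torch.load("my_vqvae.pt"))  # load your trained weights here

x = torch.randn(1, 3, 512, 512)   # a 512x512 RGB image batch
z = vae.encode(x)                  # -> latent tensor
out = vae.decode(z)                # -> reconstructed image
print(z.shape, out.shape)
```

In a real swap you would also have to make sure the latent channel count and scaling match what the diffusion UNet was trained on; a mismatch there is one likely source of the kind of error shown in the screenshot.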
Thank you for your reply, I'll try that. Thank you for creating this great library and research!
When I try to train on my own sprite images, I put the images in the test, train, and val folders (each image is 512*512). After running the vq_vae.py script I get a .pt file, but when I try to use that .pt file with Stable Diffusion I get an error, as shown in the image below. How can I use this VAE in Stable Diffusion? Thank you!
There are also some screenshots; I don't know if they are useful.