Open AhmedAbdellaa opened 1 year ago
Likely the issue is that the xformers library installed by the notebook is compiled for T4 and A100 GPUs, not for a 3060: https://github.com/brian6091/xformers-wheels/releases/tag/0.0.15.dev0%2B4c06c79
Try without using xformers. You might run out of memory, in which case you might want to remove --train_text_encoder
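In a diffusers pipeline, "without xformers" just means not calling `enable_xformers_memory_efficient_attention()`. A minimal sketch (helper name is mine, not from the notebook) that only enables it when the library imports cleanly, falling back to default attention otherwise:

```python
def maybe_enable_xformers(pipe):
    """Enable xformers attention on a diffusers pipeline if possible.

    Returns True when xformers was imported and enabled, False when we
    fall back to the default attention implementation. (Hypothetical
    helper; the pipeline API call itself is the standard diffusers one.)
    """
    try:
        import xformers  # noqa: F401  # fails if the wheel doesn't match the GPU
        pipe.enable_xformers_memory_efficient_attention()
        return True
    except Exception:
        return False
```

This way a mismatched wheel (e.g. one built for T4/A100 only) degrades to slower-but-correct attention instead of silently producing garbage.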
Same here: Colab is fine and I can get good results, but when I switch to my local machine with WSL the results are bad, and it seems the training is not working at all. When I output the intermediate results, I get the same pictures every time. Do you have any idea how to resolve this?
The problem is with xformers; updating xformers by following #631 solved my problem.
Now that Facebook has released xformers 0.0.16, wheels are available on PyPI, so can you try pip install -U xformers?
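After upgrading, a quick diagnostic (my own helper, not part of the notebook) to confirm which xformers version is actually importable in the training environment:

```python
def xformers_version():
    """Return the installed xformers version string, or None if the
    package is missing (or fails to import, e.g. a bad wheel)."""
    try:
        import xformers
        return getattr(xformers, "__version__", "unknown")
    except ImportError:
        return None
```

You can also run `python -m xformers.info` from the shell for a fuller build/compatibility report.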
Thanks a lot, it works fine now after updating xformers, but the maximum training resolution is 256 with my 12 GB VRAM RTX 3060.
It should work with 512; I'm using an RTX 3060 as well. Did you follow the Colab notebook?
Yes, same as the notebook, but the GPU runs out of memory.
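For reference, one way to cut memory at 512 resolution is to enable the memory-saving options of diffusers' DreamBooth script. A sketch (flag names assumed from diffusers' `train_dreambooth.py`; model name, paths, and hyperparameters are placeholders, not the exact notebook values):

```shell
# Assumed diffusers train_dreambooth.py flags; paths/values are placeholders.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --output_dir="./dreambooth-out" \
  --instance_prompt="a photo of sks person" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=800
```

Gradient checkpointing, 8-bit Adam, and fp16 each trade a little speed or precision for a noticeable drop in VRAM; dropping `--train_text_encoder` (as suggested above) saves more.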
Describe the bug
When I train with the notebook on Colab it outputs good results, similar to the images I trained on. But when I train on my local machine with an Nvidia RTX 3060 GPU it produces very bad results, not similar at all to the images, although I installed all the libraries exactly as on Colab. I don't know why the results are different.
Reproduction
No response
Logs
No response
System Info
diffusers version: 0.11.1
Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.29
Python version: 3.8.10
PyTorch version (GPU?): 1.13.1+cu116 (True)
Huggingface hub version: 0.11.1
Transformers version: 4.25.1
Using GPU in script?: Nvidia RTX 3060
Using distributed or parallel set-up in script?: No