ZeroCool22 opened this issue 2 years ago
You will need the diffusers script to get the model working locally: https://huggingface.co/blog/stable_diffusion
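A minimal sketch of what that blog post walks through (assumes diffusers, torch, and a CUDA GPU; the model id is the one used in the blog, and older diffusers versions also required `use_auth_token=True` after accepting the model license):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion from the Hugging Face Hub in half precision on the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Generate a single image and save it to disk.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```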
This also works; he also managed to bring it down to a 10GB GPU: https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth
Have you already tried it on Windows with WSL + Docker?
No, xformers won't run on Windows. Also, I wouldn't run it locally; I run multiple Colabs at once, my man.
@TheLastBen Where are the images from the Gradio UI stored when I prompt with trained weights? Are they stored anywhere on the Colab drive? Also, if not, can you save them and put the prompt and seed in the filename, or something like that?
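(If the UI does not save them automatically, here is a minimal sketch of how one could write an image out with the prompt and seed in the filename; the helper name and output directory are made up, and `image` is assumed to be a PIL image returned by the pipeline:)

```python
import os
import re

def save_with_prompt_and_seed(image, prompt, seed, out_dir="outputs"):
    # Hypothetical helper: build a filesystem-safe name from the prompt and the seed.
    os.makedirs(out_dir, exist_ok=True)
    safe_prompt = re.sub(r"[^A-Za-z0-9_-]+", "_", prompt).strip("_")[:80]
    image.save(os.path.join(out_dir, f"{safe_prompt}_{seed}.png"))
```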
According to this repo, we can train on Windows with WSL + Docker...
https://github.com/smy20011/efficient-dreambooth
TensorFlow with GPU on Windows WSL using Docker:
Oh, give it a shot, but I don't want to block my own GPU.
Explain that to me: what do you mean by "block"? Could it damage it?
I can't do anything else on my GPU at the same speed while it's training, come on :) Also, with 3 or more Colabs at once I can run trainings like crazy.
About the paths and all: this is perfect and the path is ready. IMO it should be the same in this repo's DreamBooth Colab:
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
from IPython.display import display

model_path = "/content/gdrive/Shareddrives/Dysk1blackbar/AI/models/krystian"  # If you want to use a previously trained model saved in gdrive, replace this with the full path of the model in gdrive
pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16).to("cuda")
g_cuda = None
```
From this Colab: https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
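For context, a rough sketch of how that notebook then uses the pipeline to generate images (the prompt and seed below are placeholders, not values from the notebook):

```python
# Placeholder prompt and seed for illustration.
prompt = "photo of sks person"
g_cuda = torch.Generator(device="cuda").manual_seed(52362)

# Half-precision autocast plus inference mode, then run the pipeline.
with autocast("cuda"), torch.inference_mode():
    images = pipe(
        prompt,
        num_inference_steps=50,
        guidance_scale=7.5,
        generator=g_cuda,
    ).images

for img in images:
    display(img)
```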
You can run it under Windows using Docker, preferably with the NVIDIA Docker image.
So, I must enable virtualization in my BIOS and install this, correct?
I don't think so; just install Docker and pull this image: nvcr.io/nvidia/pytorch:22.08-py3
Or, how can we use the trained model locally?
thx.