TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth

Does it give out a .ckpt file to use locally with our own GPU? #45

Open · ZeroCool22 opened this issue 2 years ago

ZeroCool22 commented 2 years ago

Or how can we use the trained model locally?

thx.

TheLastBen commented 2 years ago

You will need the diffusers script to get the model working locally: https://huggingface.co/blog/stable_diffusion
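
Roughly, the inference side from that post looks like this (a minimal sketch; the model folder path and prompt below are placeholders, assuming the training output was saved as a diffusers model folder):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder: point this at the folder produced by DreamBooth training
model_path = "./my-dreambooth-model"

# Load the trained weights into a pipeline on the local GPU
pipe = StableDiffusionPipeline.from_pretrained(
    model_path, torch_dtype=torch.float16
).to("cuda")

# Generate one image from a placeholder prompt and save it
image = pipe("photo of sks person on a beach").images[0]
image.save("output.png")
```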

1blackbar commented 2 years ago

This also works; he also managed to bring it down to a 10 GB GPU: https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth

ZeroCool22 commented 2 years ago

> This also works; he also managed to bring it down to a 10 GB GPU: https://github.com/ShivamShrirao/diffusers/tree/main/examples/dreambooth

Have you already tried it on Windows with WSL + Docker?

1blackbar commented 2 years ago

No, xformers won't run on Windows. Also, I wouldn't run it locally; I run multiple Colabs at once, my man.

@TheLastBen where are the images from the Gradio UI stored when I prompt with trained weights? Are they stored anywhere on the Colab drive? If not, can you save them and use the prompt name and seed in the filename, or something like that?
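
Something like this is what I mean (just a rough sketch, assuming the pipeline is called directly; `pipe` is an already loaded StableDiffusionPipeline, and the prompt and seed are placeholders):

```python
import torch

prompt = "photo of sks person, portrait"  # placeholder prompt
seed = 1234                               # placeholder seed

# Seed the generator so the result is reproducible
generator = torch.Generator("cuda").manual_seed(seed)
image = pipe(prompt, generator=generator).images[0]

# Keep the prompt and seed in the filename so each image can be traced back
image.save(f"{prompt[:40].replace(' ', '_')}_{seed}.png")
```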

ZeroCool22 commented 2 years ago

> No, xformers won't run on Windows. Also, I wouldn't run it locally; I run multiple Colabs at once, my man.
>
> @TheLastBen where are the images from the Gradio UI stored when I prompt with trained weights? Are they stored anywhere on the Colab drive? If not, can you save them and use the prompt name and seed in the filename, or something like that?

According to this repo, we can train on Windows with WSL + Docker...

https://github.com/smy20011/efficient-dreambooth

Tensorflow with GPU on Windows WSL using Docker:

https://www.youtube.com/watch?v=YozfiLI1ogY

1blackbar commented 2 years ago

Oh, give it a shot, but I don't want to tie up my own GPU.

ZeroCool22 commented 2 years ago

> Oh, give it a shot, but I don't want to tie up my own GPU.

Explain that to me: how would it block it? What do you mean, could it damage it?

1blackbar commented 2 years ago

I can't do anything else on my GPU at the same speed while it's training, come on :) Also, with 3 or more Colabs at once I can run trainings like crazy.

About the paths and all: this is perfect and the path is ready. IMO it should be the same in this repo's DreamBooth colab:

```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
from IPython.display import display

# If you want to use a previously trained model saved in gdrive,
# replace this with the full path of the model in gdrive
model_path = "/content/gdrive/Shareddrives/Dysk1blackbar/AI/models/krystian"

pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16).to("cuda")
g_cuda = None
```

From this colab https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb
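
For reference, running the loaded pipeline then looks roughly like this (a sketch continuing from the block above, not copied verbatim from the colab; prompt, seed and settings are placeholders):

```python
prompt = "photo of sks person"  # placeholder prompt
g_cuda = torch.Generator(device="cuda").manual_seed(1024)  # placeholder seed

# Run inference in mixed precision and show the results in the notebook
with autocast("cuda"):
    images = pipe(
        prompt,
        height=512,
        width=512,
        num_inference_steps=50,
        guidance_scale=7.5,
        generator=g_cuda,
    ).images

for img in images:
    display(img)
```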

TheLastBen commented 2 years ago

You can run it under Windows using Docker, preferably with the NVIDIA Docker image.

ZeroCool22 commented 2 years ago

> You can run it under Windows using Docker, preferably with the NVIDIA Docker image.

So I must activate virtualization in my BIOS and install this, correct?

https://www.youtube.com/watch?v=YozfiLI1ogY

TheLastBen commented 2 years ago

I don't think so, just install Docker and pull this image: nvcr.io/nvidia/pytorch:22.08-py3