TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth
MIT License

I can't start SD in A1111 notebook ("None" in local URL) - vast.ai #2276

Open · elo0i opened this issue 1 year ago

elo0i commented 1 year ago

[Screenshot: 2023-06-26 103104]

I'm getting this error (screenshot above) when executing this cell the same way I always have. I'm on vast.ai; on RunPod it works fine.

Start Stable-Diffusion

User = ""
Password = ""

Add credentials to your Gradio interface (optional).

-----------------

```python
configf = sd(User, Password, model) if 'model' in locals() else sd(User, Password, "")
import gradio; gradio.close_all()
!python /workspace/sd/stable-diffusion-webui/webui.py $configf
```

Output:

```
2023-06-26 08:27:35,209 - ControlNet - INFO - ControlNet v1.1.227
ControlNet preprocessor location: /workspace/sd/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2023-06-26 08:27:35,283 - ControlNet - INFO - ControlNet v1.1.227
Loading weights [4c86efd062] from /workspace/sd/stable-diffusion-webui/models/Stable-diffusion/SDv1-5.ckpt
Running on local URL: https://none-3000.proxy.runpod.net/
✔ Connected
Startup time: 5.1s (import torch: 1.5s, import gradio: 0.7s, import ldm: 0.2s, other imports: 1.5s, load scripts: 0.6s, create ui: 0.3s, gradio launch: 0.2s).
Creating model from config: /workspace/sd/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying optimization: sdp... done.
Textual inversion embeddings loaded(0):
Model loaded in 3.8s (load weights from disk: 1.3s, load config: 0.2s, create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.7s, move model to device: 0.4s).
```

TheLastBen commented 1 year ago

The notebooks are designed for RunPod; they are not available on vast.ai.

elo0i commented 1 year ago

> the notebooks are designed for runpod, they are not available on vast.ai

Thank you so much for clarifying. Am I stupid, or do I remember using this on vast.ai a few weeks ago? Is there some old notebook, or any way to execute this on vast.ai? I have a custom script based on the RunPod notebook that only fails when creating the local URL for the Gradio web UI (when the instance is created on vast.ai). Do I have any options for running inference on vast.ai? I don't even need the web UI, since I just want to automate generating images from a prompt via a Python script executed on the instance.

My idea was: create an instance on vast.ai with the fast-sd-2.1.0 image, copy my inference script (based on the notebook), execute the script, and receive x images in /workspace/... . I already did that with DreamBooth, and I can run it from a script without problems, but I can't find a way to run inference with this "method" on vast.ai.

TheLastBen commented 1 year ago

Do you want to use it as an API?

elo0i commented 1 year ago

> do you want to use it as an API ?

Yes!! (I don't know if I have to use `--api` or `--nowebui`, but I use `--api`.) My script works very well on RunPod, but I need to do this on vast.ai given the limitations of the RunPod API. I just need to run SD in some way on the instance and then generate images via commands in the script (the best solution I found is launching with `--api`).
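For reference, once the web UI is running with `--api`, AUTOMATIC1111 exposes a `/sdapi/v1/txt2img` endpoint that returns base64-encoded PNGs. A minimal sketch of the "pre-written request" approach described above; the host/port, payload values, and output prefix are assumptions, adjust for your instance:

```python
import base64
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # assumed default A1111 port; adjust as needed


def build_txt2img_payload(prompt, steps=20, width=512, height=512, batch_size=1):
    """Build the JSON body for A1111's /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "batch_size": batch_size,
    }


def txt2img(prompt, out_prefix="/workspace/out"):
    """POST a prompt to the local web UI and save the returned images."""
    body = json.dumps(build_txt2img_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{API_URL}/sdapi/v1/txt2img",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # each entry in result["images"] is a base64-encoded PNG
    for i, img_b64 in enumerate(result["images"]):
        with open(f"{out_prefix}_{i}.png", "wb") as f:
            f.write(base64.b64decode(img_b64))
```

This only works when the server actually binds locally, so it sidesteps the proxy URL problem entirely: the script talks to `127.0.0.1`, not to the RunPod proxy hostname.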

EDIT (just to clarify): I get the same error both when I run the notebook shown in the screenshot and when I run the Python script based on that notebook. I tested on RunPod because I had a few $ left and couldn't understand why I was getting the "None" on vast.ai, until I tried the normal way using your notebook instead of "my" script, realized something was wrong, and then you kindly informed me about the limitations.

TheLastBen commented 1 year ago

There is a RunPod API based on this repo's training method: https://docs.runpod.io/reference/dreambooth-sd-v15

elo0i commented 1 year ago

> there is a runpod API based on this repo's training method https://docs.runpod.io/reference/dreambooth-sd-v15

Yes, I know, but that's not the problem. I already have my own DreamBooth training API running on vast.ai and it works well. The problem is that I need an inference script to generate x images from x prompt. I don't even need API "mode", but I want to do it with the custom-trained model of each user of my web app. I already know how to manage that, but I am not able to run inference on vast.ai with this repo (and I would love to use your repo, since it gives me the best results). My question is: can I create an instance on vast.ai with your Docker image (fast-sd:2.1.0) and then execute a Python script that generates x images from the prompt specified in the script? (I don't need the API or the web UI, but launching with `--api` and then making pre-written requests to the API at the end of the script is the best solution I've found.)
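If the web UI isn't actually needed, one alternative worth noting is running inference headlessly with Hugging Face `diffusers`, loading each user's trained model from its own directory; that skips Gradio and the proxy URL entirely. A rough sketch, where the model path, prompt, and output directory are all placeholders:

```python
import os


def output_paths(out_dir, prompt, n):
    """Deterministic filenames for the n generated images (pure helper)."""
    slug = "".join(c if c.isalnum() else "-" for c in prompt.lower())[:40]
    return [os.path.join(out_dir, f"{slug}-{i}.png") for i in range(n)]


def generate(model_dir, prompt, out_dir="/workspace/out", n=4):
    """Generate n images from a per-user DreamBooth model, headlessly."""
    # diffusers and a GPU are only needed here; import lazily so the
    # helper above stays usable without them.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_dir, torch_dtype=torch.float16
    ).to("cuda")

    os.makedirs(out_dir, exist_ok=True)
    paths = output_paths(out_dir, prompt, n)
    for path in paths:
        image = pipe(prompt).images[0]
        image.save(path)
    return paths


# Example (paths are placeholders):
# generate("/workspace/models/user-123", "portrait photo of sks person")
```

Since a fresh instance is created per user anyway, a script like this can run as the instance's entry command, write the images to /workspace, and exit, with no long-lived server at all.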

elo0i commented 1 year ago

Maybe the problem is that I'm stupid and I'm using repos built only for web UI use, and I should aim for more basic SD repos, since I don't need a web UI or a running API (each time a user requests an image, a new instance would be created with their own trained model).

elo0i commented 1 year ago

Oh, and another question. I tried adapting the RunPod notebook to vast.ai; would it be easier to adapt the Colab Pro notebook to work in Jupyter on vast.ai? Maybe I'm saying something stupid, but on Colab the generated URL is pure Gradio and has nothing to do with RunPod, right?

elo0i commented 1 year ago

Or does anyone know of another repo that can do what I describe above?