Open nemorg22 opened 1 year ago
I'm trying to upload a .safetensors model via "Upload File(s)" to the models folder, but the file upload speed is extremely low: after an hour and a half, when 600 MB has loaded, the upload stops. The internet connection is fine. Is there another way to load the model, or a way to increase the speed?
probably a temporary outage, report back later if the issue persists
The problem still remains
try a different model and see how is the speed
It's the same with another model. The problem isn't even the low speed, but that after a long time the upload stops and doesn't go any further.
upload it to gdrive, then use gdown to download it to paperspace
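For example, a minimal sketch of that workflow, assuming a hypothetical Google Drive share link (YOUR_FILE_ID is a placeholder) and the webui models folder mentioned later in this thread:
import gdown  # pip install gdown

# Hypothetical share link of the model you uploaded to Google Drive
drive_url = "https://drive.google.com/file/d/YOUR_FILE_ID/view?usp=sharing"

# Download it directly onto the Paperspace machine; fuzzy=True lets gdown accept a share link instead of a raw file id
gdown.download(drive_url, "/notebooks/sd/stable-diffusion-webui/models/Stable-diffusion/model.safetensors", quiet=False, fuzzy=True)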
I'm also using Paperspace.
To avoid using up all my storage on models, I've opted to use the /tmp/ storage that Paperspace provides.
Adding a block to the notebook above the "Model Download/Load" block to automatically download the models simplifies this process.
You could also just paste the code into a .py file and run it that way.
For this to work, you need to change the model storage location in "Model Download/Load" to /tmp/:
Path_to_MODEL = "/tmp/"
This is the code:
import os

# Update package lists and install aria2 for fast, multi-connection downloads
update_apt = os.system("apt-get update")
get_aria2 = os.system("apt-get --yes install aria2")

# Download the models into /tmp so they don't use up the persistent storage
get_model1 = os.system("aria2c --max-connection-per-server=16 --continue=true --dir=/tmp https://civitai.com/api/download/models/63847")
get_model2 = os.system("aria2c --max-connection-per-server=16 --continue=true --dir=/tmp https://civitai.com/api/download/models/15236")
#get_ghostmix = os.system("aria2c --max-connection-per-server=16 --continue=true --dir=/tmp https://civitai.com/api/download/models/59685")
If you use this code it will automatically download the models I've been using, but you can just go to civitai.com, right-click the "Download Now" button, copy the link address for your URL, and replace the numbers at the end of the civitai lines above. This works for VAEs as well.
If you choose to download models from Hugging Face, I've noticed aria2c doesn't resolve the filename correctly, so you will need to use this syntax:
get_synthwave = os.system("aria2c --max-connection-per-server=16 --continue=true --dir=/tmp --out=snthwve_style.ckpt https://huggingface.co/PublicPrompts/Synthwave/resolve/main/snthwve%20style.ckpt")
This will output to the /tmp/ folder with the specified filename.
Also, be sure to choose the pickle/tensor file on Hugging Face, then right-click the "download" link that appears on that page to get your download URL.
Probably not the cleanest way to do this, but it gets the job done rather quickly. It adds a bit of time to machine startup, but not more than 10 minutes.
Hope this helps you get models loaded into your notebook.
In the model download cell, there is Temporary_Storage = True, so the models are stored in temporary storage by default. Also, using the model link in the same cell will download it with gdown in under 1-2 minutes. I don't see a need for the overly complicated procedure.
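For illustration only, here is a minimal sketch of what that cell's logic amounts to, assuming hypothetical MODEL_LINK and filename variables (only Temporary_Storage and the folder paths appear earlier in this thread; the notebook's actual variable names may differ):
import os
import gdown  # pip install gdown

Temporary_Storage = True  # option from the notebook's model download cell
MODEL_LINK = "https://example.com/model.safetensors"  # hypothetical direct link to a model

# Store the model in /tmp when Temporary_Storage is enabled, otherwise in the persistent webui models folder
dest_dir = "/tmp" if Temporary_Storage else "/notebooks/sd/stable-diffusion-webui/models/Stable-diffusion"
gdown.download(MODEL_LINK, os.path.join(dest_dir, "model.safetensors"), quiet=False)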
If you only want one model it works great.
Thanks, it worked.
But now I get this from time to time:
File Load Error for model.safetensors /notebooks/model.safetensors is not UTF-8 encoded
and when moving the model to the folder "/notebooks/sd/stable-diffusion-webui/models/Stable-diffusion/model.safetensors" I get this:
TypeError                                 Traceback (most recent call last)
Cell In [18], line 13
      3 Password= ""
      5 # Add credentials to your Gradio interface (optional).
      6
      7 #Ngrok_token = ""
   (...)
     11
     12 #-----------------
---> 13 configf=sdui(User, Password, "", model) if 'model' in locals() else sdui(User, Password, "", "")
     14 get_ipython().system('python /notebooks/sd/stable-diffusion-webui/webui.py configf')
TypeError: sdui() takes 3 positional arguments but 4 were given
@nemorg22 you are using an outdated notebook, copy the latest one from the folder "latest_notebook"
Yes, thank you. I already figured it out myself. But compared to free Google Colab, free Paperspace renders photos noticeably slower.
Is there a way to keep the ControlNet models in temporary storage too, as they take up so much space?
@n802de I'll see what I can do