Closed ataa closed 1 year ago
- I get this error.
It is not an error, it is only a warning. I have not looked into it as I have not noticed any issue related to this warning so far.
- After generating the image, there's no way to free up GPU memory, and I get a CUDA memory error.
I have not run out of memory when using xformers. However, if the installation of xformers was skipped or if it failed, then:
- model_id = "stabilityai/stable-diffusion-2"
+ model_id = "stabilityai/stable-diffusion-2-base"
- num_images = 4
+ num_images = 1
when trying to generate more high-resolution images.
In any case, don't run the whole notebook repeatedly, only run the last cell if you want to generate more images:
prompt = "A pikachu fine dining with a view to the Eiffel Tower"
images = pipe(
    prompt,
    num_images_per_prompt=1,
    guidance_scale=9,
    num_inference_steps=25,
    height=image_length,
    width=image_length,
).images
media.show_images(images)
Thank you for your prompt reply,
Wow, this is a beautiful picture!
Sadly, I don't have an answer for your issue with clearing the GPU memory. All I can think of would be to "restart the runtime".
I managed to find a quick and dirty solution:
!ps -aux|grep 'python3 -m ipykernel_launcher' | awk '{print $2}' | xargs kill -9
then Ctrl+F9 to re-run all cells. This way it does not re-download the dependency packages again.
Also found the max resolution for my Colab free plan:
num_images = 1
guidance_scale=20,
num_inference_steps=20,
image_length = 1400
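For context on why `image_length` is the limiting factor: Stable Diffusion's VAE downsamples images by a factor of 8 in each spatial dimension and the UNet denoises a 4-channel latent, so memory use grows roughly with the square of the image side. A rough sketch of the latent tensor shape (the factor-of-8 and 4-channel figures are standard for Stable Diffusion; actual VRAM use also depends on attention layers and the VAE decode):

```python
def latent_shape(image_length: int, batch_size: int = 1):
    # Stable Diffusion's VAE downsamples by 8x in each spatial dimension,
    # and the UNet operates on 4-channel latents: (B, 4, H/8, W/8).
    return (batch_size, 4, image_length // 8, image_length // 8)

# Doubling image_length quadruples the number of latent elements, which is
# why high resolutions hit the Colab free-tier VRAM limit so quickly.
print(latent_shape(1400))  # (1, 4, 175, 175)
print(latent_shape(768))   # (1, 4, 96, 96)
```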
Beautiful picture! And thanks for sharing your workaround! 🤗
Thanks for making this.
I have two issues:
I get this error:
But I was able to generate images without any issue.
After generating the image, there's no way to free up GPU memory, and I get a CUDA memory error when trying to generate more high-resolution images. Someone suggested this:
and it did empty the GPU memory, but it somehow detached the CUDA device from the notebook!
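For anyone hitting the same wall: the usual Python-side recipe is to drop every reference to the pipeline, force garbage collection, and only then ask CUDA to release its cache — `torch.cuda.empty_cache()` alone cannot help while the pipeline object still holds the weights. A minimal sketch of the reference-dropping part (`DummyPipeline` is a stand-in for the real diffusers pipeline; on a GPU runtime you would follow `gc.collect()` with `torch.cuda.empty_cache()`):

```python
import gc
import weakref

class DummyPipeline:
    """Stand-in for a diffusers pipeline holding large weight buffers."""
    def __init__(self):
        self.weights = bytearray(10_000_000)  # pretend model weights

pipe = DummyPipeline()
probe = weakref.ref(pipe)  # lets us check whether the object was freed

# Delete the last reference and collect; with a real pipeline, follow this
# with torch.cuda.empty_cache() to hand the freed VRAM back to the driver.
del pipe
gc.collect()

print(probe() is None)  # True: the pipeline (and its buffers) is gone
```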