jhwang7628 opened this issue 1 year ago
Hey! I had the same experience on 8 GB and thought it was normal, but the diffusers version ran fine; I could even do it under 6 GB. Try diffusers.
I might get around to trying to load it with reasonable memory consumption; there must be an unneeded `model = model.` somewhere.
Thanks. For reference, I believe this is the diffusers version you are referring to? https://github.com/timothybrooks/instruct-pix2pix#instructpix2pix-in--diffusers
Yes, correct. Do try other samplers as well; I've found deis, dpm_single, dpm_multi, and KDPM2 quite interesting too.
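For anyone hunting for these by name: the sampler shorthands above map (I believe) onto diffusers scheduler classes roughly like this, and any of them can be swapped in the same way the Euler ancestral one is below, via `pipe.scheduler = SchedulerClass.from_config(pipe.scheduler.config)`:

```python
# Hedged mapping from the shorthand sampler names to the diffusers
# scheduler classes I believe they correspond to; the diffusers class
# names are real, the shorthand keys are informal.
SAMPLERS = {
    "deis": "DEISMultistepScheduler",
    "dpm_single": "DPMSolverSinglestepScheduler",
    "dpm_multi": "DPMSolverMultistepScheduler",
    "KDPM2": "KDPM2DiscreteScheduler",
}

for shorthand, cls_name in SAMPLERS.items():
    print(f"{shorthand} -> diffusers.{cls_name}")
```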
When trying the default instructions for the diffusers version, I got the following error:
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /sensei-fs/users/juiwang/code/instruct-pix2pix/test.py:5 in <module> │
│ │
│ 2 from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteSche │
│ 3 from PIL import Image │
│ 4 │
│ ❱ 5 model_id = "timbrooks/instruct-pix2pix" │
│ 6 pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torc │
│ 7 pipe.to("cuda") │
│ 8 pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) │
│ │
│ /opt/conda/envs/ip2p/lib/python3.8/site-packages/diffusers/pipelines/pipeline_utils.py:686 in │
│ from_pretrained │
│ │
│ 683 │ │ │ │ # else we just import it from the library. │
│ 684 │ │ │ │ library = importlib.import_module(library_name) │
│ 685 │ │ │ │ │
│ ❱ 686 │ │ │ │ class_obj = getattr(library, class_name) │
│ 687 │ │ │ │ importable_classes = LOADABLE_CLASSES[library_name] │
│ 688 │ │ │ │ class_candidates = {c: getattr(library, c, None) for c in importable_cla │
│ 689 │
│ │
│ /opt/conda/envs/ip2p/lib/python3.8/site-packages/transformers/utils/import_utils.py:865 in │
│ __getattr__ │
│ │
│ 862 │ │ │ module = self._get_module(self._class_to_module[name]) │
│ 863 │ │ │ value = getattr(module, name) │
│ 864 │ │ else: │
│ ❱ 865 │ │ │ raise AttributeError(f"module {self.__name__} has no attribute {name}") │
│ 866 │ │ │
│ 867 │ │ setattr(self, name, value) │
│ 868 │ │ return value │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: module transformers has no attribute CLIPImageProcessor
Here's my full code:
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
from PIL import Image
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
image = Image.open("./imgs/example.jpg")
images = pipe("turn him into cyborg", image=image).images
images[0].show()
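For what it's worth, I believe that `AttributeError` means the installed transformers build predates `CLIPImageProcessor` (older releases only exposed `CLIPFeatureExtractor`), so upgrading transformers should resolve it. A quick stdlib-only check, with a hypothetical helper name:

```python
import importlib

def has_clip_image_processor() -> bool:
    """Return True if the installed transformers exposes CLIPImageProcessor.

    Older transformers releases lacked this class, which is (I believe)
    what triggers the AttributeError in the traceback above. If this
    returns False, try: pip install -U transformers
    """
    try:
        mod = importlib.import_module("transformers")
    except ImportError:
        return False  # transformers not installed at all
    return hasattr(mod, "CLIPImageProcessor")

print(has_clip_image_processor())
```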
Hello, I get the same problem. I will try the diffusers version, thanks!
RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 11.73 GiB total
capacity; 8.70 GiB already allocated; 50.25 MiB free; 8.85 GiB reserved in total by PyTorch)
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid
fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I ran out of memory when running the demo out of the box.
I am on a machine with 4 V100s (16 GB each), so that doesn't feel right...
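Two things that may help with the OOM above: diffusers pipelines support `pipe.enable_attention_slicing()` to trade speed for memory, and the error message itself suggests setting `max_split_size_mb`. The latter has to be set before torch initializes CUDA; a minimal sketch, assuming the value 128 purely as an illustrative starting point:

```python
import os

# Set the allocator hint from the error message *before* importing torch,
# so the CUDA caching allocator picks it up. max_split_size_mb=128 is an
# illustrative value, not a tuned recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # would happen after the env var is set
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```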