Closed caldwecg closed 9 months ago
I think torch 1.x and torch 2.x are both OK, as long as the version is compatible with your CUDA. Our initialization function inherits from StableDiffusionXLControlNetPipeline. Was your base_model downloaded successfully?
The error stemmed from the call to this `cuda()` method:
```python
def cuda(self, dtype=torch.float16, use_xformers=False):
    self.to('cuda', dtype)

    if hasattr(self, 'image_proj_model'):
        self.image_proj_model.to(self.unet.device).to(self.unet.dtype)

    if use_xformers:
        if is_xformers_available():
            import xformers
            from packaging import version

            xformers_version = version.parse(xformers.__version__)
            if xformers_version == version.parse("0.0.16"):
                logger.warn(
                    "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
                )
            self.enable_xformers_memory_efficient_attention()
        else:
            raise ValueError("xformers is not available. Make sure it is installed correctly")
```
I added `return self` to the end, and it stopped returning `None`. Not sure if this was a problem unique to my setup, but adding the return at the end fixed it.
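For anyone else hitting this, here is a minimal sketch (a toy class, not the real pipeline) of why chaining on a method that implicitly returns `None` leaves you with a `None` pipe, and how `return self` fixes it:

```python
# Toy class (hypothetical, stands in for the real pipeline) demonstrating
# the None-chaining pitfall described above.
class Pipe:
    def cuda_without_return(self):
        self.device = 'cuda'   # stand-in for the real move-to-device logic
        # no return statement -> the call evaluates to None

    def cuda_with_return(self):
        self.device = 'cuda'
        return self            # enables .from_pretrained(...).cuda() chaining

broken = Pipe().cuda_without_return()   # broken is None
fixed = Pipe().cuda_with_return()       # fixed is the Pipe instance
```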
Oh, I see. You should do it like this:

```python
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    base_model,
    controlnet=controlnet,
    torch_dtype=torch.float16
)
pipe.cuda()
```
Yup! Thank you!
I am able to get things running smoothly until I create the pipe from pipeline_stable_diffusion_xl_instantid.py. I believe it errors out somewhere when the following is called:

```python
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    base_model,
    controlnet=controlnet,
    torch_dtype=torch.float16
).cuda()
```

but the error is not caught and the pipe ends up as `None`. I suspect it is a versioning issue with my CUDA/xformers setup. Could someone please provide the required versions of the following:

- xformers
- cuda
- torch

Thanks in advance
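To compare versions across setups, here is a small stdlib-only sketch for reporting what is installed locally (the package names listed are assumptions about what is relevant here):

```python
# Report installed versions of the packages relevant to this issue.
# Uses importlib.metadata so it works even if a package fails to import.
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Return {package: version string or 'not installed'} for each name."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = "not installed"
    return report

for name, ver in report_versions(["torch", "xformers", "diffusers"]).items():
    print(f"{name}: {ver}")
```

The CUDA version your torch build was compiled against can additionally be read from `torch.version.cuda` once torch imports successfully.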