Hi guys, I tried to run the following code, but the pipeline components couldn't be loaded and no error was shown.
```python
import torch
from diffusers import FluxPipeline

device = (
    "mps"
    if torch.backends.mps.is_available()
    else "cuda"
    if torch.cuda.is_available()
    else "cpu"
)

print("show pipe lines")
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
print("load pipeline")
# Save some VRAM by offloading the model to CPU. Remove this if you have enough GPU memory.
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
print("draw")
image.save("gc-img/shanshui_pil_4_246-2.webp")
```
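To narrow down where loading stalls, one option is to switch diffusers to verbose logging before calling `from_pretrained`, so there is more output while the components load. A minimal sketch (assuming diffusers' logging utilities; `set_verbosity_info` comes from `diffusers.utils.logging`, the rest mirrors the script above):

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import logging

# Print info-level messages while the pipeline is loaded, instead of staying silent.
logging.set_verbosity_info()

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
```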
This is what shows up in the console: