huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

TypeError: expected np.ndarray (got numpy.ndarray) #9069

Open xiangyumou opened 2 months ago

xiangyumou commented 2 months ago

Describe the bug

import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")

With this code, it reports the following error:

 (flux) xiangyu@gpu06:~/st/flux$ python gen.py 
Loading pipeline components...:   0%|                                                                                                  | 0/7 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/scr/user/xiangyu/flux/gen.py", line 4, in <module>
    pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py", line 157, in from_pretrained
    return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in from_config
    model = cls(**init_dict)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 653, in inner_init
    init(self, *args, **init_kwargs)
  File "/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 76, in __init__
    timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)
TypeError: expected np.ndarray (got numpy.ndarray)

Reproduction

import torch
from diffusers import FluxPipeline
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")
prompt = "A cat holding a sign that says hello world"
# Depending on the variant being used, the pipeline call will slightly vary.
# Refer to the pipeline documentation for more details.
image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("flux.png")

Running this code produces the same traceback shown in the description above.

Logs

No response

System Info

On a Linux system, with the following package versions:

Package            Version
------------------ -----------
accelerate         0.33.0
Brotli             1.0.9
certifi            2024.7.4
charset-normalizer 3.3.2
diffusers          0.30.0.dev0
filelock           3.13.1
fsspec             2024.6.1
gmpy2              2.1.2
huggingface-hub    0.24.5
idna               3.7
importlib_metadata 8.2.0
Jinja2             3.1.4
MarkupSafe         2.1.3
mkl-fft            1.3.8
mkl-random         1.2.4
mkl-service        2.4.0
mpmath             1.3.0
networkx           3.3
numpy              1.26.4
packaging          24.1
pillow             10.4.0
pip                24.0
protobuf           5.27.3
psutil             6.0.0
PySocks            1.7.1
PyYAML             6.0.1
regex              2024.7.24
requests           2.32.3
safetensors        0.4.3
sentencepiece      0.2.0
setuptools         69.5.1
sympy              1.12
tokenizers         0.19.1
torch              2.4.0
torchaudio         2.4.0
torchvision        0.19.0
tqdm               4.66.4
transformers       4.43.3
triton             3.0.0
typing_extensions  4.11.0
urllib3            2.2.2
wheel              0.43.0
zipp               3.19.2

Who can help?

@sayakpaul @DN6 thanks for your help!

sayakpaul commented 2 months ago

Can you please format the code and the error in a better manner?

Additionally, which diffusers version are you using?

lmmx commented 1 month ago

I'm getting the same error:


Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading pipeline components...:  14%|█▍        | 1/7 [00:00<00:00,  7.30it/s]
Loading pipeline components...:  29%|██▊       | 2/7 [00:00<00:00, 11.48it/s]
Traceback (most recent call last):
  File "/home/louis/lab/flux/demo.py", line 4, in <module>
    pipe = FluxPipeline.from_pretrained(
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py", line 157, in from_pretrained
    return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 260, in from_config
    model = cls(**init_dict)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 653, in inner_init
    init(self, *args, **init_kwargs)
  File "/home/louis/miniconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 76, in __init__
    timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)
TypeError: expected np.ndarray (got numpy.ndarray)

My code is:

import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU. Remove this if you have enough GPU power

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-schnell.png")

My env (pip list) is

Package            Version
------------------ -----------
accelerate         0.33.0
Brotli             1.0.9
certifi            2024.7.4
charset-normalizer 3.3.2
diffusers          0.30.0.dev0
filelock           3.13.1
fsspec             2024.6.1
gmpy2              2.1.2
huggingface-hub    0.24.5
idna               3.7
importlib_metadata 8.2.0
Jinja2             3.1.4
MarkupSafe         2.1.3
mkl-fft            1.3.8
mkl-random         1.2.4
mkl-service        2.4.0
mpmath             1.3.0
networkx           3.3
numpy              1.26.4
packaging          24.1
pillow             10.4.0
pip                24.0
protobuf           5.27.3
psutil             6.0.0
PySocks            1.7.1
PyYAML             6.0.1
regex              2024.7.24
requests           2.32.3
safetensors        0.4.3
sentencepiece      0.2.0
setuptools         69.5.1
sympy              1.12
tokenizers         0.19.1
torch              2.4.0
torchaudio         2.4.0
torchvision        0.19.0
tqdm               4.66.5
transformers       4.43.3
triton             3.0.0
typing_extensions  4.11.0
urllib3            2.2.2
wheel              0.43.0
zipp               3.19.2

sayakpaul commented 1 month ago

Cc: @yiyixuxu

lmmx commented 1 month ago

It's shown up here too https://discuss.pytorch.org/t/why-do-i-get-typeerror-expected-np-ndarray-got-numpy-ndarray-when-i-use-torch-from-numpy-function/37525/7

and the advice there was to downgrade numpy. In this case 1.26.4 is the last 1.x release before the v2 upgrade, and downgrading (pip install "numpy<1.26.4") does indeed fix it.

Upgrading to numpy 2.x instead runs into a resolution conflict with accelerate's pin:

Installing collected packages: numpy                                                           
  Attempting uninstall: numpy                                                                  
    Found existing installation: numpy 1.26.4                                                  
    Uninstalling numpy-1.26.4:                                                                 
      Successfully uninstalled numpy-1.26.4                                                    
ERROR: pip's dependency resolver does not currently take into account all the packages that are
 installed. This behaviour is the source of the following dependency conflicts.                
accelerate 0.33.0 requires numpy<2.0.0,>=1.17, but you have numpy 2.0.1 which is incompatible.
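
As a quick way to narrow this down, here is a minimal check, assuming the failure really is in the torch/numpy pairing rather than in anything diffusers does: it makes the same kind of torch.from_numpy() call as the scheduler, on a plain float32 array built on the spot. If it raises the same TypeError, the scheduler is just the first place the broken call surfaces.

import numpy as np
import torch

print("numpy:", np.__version__, "| torch:", torch.__version__)

# Same kind of call that fails inside the scheduler, but on a plain float32
# array built here, so diffusers is not involved at all.
timesteps = np.linspace(1000.0, 1.0, 1000, dtype=np.float32)
t = torch.from_numpy(timesteps).to(dtype=torch.float32)
print("ok:", t.dtype, t.shape)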

sayakpaul commented 1 month ago

Ah. Ccing @DN6 here too because of the numpy version problem.

TypeError: expected np.ndarray (got numpy.ndarray)

So it seems like a problem on the torch.from_numpy() side, no? Or is there a better way to solve it other than downgrading the numpy version?
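
One possible in-code workaround, purely as an untested sketch and not an upstream fix: sidestep torch.from_numpy() at the line the traceback points to (scheduling_flow_match_euler_discrete.py, line 76) by going through a Python list, which avoids torch's ndarray type check entirely.

# Hypothetical local patch at the line shown in the traceback;
# an untested sketch, not the upstream fix.
# Before:
#     timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)
# After (timesteps is a numpy array here, so .tolist() is available):
timesteps = torch.tensor(timesteps.tolist(), dtype=torch.float32)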

yiyixuxu commented 1 month ago

I would love to fix this, but unfortunately I somehow cannot reproduce it: https://colab.research.google.com/drive/1Ejv2KMyPVdHfn_uXIbP9RAR0WTwBSqSr?usp=sharing

Did I miss anything?
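
One thing that might help compare environments: the failing setups above are conda envs whose pip lists include the mkl-* numpy stack, so one guess (only a guess) is that the difference lies in where numpy and torch come from rather than in the diffusers version. If anyone hitting this can run the small provenance check below in the failing environment and paste the output, it would be easy to compare against the Colab.

import numpy as np
import torch

# Print versions and install locations so a failing environment and a working
# one (e.g. the Colab above) can be compared side by side.
print("numpy", np.__version__, "from", np.__file__)
print("torch", torch.__version__, "from", torch.__file__)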

sayakpaul commented 1 month ago

This is a numpy versioning problem. See: https://github.com/huggingface/diffusers/issues/9069#issuecomment-2267596351

github-actions[bot] commented 2 weeks ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.