huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

PackageNotFoundError: No package metadata was found for bitsandbytes #3194

Closed EnricoBeltramo closed 1 year ago

EnricoBeltramo commented 1 year ago

Describe the bug

I have a working configuration that loads a text2img diffusers model with diffusers 0.12.1. When I switch to a diffusers version >= 0.13.0, I get the error: PackageNotFoundError: No package metadata was found for bitsandbytes

Have some dependencies changed?

Reproduction
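A minimal sketch of the kind of call that fails (the pipeline class and checkpoint id below are just placeholders, not the exact code from my setup):

# Sketch: load a text2img pipeline the way the traceback below suggests.
# The checkpoint id is a placeholder; any Stable Diffusion checkpoint should do.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # placeholder model id
)
# With diffusers >= 0.13 and only a bitsandbytes-cudaXXX wheel installed,
# loading the text encoder through transformers raises
# PackageNotFoundError: No package metadata was found for bitsandbytes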

Logs

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /opt/conda/lib/python3.7/site-packages/importlib_metadata/__init__.py:381 in from_name           │
│                                                                                                  │
│   378 │   │   if not name:                                                                       │
│   379 │   │   │   raise ValueError("A distribution name is required.")                           │
│   380 │   │   try:                                                                               │
│ ❱ 381 │   │   │   return next(cls.discover(name=name))                                           │
│   382 │   │   except StopIteration:                                                              │
│   383 │   │   │   raise PackageNotFoundError(name)                                               │
│   384                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
StopIteration

During handling of the above exception, another exception occurred:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /tmp/ipykernel_72/4245135796.py:2 in <module>                                                    │
│                                                                                                  │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_72/4245135796.py'                           │
│                                                                                                  │
│ /tmp/ipykernel_72/36998241.py:86 in loadModelText2Img                                            │
│                                                                                                  │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_72/36998241.py'                             │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/diffusers/pipelines/pipeline_utils.py:1037 in             │
│ from_pretrained                                                                                  │
│                                                                                                  │
│   1034 │   │                                                                                     │
│   1035 │   │   Args:                                                                             │
│   1036 │   │   │   slice_size (`str` or `int`, *optional*, defaults to `"auto"`):                │
│ ❱ 1037 │   │   │   │   When `"auto"`, halves the input to the attention heads, so attention wil  │
│   1038 │   │   │   │   `"max"`, maxium amount of memory will be saved by running only one slice  │
│   1039 │   │   │   │   provided, uses as many slices as `attention_head_dim // slice_size`. In   │
│   1040 │   │   │   │   must be a multiple of `slice_size`.                                       │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/diffusers/pipelines/pipeline_utils.py:445 in              │
│ load_sub_model                                                                                   │
│                                                                                                  │
│    442 │   │   │   │   To have Accelerate compute the most optimized `device_map` automatically  │
│    443 │   │   │   │   more information about each option see [designing a device                │
│    444 │   │   │   │   map](https://hf.co/docs/accelerate/main/en/usage_guides/big_modeling#des  │
│ ❱  445 │   │   │   low_cpu_mem_usage (`bool`, *optional*, defaults to `True` if torch version >  │
│    446 │   │   │   │   Speed up model loading by not initializing the weights and only loading   │
│    447 │   │   │   │   also tries to not use more than 1x model size in CPU memory (including p  │
│    448 │   │   │   │   model. This is only supported when torch version >= 1.9.0. If you are us  │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py:2177 in from_pretrained    │
│                                                                                                  │
│   2174 │   │   use_safetensors = kwargs.pop("use_safetensors", None if is_safetensors_available  │
│   2175 │   │                                                                                     │
│   2176 │   │   if is_bitsandbytes_available():                                                   │
│ ❱ 2177 │   │   │   is_8bit_serializable = version.parse(importlib_metadata.version("bitsandbyte  │
│   2178 │   │   else:                                                                             │
│   2179 │   │   │   is_8bit_serializable = False                                                  │
│   2180                                                                                           │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/importlib_metadata/__init__.py:832 in version             │
│                                                                                                  │
│   829 │   :return: The version string for the package as defined in the package's                │
│   830 │   │   "Version" metadata key.                                                            │
│   831 │   """                                                                                    │
│ ❱ 832 │   return distribution(distribution_name).version                                         │
│   833                                                                                            │
│   834                                                                                            │
│   835 _unique = functools.partial(                                                               │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/importlib_metadata/__init__.py:805 in distribution        │
│                                                                                                  │
│   802 │   :param distribution_name: The name of the distribution package as a string.            │
│   803 │   :return: A ``Distribution`` instance (or subclass thereof).                            │
│   804 │   """                                                                                    │
│ ❱ 805 │   return Distribution.from_name(distribution_name)                                       │
│   806                                                                                            │
│   807                                                                                            │
│   808 def distributions(**kwargs):                                                               │
│                                                                                                  │
│ /opt/conda/lib/python3.7/site-packages/importlib_metadata/__init__.py:383 in from_name           │
│                                                                                                  │
│   380 │   │   try:                                                                               │
│   381 │   │   │   return next(cls.discover(name=name))                                           │
│   382 │   │   except StopIteration:                                                              │
│ ❱ 383 │   │   │   raise PackageNotFoundError(name)                                               │
│   384 │                                                                                          │
│   385 │   @classmethod                                                                           │
│   386 │   def discover(cls, **kwargs):                                                           │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
PackageNotFoundError: No package metadata was found for bitsandbytes

System Info

EnricoBeltramo commented 1 year ago

I fixed it with a little workaround. In my env I use DreamBooth too, and to make it work I need to specify the CUDA version, i.e.: pip install bitsandbytes-cuda110. This wasn't an issue for diffusers 0.12, but it raises an error on >= 0.13.

For the moment I fixed it by installing both versions of bitsandbytes: pip install bitsandbytes-cuda110 bitsandbytes

Is there a better solution?

patrickvonplaten commented 1 year ago

Hey @EnricoBeltramo,

Sadly, I cannot reproduce the issue.

chiragjn commented 1 year ago

I know this is not diffusers related, but I get the same issue with transformers with these two installed:

bitsandbytes-cuda117==0.26.0.post2
transformers[audio,deepspeed,ftfy,onnx,sentencepiece,timm,tokenizers,video,vision]==4.28.1

and without bitsandbytes installed I run into the same issue, because bitsandbytes and bitsandbytes-cuda117 behave as two different packages:

 File "/opt/conda/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "/opt/conda/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2177, in from_pretrained
    is_8bit_serializable = version.parse(importlib_metadata.version("bitsandbytes")) > version.parse("0.37.2")
  File "/opt/conda/lib/python3.8/importlib/metadata.py", line 530, in version
    return distribution(distribution_name).version
  File "/opt/conda/lib/python3.8/importlib/metadata.py", line 503, in distribution
    return Distribution.from_name(distribution_name)
  File "/opt/conda/lib/python3.8/importlib/metadata.py", line 177, in from_name
    raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: bitsandbytes

Installing both fixes the issue for me.
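A small diagnostic sketch of the mismatch (assuming an environment with only bitsandbytes-cuda117 installed): importing the module succeeds, so is_bitsandbytes_available() returns True, but the distribution metadata is registered under a different name, so the version lookup in modeling_utils.py fails.

# Diagnostic sketch, assuming only the bitsandbytes-cuda117 wheel is installed.
import importlib.metadata
import importlib.util

# The cuda-specific wheel still provides a module named "bitsandbytes",
# so the availability check in transformers passes.
print(importlib.util.find_spec("bitsandbytes") is not None)  # True

# But its distribution metadata lives under "bitsandbytes-cuda117",
# so asking for the plain "bitsandbytes" distribution raises the error above.
try:
    importlib.metadata.version("bitsandbytes")
except importlib.metadata.PackageNotFoundError:
    print(importlib.metadata.version("bitsandbytes-cuda117"))  # e.g. 0.26.0.post2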

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

vladimircape commented 1 year ago

I also have this problem.

patrickvonplaten commented 6 months ago

Can someone add a link to a Google Colab that reproduces this issue? I'm using PyPI (and not conda) and have PyTorch 2.x installed. I cannot reproduce the issue.

chiragjn commented 6 months ago

I think it does not matter now, because bitsandbytes ships with libs for all CUDA versions and one is picked according to whatever torch is using or is installed on the system. So in practice the bitsandbytes-cudaXXX packages are no longer needed.

RyanKoech commented 6 months ago

@patrickvonplaten Experiencing it running this repo.

VikramTiwari commented 5 months ago

Experiencing it while trying to run this finetuning example: https://huggingface.co/blog/g-ronimo/phinetuning

I have created a small sample colab for folks to repro: https://colab.research.google.com/drive/1e3vfyQx8HCnAiYpmzaca7xH545uvtCC9?usp=sharing

ahyunsoo3 commented 5 months ago

@VikramTiwari You can easily solve the problem by installing bitsandbytes:

pip install bitsandbytes

I think the unfamiliar error message confused people.
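A quick check (just a sketch; the version string will differ) that the fix took effect is to query the same metadata that transformers and diffusers look up:

# After pip install bitsandbytes, the plain distribution name resolves again.
import importlib.metadata

print(importlib.metadata.version("bitsandbytes"))  # e.g. "0.43.1"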

TanetiSanjay commented 5 months ago

Try using

pip install bitsandbytes

I ran into the same error and this command fixed it for me.

Shorya22 commented 2 months ago

Hello All,

I have done a simple text summarization project using a T5 model. I successfully fine-tuned the model and ran inference on it in a notebook. I have bitsandbytes, accelerate, trl, and the rest installed.

But when I push it to Hugging Face and run inference through the serverless API, I get the error "No package metadata was found for bitsandbytes".

Please suggest how I can resolve this issue.

UnauthorizedShelley commented 1 month ago

Try using

pip install bitsandbytes

I ran into the same error and this command fixed it for me.

I don't know why but this really works!!!

Shorya22 commented 1 month ago

Yes, because we need to install bitsandbytes before we can use quantization for LLMs.
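For example, a minimal sketch of the kind of quantized load that needs bitsandbytes at runtime (the checkpoint id and the 8-bit setting are just placeholders for illustration):

# Sketch: loading a seq2seq model with 8-bit quantization via transformers.
# bitsandbytes must be installed for BitsAndBytesConfig to work at runtime.
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "t5-small",                        # placeholder checkpoint
    quantization_config=quant_config,  # requires bitsandbytes + accelerate
    device_map="auto",
)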
