TencentARC / InstantMesh

InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
Apache License 2.0

ImportError: cannot import name 'split_torch_state_dict_into_shards' from 'huggingface_hub' #119

Open ylchan87 opened 4 months ago

ylchan87 commented 4 months ago

Problem

Spinning up the Docker container at commit 7fe9562 (the latest commit as of 8 Jul 2024) fails with:

Traceback (most recent call last):
  File "/workspace/instantmesh/app.py", line 12, in <module>
    from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/diffusers/__init__.py", line 3, in <module>
    from .configuration_utils import ConfigMixin
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/diffusers/configuration_utils.py", line 34, in <module>
    from .utils import (
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/diffusers/utils/__init__.py", line 21, in <module>
    from .accelerate_utils import apply_forward_hook
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 24, in <module>
    import accelerate
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/accelerate/__init__.py", line 16, in <module>
    from .accelerator import Accelerator
  File "/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/accelerate/accelerator.py", line 34, in <module>
    from huggingface_hub import split_torch_state_dict_into_shards
ImportError: cannot import name 'split_torch_state_dict_into_shards' from 'huggingface_hub' (/workspace/miniconda3/envs/instantmesh/lib/python3.10/site-packages/huggingface_hub/__init__.py)

Quick fix

Pin the accelerate package to 0.31.0 in requirements.txt

i.e.

$ git diff requirements.txt
diff --git a/requirements.txt b/requirements.txt
index aa68bbd..7b606cd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,7 +5,7 @@ einops
 omegaconf
 torchmetrics
 webdataset
-accelerate
+accelerate==0.31.0
 tensorboard
 PyMCubes
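A quick way to check which side of the fix you are on (a sketch, not from the thread): test whether the installed huggingface_hub actually exposes the symbol that newer accelerate versions try to import.

```python
import importlib

def has_shard_helper():
    """Return True/False depending on whether the installed huggingface_hub
    exposes split_torch_state_dict_into_shards (added in 0.23.0),
    or None if the package is not installed at all."""
    try:
        hub = importlib.import_module("huggingface_hub")
    except ImportError:
        return None
    return hasattr(hub, "split_torch_state_dict_into_shards")

print(has_shard_helper())
```

If this prints False while accelerate >= 0.32 is installed, you will hit the ImportError above; pinning accelerate==0.31.0 sidesteps it without touching huggingface_hub.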
Utkarsh4430 commented 4 months ago

The error you're encountering is due to the split_torch_state_dict_into_shards function not being available in huggingface-hub version 0.20.3. This function is included starting from version 0.23.0.

To resolve this issue, update the huggingface-hub library to version 0.23.0 or later:

blakejrobinson commented 2 months ago

> The error you're encountering is due to the split_torch_state_dict_into_shards function not being available in huggingface-hub version 0.20.3. This function is included starting from version 0.23.0.
>
> To resolve this issue, update the huggingface-hub library to version 0.23.0 or later:

If that's the case, why does following the installation instructions install huggingface-hub 0.17.1?

CarterYancey commented 2 months ago

For me, the issue was NOT the huggingface-hub version. It was the accelerate version. Running pip install accelerate==0.31.0 or adding it to the end of the Dockerfile fixed it for me.
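CarterYancey's Dockerfile approach can be sketched like this (the placement and surrounding lines are assumptions; the repo's actual Dockerfile may differ):

```dockerfile
# After the existing `RUN pip install -r requirements.txt` step,
# pin accelerate so it stops importing the newer huggingface_hub API.
RUN pip install accelerate==0.31.0
```

Putting the pin after the requirements install matters: a later unpinned install of accelerate would otherwise pull the newest version back in.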

https://github.com/run-llama/llama_index/discussions/14605

Darth-Carrotpie commented 6 days ago

> The error you're encountering is due to the split_torch_state_dict_into_shards function not being available in huggingface-hub version 0.20.3. This function is included starting from version 0.23.0.
>
> To resolve this issue, update the huggingface-hub library to version 0.23.0 or later:

If you try this, you will get an error due to the tokenizers dependency:

#21 16.02 ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
#21 ERROR: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1
------
 > [17/18] RUN pip install -r requirements.txt:
16.01     transformers 4.34.1 depends on huggingface-hub<1.0 and >=0.16.4
16.01     diffusers 0.20.2 depends on huggingface-hub>=0.13.2
16.01     gradio-client 0.5.0 depends on huggingface-hub>=0.13.0
16.01     tokenizers 0.14.0 depends on huggingface_hub<0.17 and >=0.16.4
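The ranges in that log are disjoint: tokenizers 0.14.0 caps huggingface_hub below 0.17, while the suggested fix needs at least 0.23, so no single version can satisfy both. A minimal stdlib sketch of the range check pip's resolver is effectively performing (simplified: real version specifiers handle pre-releases and more):

```python
def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. "0.23.0" -> (0, 23, 0)."""
    return tuple(int(part) for part in version.split("."))

def in_range(version: str, lower: str, upper: str) -> bool:
    """True if lower <= version < upper (simplified pip-style range check)."""
    return parse(lower) <= parse(version) < parse(upper)

# tokenizers 0.14.0 requires huggingface_hub >=0.16.4,<0.17
print(in_range("0.23.0", "0.16.4", "0.17.0"))   # the version accelerate needs: rejected
print(in_range("0.16.4", "0.16.4", "0.17.0"))   # accepted by tokenizers, but too old for accelerate
```

This is why pinning accelerate==0.31.0 (which does not need the new helper) is the path of least resistance here, rather than upgrading huggingface_hub.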