I hit this issue when I try to run the demo for LLaMA2-Accessory. My system has one A100 48GB GPU and 40GB of RAM. I have installed every library I was prompted to install so far, including the huggingface_hub module, yet Python still cannot find it. What is probably causing this? Please help.
I have already downloaded the model.
Here is the output of demo.sh, run from the accessory directory:
$ bash ./demos/start.sh
Do you want to download the model? (y/n):
n
Choose the inference scenario:
1) Single-turn Dialogue (Single-modal)
2) Multi-turn Dialogue (Single-modal)
3) Single-turn Dialogue (Multi-modal)
4) Multi-turn Dialogue (Multi-modal)
5) Multi-turn Dialogue (Multi-modal) Box Mode
Please enter the option number: 3
Choose llama_type:
1) llama_qformerv2
2) llama_qformerv2_peft
3) llama_ens
4) llama_ens5
5) llama_ens10
6) llama_ens5p2
7) llama_ens_peft
Please enter the option number: 1
Do you want to enable Quantization? (y/n):
y
Enter the option number for model size (1: 7B, 2: 13B, 3: 34B, 4: 70B):
1
Enter the master port (default is 12345):
12345
Enter the path to params.json:
out/config/7B_params.json
Enter the path to tokenizer.model:
out/config/tokenizer.model
Enter the path to the pretrained model:
../llama2/7B/
Running single_turn_mm.py...
Traceback (most recent call last):
File "demos/single_turn_mm.py", line 5, in <module>
from model.meta import MetaModel
File "/home/harsh/LLaMA2-Accessory/accessory/model/meta.py", line 8, in <module>
from .tokenizer import Tokenizer
File "/home/harsh/LLaMA2-Accessory/accessory/model/tokenizer.py", line 2, in <module>
from transformers import AutoTokenizer
File "/home/harsh/.local/lib/python3.8/site-packages/transformers/__init__.py", line 26, in <module>
from . import dependency_versions_check
File "/home/harsh/.local/lib/python3.8/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
from .utils.versions import require_version, require_version_core
File "/home/harsh/.local/lib/python3.8/site-packages/transformers/utils/__init__.py", line 18, in <module>
from huggingface_hub import get_full_repo_name # for backward compatibility
ModuleNotFoundError: No module named 'huggingface_hub'
[2023-11-18 20:27:13,892] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 483287) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/home/harsh/.local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/harsh/.local/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/harsh/.local/lib/python3.8/site-packages/torch/distributed/run.py", line 806, in main
run(args)
File "/home/harsh/.local/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
elastic_launch(
File "/home/harsh/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/harsh/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
demos/single_turn_mm.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-11-18_20:27:13
host : harsh
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 483287)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
You may try running `import huggingface_hub` directly in a Python interpreter; I suspect it will still raise ModuleNotFoundError. If so, please check whether the huggingface_hub module is installed correctly.
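One way to narrow this down is to check whether huggingface_hub is visible to the exact interpreter that torchrun launches (/usr/bin/python3 according to the log), since pip may have installed the package for a different Python. A minimal diagnostic sketch (the helper name is my own):

```python
import importlib.util
import sys

def module_visible(name: str) -> bool:
    """Return True if `name` can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

# Which interpreter is actually running this check?
print("interpreter:", sys.executable)

if module_visible("huggingface_hub"):
    spec = importlib.util.find_spec("huggingface_hub")
    print("huggingface_hub found at", spec.origin)
else:
    # Install for this exact interpreter rather than whichever `pip`
    # happens to be first on PATH, e.g.:
    #   /usr/bin/python3 -m pip install --user huggingface_hub
    print("huggingface_hub is NOT visible to", sys.executable)
```

If the module is missing for /usr/bin/python3 but present for some other Python on the machine, installing it via `python3 -m pip` with the same binary that torchrun invokes should resolve the ModuleNotFoundError.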