huggingface / optimum-nvidia

Apache License 2.0

Original model configuration (config.json) was not found error during running inference using "Llama-2-7b-chat-hf" #91

Open raorajendra opened 6 months ago

raorajendra commented 6 months ago

I am running the command below inside a docker container:

```
python3 text-generation.py meta-llama/Llama-2-7b-chat-hf /opt/optimum_nvidia
```

I am facing the issue below:

```
Traceback (most recent call last):
  File "/opt/optimum-nvidia/examples/text-generation.py", line 67, in <module>
    model = AutoModelForCausalLM.from_pretrained(args.model, use_fp8=args.fp8)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hub_mixin.py", line 296, in from_pretrained
    instance = cls._from_pretrained(
  File "/opt/optimum-nvidia/src/optimum/nvidia/models/auto.py", line 68, in _from_pretrained
    model = model_clazz.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/hub_mixin.py", line 296, in from_pretrained
    instance = cls._from_pretrained(
  File "/opt/optimum-nvidia/src/optimum/nvidia/hub.py", line 228, in _from_pretrained
    raise ValueError(
ValueError: Original model configuration (config.json) was not found. The model configuration is required to build Tensor
```
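Not part of the original report, but since the error says `config.json` was not found, a quick sanity check is to verify that the local model directory (e.g. `/opt/optimum_nvidia`) actually contains a readable `config.json` before calling `from_pretrained`. This is a minimal sketch; the helper name is hypothetical:

```python
import json
import os

def has_model_config(model_dir: str) -> bool:
    """Return True if model_dir contains a parseable config.json.

    A missing or unreadable config.json is what triggers the
    ValueError raised in optimum/nvidia/hub.py.
    """
    config_path = os.path.join(model_dir, "config.json")
    if not os.path.isfile(config_path):
        return False
    try:
        with open(config_path) as f:
            json.load(f)  # make sure it is valid JSON, not just present
        return True
    except (OSError, json.JSONDecodeError):
        return False
```

If this returns `False` for the directory you pass on the command line, the model files were never fully downloaded or converted there, which would explain the traceback.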