LLaVA-VL / LLaVA-NeXT


Cannot access gated repo #140

FSet89 opened this issue 1 month ago (status: Open)

FSet89 commented 1 month ago

I tried the demo code and got an error:

from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN, IGNORE_INDEX
from llava.conversation import conv_templates, SeparatorStyle

from PIL import Image
import requests
import copy
import torch

pretrained = "lmms-lab/llama3-llava-next-8b"
model_name = "llava_llama3"
device = "cuda"
device_map = "auto"
tokenizer, model, image_processor, max_length = load_pretrained_model(pretrained, None, model_name, device_map=device_map) # Add any other thing you want to pass in llava_model_args

model.eval()
model.tie_weights()

image = Image.open("Images/erba.jpeg")
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [_image.to(dtype=torch.float16, device=device) for _image in image_tensor]

conv_template = "llava_llama_3" # Make sure you use correct chat template for different models
question = DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?"
conv = copy.deepcopy(conv_templates[conv_template])
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
prompt_question = conv.get_prompt()

input_ids = tokenizer_image_token(prompt_question, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(device)
image_sizes = [image.size]

cont = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=image_sizes,
    do_sample=False,
    temperature=0,
    max_new_tokens=256,
)
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)
print(text_outputs)

/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(

Traceback (most recent call last):
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1240, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1347, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1854, in _raise_on_head_call_error
    raise head_call_error
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1751, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 400, in _request_wrapper
    hf_raise_for_status(response)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 321, in hf_raise_for_status
    raise GatedRepoError(message, response) from e
huggingface_hub.utils._errors.GatedRepoError: 401 Client Error. (Request ID: Root=1-66bb57da-67de261c7795c3020e9c5542;fa1d4179-923d-4fba-b10a-450e87d4496c)

Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json. Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must be authenticated to access it.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/test_llava.py", line 4, in <module>
    from llava.conversation import conv_templates, SeparatorStyle
  File "/home/ubuntu/LLaVA-NeXT/llava/conversation.py", line 387, in <module>
    tokenizer=AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct"),
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 801, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 915, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
    resolved_config_file = cached_file(
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 416, in cached_file
    raise EnvironmentError(
OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct. 401 Client Error. (Request ID: Root=1-66bb57da-67de261c7795c3020e9c5542;fa1d4179-923d-4fba-b10a-450e87d4496c)

Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json. Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted. You must be authenticated to access it.

chenhongwu127 commented 3 weeks ago

I'm running into the same problem. How can it be solved?

FSet89 commented 3 weeks ago

I solved it by creating a token on Hugging Face, requesting access to meta-llama/Meta-Llama-3-8B-Instruct, and using this code:

from huggingface_hub import login
print("Login...")
# Huggingface login
TOKEN = 'XXX' # your huggingface token
login(token=TOKEN)
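
A note on ordering, based on the traceback above: llava/conversation.py calls AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct") at module import time, so the login has to happen before any llava import. As an alternative to hard-coding the token in the script, huggingface_hub also reads it from the HF_TOKEN environment variable (or from a prior huggingface-cli login). A minimal sketch of that variant, with the token value as a placeholder:

import os

# Set the token before importing llava/transformers: the gated tokenizer
# download is triggered at import time (see the traceback above), so the
# credential must already be in place by then.
os.environ["HF_TOKEN"] = "XXX"  # your Hugging Face token (placeholder)

# Succeeds once access to the gated repo has been granted.
from llava.conversation import conv_templates, SeparatorStyle
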
doanaktar commented 3 weeks ago

Hi @FSet89, I'm having the same problem too. Did you find any solution?

FSet89 commented 3 weeks ago

> Hi @FSet89, I'm having the same problem too. Did you find any solution?

Hi, yes, check my previous comment.