haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

Failed to run the model [Usage] #395

OrienKastor opened this issue 1 year ago

OrienKastor commented 1 year ago

Describe the issue

Issue: Failed to run the model with an error: AttributeError: 'NoneType' object has no attribute 'is_loaded'

I apologize in advance: I am new to this, so if there is a simple solution, sorry for the silly question.

Command:

--- TO GET IT INSTALLED ---
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA

conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip  # enable PEP 660 support
pip install -e .
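
As a quick sanity check (not part of the original instructions; the import names are just the obvious ones), you can confirm the package and its pinned transformers import cleanly inside the new env before launching anything:

python -c "import llava, transformers; print(transformers.__version__)"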

Error when trying to run:
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

Resolution:
pip install protobuf==3.20.0
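
Alternatively, the error message above offers a no-reinstall workaround: force the pure-Python protobuf parser via an environment variable (slower, per the message itself, but it avoids pinning):

export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python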

--- TO RUN IT ---

In one terminal:
(Launch a controller)

cd /home/USER/github/LLaVa/LLaVA
conda activate llava
python -m llava.serve.controller --host 0.0.0.0 --port 10000

----------------------------------
In another terminal:
(Launch a gradio web server)

cd /home/USER/github/LLaVa/LLaVA
conda activate llava
python -m llava.serve.gradio_web_server --controller http://localhost:10000 --model-list-mode reload

----------------------------------
In a web browser, open the link shown in the previous terminal (http://0.0.0.0:7860)

----------------------------------
In another terminal:
(Launch a model worker)

cd /home/USER/github/LLaVa/LLaVA
conda activate llava
python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-name liuhaotian/LLaVA-Lightning-MPT-7B-preview --load-4bit

Log:

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-name liuhaotian/LLaVA-Lightning-MPT-7B-preview
[2023-08-25 18:47:40,132] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2023-08-25 18:47:40.229722: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-25 18:47:40.316276: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-08-25 18:47:40.335442: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-08-25 18:47:40.702715: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-08-25 18:47:40.702761: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-08-25 18:47:40.702791: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-08-25 18:47:41 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='facebook/opt-350m', model_base=None, model_name='liuhaotian/LLaVA-Lightning-MPT-7B-preview', multi_modal=False, limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=False)
2023-08-25 18:47:41 | INFO | model_worker | Loading the model liuhaotian/LLaVA-Lightning-MPT-7B-preview on worker 3b6227 ...
Downloading (…)okenizer_config.json: 100%|█████████| 685/685 [00:00<00:00, 1.76MB/s]
Downloading (…)lve/main/config.json: 100%|█████████| 644/644 [00:00<00:00, 3.61MB/s]
Downloading (…)olve/main/vocab.json: 100%|███████| 899k/899k [00:00<00:00, 2.33MB/s]
Downloading (…)olve/main/merges.txt: 100%|███████| 456k/456k [00:00<00:00, 3.51MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████| 441/441 [00:00<00:00, 1.23MB/s]
You are using a model of type opt to instantiate a model of type llava_mpt. This is not supported for all configurations of models and can yield errors.
Downloading pytorch_model.bin:   0%|                     | 0.00/663M [00:00<?, ?B/s]
Downloading pytorch_model.bin: 100%|█████████████| 663M/663M [01:26<00:00, 7.66MB/s]
2023-08-25 18:49:10 | INFO | stdout | You are using config.init_device='cpu', but you can also use config.init_device="meta" with Composer + FSDP for fast initialization.
2023-08-25 18:49:12 | WARNING | accelerate.utils.modeling | The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Some weights of LlavaMPTForCausalLM were not initialized from the model checkpoint at facebook/opt-350m and are newly initialized: ['transformer.blocks.17.ffn.up_proj.bias', 'transformer.blocks.6.ffn.up_proj.bias', 'transformer.blocks.1.attn.Wqkv.weight', …, 'transformer.blocks.19.ffn.up_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Downloading (…)neration_config.json: 100%|██████████| 137/137 [00:00<00:00, 371kB/s]
2023-08-25 18:49:13 | ERROR | stderr | Traceback (most recent call last):
2023-08-25 18:49:13 | ERROR | stderr |   File "/home/USER/miniconda3/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-08-25 18:49:13 | ERROR | stderr |     return _run_code(code, main_globals, None,
2023-08-25 18:49:13 | ERROR | stderr |   File "/home/USER/miniconda3/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
2023-08-25 18:49:13 | ERROR | stderr |     exec(code, run_globals)
2023-08-25 18:49:13 | ERROR | stderr |   File "/home/USER/github/LLaVa/LLaVA/llava/serve/model_worker.py", line 273, in <module>
2023-08-25 18:49:13 | ERROR | stderr |     worker = ModelWorker(args.controller_address,
2023-08-25 18:49:13 | ERROR | stderr |   File "/home/USER/github/LLaVa/LLaVA/llava/serve/model_worker.py", line 64, in __init__
2023-08-25 18:49:13 | ERROR | stderr |     self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
2023-08-25 18:49:13 | ERROR | stderr |   File "/home/USER/github/LLaVa/LLaVA/llava/model/builder.py", line 135, in load_pretrained_model
2023-08-25 18:49:13 | ERROR | stderr |     if not vision_tower.is_loaded:
2023-08-25 18:49:13 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'is_loaded'
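
For context, this crash mode is easy to reproduce in isolation. A minimal, hypothetical sketch (FakeLlavaModel is invented for illustration and is not LLaVA's actual builder code): when the loaded checkpoint is a plain language model rather than a LLaVA model, the builder gets no vision tower back, and the is_loaded check at builder.py line 135 then dereferences None:

class FakeLlavaModel:
    def get_vision_tower(self):
        return None  # a plain LM checkpoint (e.g. facebook/opt-350m) carries no vision tower

vision_tower = FakeLlavaModel().get_vision_tower()
if not vision_tower.is_loaded:  # AttributeError: 'NoneType' object has no attribute 'is_loaded'
    vision_tower.load_model()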


haotian-liu commented 1 year ago

Hi @OrienKastor

You should use --model-path instead of --model-name. With only --model-name set, model_path falls back to its default (facebook/opt-350m, as the Namespace line in your log shows), which has no vision tower, hence the 'NoneType' object has no attribute 'is_loaded' crash:

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview --load-4bit

Please let me know if you found --model-name in any docs, or if you find any of the instructions confusing, and I will make the correction. Thanks.
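
To confirm the worker registered after running the corrected command, one option (assuming the controller keeps the FastChat-style POST /list_models endpoint; check llava/serve/controller.py if not) is:

curl -X POST http://localhost:10000/list_models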

nj159 commented 1 year ago

Sorry, I need your help. I ran this code on the third terminal according to your help and encountered the following error. May I ask what the reason is? Thanks very much. The error is as follows:

(llava) root@nj11111:/opt/data/private/LLaVA# python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:7854 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview --load-4bit
[2023-10-06 05:57:02,164] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2023-10-06 05:57:02 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:7854', model_path='liuhaotian/LLaVA-Lightning-MPT-7B-preview', model_base=None, model_name=None, multi_modal=False, limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=True)
2023-10-06 05:57:02 | INFO | model_worker | Loading the model LLaVA-Lightning-MPT-7B-preview on worker 5f003e ...
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9d060>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: d499cfab-5f68-4c7f-a3de-c4d344697b46)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json
2023-10-06 05:57:12 | WARNING | huggingface_hub.utils._http | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9d060>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: d499cfab-5f68-4c7f-a3de-c4d344697b46)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json
2023-10-06 05:57:22 | WARNING | huggingface_hub.utils._http | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
2023-10-06 05:57:22 | ERROR | stderr |     sock = connection.create_connection(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
2023-10-06 05:57:22 | ERROR | stderr |     raise err
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
2023-10-06 05:57:22 | ERROR | stderr |     sock.connect(sa)
2023-10-06 05:57:22 | ERROR | stderr | TimeoutError: timed out
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
2023-10-06 05:57:22 | ERROR | stderr |     response = self._make_request(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
2023-10-06 05:57:22 | ERROR | stderr |     raise new_e
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
2023-10-06 05:57:22 | ERROR | stderr |     self._validate_conn(conn)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
2023-10-06 05:57:22 | ERROR | stderr |     conn.connect()
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 611, in connect
2023-10-06 05:57:22 | ERROR | stderr |     self.sock = sock = self._new_conn()
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 212, in _new_conn
2023-10-06 05:57:22 | ERROR | stderr |     raise ConnectTimeoutError(
2023-10-06 05:57:22 | ERROR | stderr | urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)')
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
2023-10-06 05:57:22 | ERROR | stderr |     resp = conn.urlopen(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
2023-10-06 05:57:22 | ERROR | stderr |     retries = retries.increment(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
2023-10-06 05:57:22 | ERROR | stderr |     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
2023-10-06 05:57:22 | ERROR | stderr | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1230, in hf_hub_download
2023-10-06 05:57:22 | ERROR | stderr |     metadata = get_hf_file_metadata(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
2023-10-06 05:57:22 | ERROR | stderr |     return fn(*args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1597, in get_hf_file_metadata
2023-10-06 05:57:22 | ERROR | stderr |     r = _request_wrapper(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 417, in _request_wrapper
2023-10-06 05:57:22 | ERROR | stderr |     response = _request_wrapper(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 452, in _request_wrapper
2023-10-06 05:57:22 | ERROR | stderr |     return http_backoff(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 274, in http_backoff
2023-10-06 05:57:22 | ERROR | stderr |     raise err
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff
2023-10-06 05:57:22 | ERROR | stderr |     response = session.request(method=method, url=url, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
2023-10-06 05:57:22 | ERROR | stderr |     resp = self.send(prep, **send_kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
2023-10-06 05:57:22 | ERROR | stderr |     r = adapter.send(request, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 63, in send
2023-10-06 05:57:22 | ERROR | stderr |     return super().send(request, *args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
2023-10-06 05:57:22 | ERROR | stderr |     raise ConnectTimeout(e, request=request)
2023-10-06 05:57:22 | ERROR | stderr | requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
2023-10-06 05:57:22 | ERROR | stderr |     resolved_file = hf_hub_download(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
2023-10-06 05:57:22 | ERROR | stderr |     return fn(*args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1347, in hf_hub_download
2023-10-06 05:57:22 | ERROR | stderr |     raise LocalEntryNotFoundError(
2023-10-06 05:57:22 | ERROR | stderr | huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-10-06 05:57:22 | ERROR | stderr | 
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-10-06 05:57:22 | ERROR | stderr |     return _run_code(code, main_globals, None,
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
2023-10-06 05:57:22 | ERROR | stderr |     exec(code, run_globals)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/serve/model_worker.py", line 273, in <module>
2023-10-06 05:57:22 | ERROR | stderr |     worker = ModelWorker(args.controller_address,
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/serve/model_worker.py", line 64, in __init__
2023-10-06 05:57:22 | ERROR | stderr |     self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/model/builder.py", line 99, in load_pretrained_model
2023-10-06 05:57:22 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 667, in from_pretrained
2023-10-06 05:57:22 | ERROR | stderr |     config = AutoConfig.from_pretrained(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 983, in from_pretrained
2023-10-06 05:57:22 | ERROR | stderr |     config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
2023-10-06 05:57:22 | ERROR | stderr |     config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
2023-10-06 05:57:22 | ERROR | stderr |     resolved_config_file = cached_file(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 452, in cached_file
2023-10-06 05:57:22 | ERROR | stderr |     raise EnvironmentError(
2023-10-06 05:57:22 | ERROR | stderr | OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like liuhaotian/LLaVA-Lightning-MPT-7B-preview is not the path to a directory containing a file named config.json.
2023-10-06 05:57:22 | ERROR | stderr | Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

478786359 commented 1 year ago


"I encountered the same issue, did you resolve it?"

nj159 commented 1 year ago

Sorry, I need your help. I ran this command in the third terminal following your instructions and got the error below. May I ask what the reason is? Thanks very much. The error is as follows:

(llava) root@nj11111:/opt/data/private/LLaVA# python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:7854 --port 40000 --worker http://localhost:40000 --model-path liuhaotian/LLaVA-Lightning-MPT-7B-preview --load-4bit
[2023-10-06 05:57:02,164] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2023-10-06 05:57:02 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:7854', model_path='liuhaotian/LLaVA-Lightning-MPT-7B-preview', model_base=None, model_name=None, multi_modal=False, limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=True)
2023-10-06 05:57:02 | INFO | model_worker | Loading the model LLaVA-Lightning-MPT-7B-preview on worker 5f003e ...
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9d060>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: d499cfab-5f68-4c7f-a3de-c4d344697b46)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json
2023-10-06 05:57:12 | WARNING | huggingface_hub.utils._http | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9d060>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: d499cfab-5f68-4c7f-a3de-c4d344697b46)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/tokenizer_config.json
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json
2023-10-06 05:57:22 | WARNING | huggingface_hub.utils._http | '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')' thrown while requesting HEAD https://huggingface.co/liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
2023-10-06 05:57:22 | ERROR | stderr |     sock = connection.create_connection(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
2023-10-06 05:57:22 | ERROR | stderr |     raise err
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
2023-10-06 05:57:22 | ERROR | stderr |     sock.connect(sa)
2023-10-06 05:57:22 | ERROR | stderr | TimeoutError: timed out
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
2023-10-06 05:57:22 | ERROR | stderr |     response = self._make_request(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 491, in _make_request
2023-10-06 05:57:22 | ERROR | stderr |     raise new_e
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
2023-10-06 05:57:22 | ERROR | stderr |     self._validate_conn(conn)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1092, in _validate_conn
2023-10-06 05:57:22 | ERROR | stderr |     conn.connect()
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 611, in connect
2023-10-06 05:57:22 | ERROR | stderr |     self.sock = sock = self._new_conn()
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connection.py", line 212, in _new_conn
2023-10-06 05:57:22 | ERROR | stderr |     raise ConnectTimeoutError(
2023-10-06 05:57:22 | ERROR | stderr | urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)')
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
2023-10-06 05:57:22 | ERROR | stderr |     resp = conn.urlopen(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
2023-10-06 05:57:22 | ERROR | stderr |     retries = retries.increment(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
2023-10-06 05:57:22 | ERROR | stderr |     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
2023-10-06 05:57:22 | ERROR | stderr | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1230, in hf_hub_download
2023-10-06 05:57:22 | ERROR | stderr |     metadata = get_hf_file_metadata(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
2023-10-06 05:57:22 | ERROR | stderr |     return fn(*args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1597, in get_hf_file_metadata
2023-10-06 05:57:22 | ERROR | stderr |     r = _request_wrapper(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 417, in _request_wrapper
2023-10-06 05:57:22 | ERROR | stderr |     response = _request_wrapper(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 452, in _request_wrapper
2023-10-06 05:57:22 | ERROR | stderr |     return http_backoff(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 274, in http_backoff
2023-10-06 05:57:22 | ERROR | stderr |     raise err
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff
2023-10-06 05:57:22 | ERROR | stderr |     response = session.request(method=method, url=url, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
2023-10-06 05:57:22 | ERROR | stderr |     resp = self.send(prep, **send_kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
2023-10-06 05:57:22 | ERROR | stderr |     r = adapter.send(request, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 63, in send
2023-10-06 05:57:22 | ERROR | stderr |     return super().send(request, *args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
2023-10-06 05:57:22 | ERROR | stderr |     raise ConnectTimeout(e, request=request)
2023-10-06 05:57:22 | ERROR | stderr | requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /liuhaotian/LLaVA-Lightning-MPT-7B-preview/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fb25dc9dff0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 91066e16-2eee-41e3-9869-ba879cb12a91)')
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | The above exception was the direct cause of the following exception:
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
2023-10-06 05:57:22 | ERROR | stderr |     resolved_file = hf_hub_download(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
2023-10-06 05:57:22 | ERROR | stderr |     return fn(*args, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1347, in hf_hub_download
2023-10-06 05:57:22 | ERROR | stderr |     raise LocalEntryNotFoundError(
2023-10-06 05:57:22 | ERROR | stderr | huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-10-06 05:57:22 | ERROR | stderr |
2023-10-06 05:57:22 | ERROR | stderr | Traceback (most recent call last):
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-10-06 05:57:22 | ERROR | stderr |     return _run_code(code, main_globals, None,
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
2023-10-06 05:57:22 | ERROR | stderr |     exec(code, run_globals)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/serve/model_worker.py", line 273, in <module>
2023-10-06 05:57:22 | ERROR | stderr |     worker = ModelWorker(args.controller_address,
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/serve/model_worker.py", line 64, in __init__
2023-10-06 05:57:22 | ERROR | stderr |     self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/data/private/LLaVA/llava/model/builder.py", line 99, in load_pretrained_model
2023-10-06 05:57:22 | ERROR | stderr |     tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 667, in from_pretrained
2023-10-06 05:57:22 | ERROR | stderr |     config = AutoConfig.from_pretrained(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 983, in from_pretrained
2023-10-06 05:57:22 | ERROR | stderr |     config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
2023-10-06 05:57:22 | ERROR | stderr |     config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
2023-10-06 05:57:22 | ERROR | stderr |     resolved_config_file = cached_file(
2023-10-06 05:57:22 | ERROR | stderr |   File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/utils/hub.py", line 452, in cached_file
2023-10-06 05:57:22 | ERROR | stderr |     raise EnvironmentError(
2023-10-06 05:57:22 | ERROR | stderr | OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like liuhaotian/LLaVA-Lightning-MPT-7B-preview is not the path to a directory containing a file named config.json.
2023-10-06 05:57:22 | ERROR | stderr | Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

"I encountered the same issue, did you resolve it?"

Yes, I've solved it. If you can't connect to Hugging Face, download both the model to be loaded and the CLIP-336 vision tower (openai/clip-vit-large-patch14-336) to your local machine, then upload them to the corresponding directory on the server.
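To make that concrete, here is a minimal sketch of the workaround, assuming huggingface_hub is installed on a machine that can reach huggingface.co; the local paths below are placeholders to adapt:

from huggingface_hub import snapshot_download

# Download the LLaVA checkpoint and the CLIP vision tower to local folders
# (run this where huggingface.co is reachable, then copy the folders to the server).
snapshot_download(repo_id="liuhaotian/llava-v1.5-7b",
                  local_dir="/opt/checkpoints/llava-v1.5-7b")  # placeholder path
snapshot_download(repo_id="openai/clip-vit-large-patch14-336",
                  local_dir="/opt/checkpoints/clip-vit-large-patch14-336")  # placeholder path

Then launch the worker with --model-path pointing at the local LLaVA folder (keeping "llava" in the directory name, since the builder appears to infer the model type from the name), and point the "mm_vision_tower" entry in the checkpoint's config.json at the local CLIP folder. Setting HF_HUB_OFFLINE=1 should also stop the library from attempting HEAD requests to huggingface.co.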

shiyishiaa commented 10 months ago

Same problem. :( Cannot run the worker or CLI.

2023-11-21 15:00:54 | ERROR | stderr | Traceback (most recent call last):
2023-11-21 15:00:54 | ERROR | stderr |   File "/home/**/miniconda3/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-11-21 15:00:54 | ERROR | stderr |     return _run_code(code, main_globals, None,
2023-11-21 15:00:54 | ERROR | stderr |   File "/home/**/miniconda3/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
2023-11-21 15:00:54 | ERROR | stderr |     exec(code, run_globals)
2023-11-21 15:00:54 | ERROR | stderr |   File "/mnt/sdb/home/**/vlm_test/LLaVA/llava/serve/model_worker.py", line 275, in <module>
2023-11-21 15:00:54 | ERROR | stderr |     worker = ModelWorker(args.controller_address,
2023-11-21 15:00:54 | ERROR | stderr |   File "/mnt/sdb/home/**/vlm_test/LLaVA/llava/serve/model_worker.py", line 65, in __init__
2023-11-21 15:00:54 | ERROR | stderr |     self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
2023-11-21 15:00:54 | ERROR | stderr |   File "/mnt/sdb/home/**/vlm_test/LLaVA/llava/model/builder.py", line 161, in load_pretrained_model
2023-11-21 15:00:54 | ERROR | stderr |     if not vision_tower.is_loaded:
2023-11-21 15:00:54 | ERROR | stderr | AttributeError: 'NoneType' object has no attribute 'is_loaded'

And this is the command:

python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path "/home/**/.cache/huggingface/hub/models--liuhaotian--llava-v1.5-7b"

Owing to a network issue, I cannot load the model online, so I run it offline with a checkpoint downloaded from Hugging Face. I really need your help. :( @haotian-liu

I masked my username with ** for privacy; it is not a mistake.
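One thing worth checking here (a guess from the path shown, not something the traceback confirms): models--liuhaotian--llava-v1.5-7b is the root of a Hugging Face cache entry, where the actual files live under snapshots/<commit-hash>/, and a config.json without an "mm_vision_tower" entry would leave vision_tower as None at builder.py line 161. A small sketch to verify both, with the path below as a placeholder:

import json
import os

# Placeholder: the directory you pass to --model-path.
model_path = "/home/user/.cache/huggingface/hub/models--liuhaotian--llava-v1.5-7b"

cfg = os.path.join(model_path, "config.json")
if not os.path.isfile(cfg):
    # A cache root has no config.json at its top level; the files sit in snapshots/<commit-hash>/.
    print("No config.json here; try pointing --model-path at snapshots/<commit-hash>/ inside this folder.")
else:
    with open(cfg) as f:
        config = json.load(f)
    # If this prints None, the config declares no vision tower, which matches
    # the AttributeError: 'NoneType' object has no attribute 'is_loaded'.
    print("mm_vision_tower:", config.get("mm_vision_tower"))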

zjyellow commented 5 months ago

"Same problem. :( Cannot run the worker or CLI. [...] I really need your help. :( @haotian-liu"

Same problem with liuhaotian/llava-v1.5-13b today; have you solved it yet? When I switched to llava-v1.5-7b, it downloaded "mm_vision_tower": "openai/clip-vit-large-patch14-336" as specified in the config.json of liuhaotian/llava-v1.5-7b.

But the liuhaotian/llava-v1.5-13b config.json I have contains no entry for the vision tower, so I think I should switch to an older version...
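For comparison, the llava-v1.5-7b config.json declares the tower with this single entry, so one option to try (untested here, and it assumes your 13b config is simply missing the key rather than using a different layout) is to check for and add the same line to the 13b config, pointing it at a locally downloaded CLIP folder if you are offline:

"mm_vision_tower": "openai/clip-vit-large-patch14-336",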

Wuhan-Zhang commented 2 months ago

"Same problem. :( Cannot run the worker or CLI. [...] I really need your help. :( @haotian-liu"

I have experienced the same problem.

hoangducnhatminh commented 2 months ago

Has anyone solved this problem yet?