microsoft / LLaVA-Med

Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.

[Usage] Network error when inferencing on Google Colab A100 #17

Open HaotianHuang opened 7 months ago

HaotianHuang commented 7 months ago

Hi there, I'm trying to run inference through the Gradio client but keep hitting an error reported as "NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE." when using the sample image and text input (see below).
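
The same request can also be reproduced without the web UI by POSTing to the worker's `/worker_generate_stream` endpoint (the one that appears in the worker log below). This is only a rough sketch: the payload field names and the null-byte stream delimiter are my assumptions about how the LLaVA serving code behaves, and `sample.jpg` is a placeholder for the sample image.

```python
# Rough reproduction against the model worker, bypassing Gradio.
# Endpoint name is taken from the worker log; payload fields are assumed.
import base64
import json

import requests

worker_addr = "http://localhost:40000"

with open("sample.jpg", "rb") as f:  # placeholder for the sample image
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "LLaVA-Med-7B",
    "prompt": "What is shown in this image?\n<image>",  # image-token handling is assumed
    "images": [image_b64],
    "temperature": 0.7,
    "max_new_tokens": 256,
    "stop": "###",
}

response = requests.post(
    f"{worker_addr}/worker_generate_stream", json=payload, stream=True, timeout=60
)
# The worker streams JSON chunks; a null-byte separator is assumed here.
for chunk in response.iter_lines(decode_unicode=False, delimiter=b"\0"):
    if chunk:
        print(json.loads(chunk.decode()))
```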

Primary traceback error seems to be the `RuntimeError: CUDA error: device-side assert triggered` raised from `torch.embedding` inside `F.embedding` (see the full worker log below).

What I've tried

What I believe the problem is


Worker log showing the failure

/colab/LLaVA-Med# python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./checkpoints/LLaVA-Med-7B --multi-modal
2023-11-15 04:59:16.962909: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-15 04:59:16.962975: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-15 04:59:16.963019: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-15 04:59:18.225892: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-11-15 04:59:21 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='./checkpoints/LLaVA-Med-7B', model_name=None, multi_modal=True, keep_aspect_ratio=False, num_gpus=1, limit_model_concurrency=5, stream_interval=2, no_register=False)
2023-11-15 04:59:21 | WARNING | model_worker | Multimodal mode is automatically detected with model name, please make sure `llava` is included in the model path.
2023-11-15 04:59:21 | INFO | model_worker | Loading the model LLaVA-Med-7B on worker 96dcdd ...
You are using a model of type llama to instantiate a model of type llava. This is not supported for all configurations of models and can yield errors.
(…)t-large-patch14/resolve/main/config.json:   0%|                                            | 0.00/4.52k [00:00<?, ?B/s]
(…)t-large-patch14/resolve/main/config.json: 100%|███████████████████████████████████| 4.52k/4.52k [00:00<00:00, 15.2MB/s]
2023-11-15 04:59:25 | ERROR | stderr | 
model.safetensors:   0%|                                                                      | 0.00/1.71G [00:00<?, ?B/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████████| 1.71G/1.71G [00:05<00:00, 320MB/s]
2023-11-15 04:59:31 | ERROR | stderr | 
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.3.self_attn.v_proj.bias', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.out_proj.bias', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.out_proj.bias', 'text_model.encoder.layers.9.mlp.fc2.bias', 'text_model.encoder.layers.7.mlp.fc1.bias', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.v_proj.bias', 'text_model.encoder.layers.0.self_attn.k_proj.bias', 'text_model.encoder.layers.1.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'visual_projection.weight', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'text_model.encoder.layers.2.mlp.fc2.bias', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.2.self_attn.k_proj.bias', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.9.self_attn.out_proj.bias', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.final_layer_norm.bias', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.7.self_attn.k_proj.bias', 'text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.9.self_attn.v_proj.bias', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.5.self_attn.k_proj.bias', 'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.6.self_attn.k_proj.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.4.mlp.fc2.bias', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.encoder.layers.3.self_attn.q_proj.bias', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.1.self_attn.k_proj.weight', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.6.mlp.fc1.bias', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.6.mlp.fc2.bias', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.8.mlp.fc1.bias', 'logit_scale', 
'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.8.self_attn.k_proj.bias', 'text_model.encoder.layers.5.mlp.fc2.bias', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.encoder.layers.2.self_attn.q_proj.bias', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.9.self_attn.q_proj.bias', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.q_proj.bias', 'text_model.embeddings.token_embedding.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.1.self_attn.k_proj.bias', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.3.mlp.fc2.bias', 'text_model.encoder.layers.1.mlp.fc1.bias', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.8.mlp.fc2.bias', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc1.bias', 'text_model.final_layer_norm.weight', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.k_proj.bias', 'text_model.encoder.layers.9.self_attn.k_proj.bias', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_projection.weight', 'text_model.encoder.layers.7.self_attn.out_proj.bias', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.out_proj.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.0.self_attn.q_proj.bias', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.4.self_attn.out_proj.bias', 'text_model.encoder.layers.1.self_attn.q_proj.bias', 'text_model.encoder.layers.9.mlp.fc1.bias', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.5.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.8.self_attn.out_proj.bias', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.embeddings.position_ids', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 
'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.0.self_attn.out_proj.bias', 'text_model.encoder.layers.4.mlp.fc1.bias', 'text_model.encoder.layers.1.mlp.fc2.bias', 'text_model.encoder.layers.7.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.4.self_attn.q_proj.bias', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.0.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.2.self_attn.v_proj.bias', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.7.mlp.fc2.bias', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.8.self_attn.v_proj.bias', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.bias', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.5.self_attn.q_proj.bias', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.2.self_attn.out_proj.bias', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.4.self_attn.k_proj.bias', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'text_model.encoder.layers.0.mlp.fc1.bias', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.7.self_attn.q_proj.bias', 'text_model.encoder.layers.6.self_attn.v_proj.bias', 'text_model.encoder.layers.3.mlp.fc1.bias', 'text_model.encoder.layers.2.mlp.fc1.bias', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.encoder.layers.7.self_attn.v_proj.weight']
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards:   0%|                                                                    | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|█████████████████████████████▌                             | 1/2 [02:58<02:58, 178.49s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████| 2/2 [04:00<00:00, 109.80s/it]
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████| 2/2 [04:00<00:00, 120.10s/it]
2023-11-15 05:03:31 | ERROR | stderr | 
(…)14/resolve/main/preprocessor_config.json:   0%|                                              | 0.00/316 [00:00<?, ?B/s]
(…)14/resolve/main/preprocessor_config.json: 100%|███████████████████████████████████████| 316/316 [00:00<00:00, 1.53MB/s]
2023-11-15 05:03:33 | ERROR | stderr | 
2023-11-15 05:03:42 | INFO | model_worker | Register to controller
2023-11-15 05:03:42 | ERROR | stderr | INFO:     Started server process [9190]
2023-11-15 05:03:42 | ERROR | stderr | INFO:     Waiting for application startup.
2023-11-15 05:03:42 | ERROR | stderr | INFO:     Application startup complete.
2023-11-15 05:03:42 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:40000 (Press CTRL+C to quit)
2023-11-15 05:03:57 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: None. global_counter: 0
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:46588 - "POST /worker_get_status HTTP/1.1" 200 OK
2023-11-15 05:04:03 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 1
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:46604 - "POST /worker_generate_stream HTTP/1.1" 200 OK
2023-11-15 05:04:06 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2023-11-15 05:04:12 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2023-11-15 05:04:22 | INFO | stdout | INFO:     127.0.0.1:35084 - "POST /worker_get_status HTTP/1.1" 200 OK
2023-11-15 05:04:27 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2023-11-15 05:04:42 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=5, locked=False). global_counter: 1
2023-11-15 05:04:56 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 2
2023-11-15 05:04:56 | INFO | stdout | INFO:     127.0.0.1:44014 - "POST /worker_generate_stream HTTP/1.1" 200 OK
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [54,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same assertion repeats for threads [1,0,0]–[31,0,0] of block [54,0,0], and for threads [64,0,0]–[95,0,0] and [0,0,0]–[31,0,0] of block [176,0,0] ...]
2023-11-15 05:04:56 | ERROR | stderr | ERROR:    Exception in ASGI application
2023-11-15 05:04:56 | ERROR | stderr | Traceback (most recent call last):
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
2023-11-15 05:04:56 | ERROR | stderr |     result = await app(  # type: ignore[func-returns-value]
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     return await self.app(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1106, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await super().__call__(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 122, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await self.middleware_stack(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 184, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     raise exc
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 162, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await self.app(scope, receive, _send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 79, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     raise exc
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 68, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await self.app(scope, receive, sender)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     raise e
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await self.app(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 718, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     await route.handle(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 276, in handle
2023-11-15 05:04:56 | ERROR | stderr |     await self.app(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 69, in app
2023-11-15 05:04:56 | ERROR | stderr |     await response(scope, receive, send)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 270, in __call__
2023-11-15 05:04:56 | ERROR | stderr |     async with anyio.create_task_group() as task_group:
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
2023-11-15 05:04:56 | ERROR | stderr |     raise exceptions[0]
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 273, in wrap
2023-11-15 05:04:56 | ERROR | stderr |     await func()
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/responses.py", line 262, in stream_response
2023-11-15 05:04:56 | ERROR | stderr |     async for chunk in self.body_iterator:
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 63, in iterate_in_threadpool
2023-11-15 05:04:56 | ERROR | stderr |     yield await anyio.to_thread.run_sync(_next, iterator)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
2023-11-15 05:04:56 | ERROR | stderr |     return await get_asynclib().run_sync_in_worker_thread(
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
2023-11-15 05:04:56 | ERROR | stderr |     return await future
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
2023-11-15 05:04:56 | ERROR | stderr |     result = context.run(func, *args)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 53, in _next
2023-11-15 05:04:56 | ERROR | stderr |     return next(iterator)
2023-11-15 05:04:56 | ERROR | stderr |   File "/content/drive/MyDrive/colab/LLaVA-Med/llava/serve/model_worker.py", line 296, in generate_stream_gate
2023-11-15 05:04:56 | ERROR | stderr |     for x in self.generate_stream(params):
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 35, in generator_context
2023-11-15 05:04:56 | ERROR | stderr |     response = gen.send(None)
2023-11-15 05:04:56 | ERROR | stderr |   File "/content/drive/MyDrive/colab/LLaVA-Med/llava/serve/model_worker.py", line 243, in generate_stream
2023-11-15 05:04:56 | ERROR | stderr |     out = model(
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-11-15 05:04:56 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-11-15 05:04:56 | ERROR | stderr |   File "/content/drive/MyDrive/colab/LLaVA-Med/llava/model/llava.py", line 306, in forward
2023-11-15 05:04:56 | ERROR | stderr |     outputs = self.model(
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-11-15 05:04:56 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-11-15 05:04:56 | ERROR | stderr |   File "/content/drive/MyDrive/colab/LLaVA-Med/llava/model/llava.py", line 198, in forward
2023-11-15 05:04:56 | ERROR | stderr |     inputs_embeds = self.embed_tokens(input_ids)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
2023-11-15 05:04:56 | ERROR | stderr |     return forward_call(*args, **kwargs)
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py", line 162, in forward
2023-11-15 05:04:56 | ERROR | stderr |     return F.embedding(
2023-11-15 05:04:56 | ERROR | stderr |   File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 2210, in embedding
2023-11-15 05:04:56 | ERROR | stderr |     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2023-11-15 05:04:56 | ERROR | stderr | RuntimeError: CUDA error: device-side assert triggered
2023-11-15 05:04:56 | ERROR | stderr | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2023-11-15 05:04:56 | ERROR | stderr | 
../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [54,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same assertion repeats for threads [97,0,0]–[127,0,0] of block [54,0,0] ...]
2023-11-15 05:04:57 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 2
2023-11-15 05:05:12 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 2
2023-11-15 05:05:27 | INFO | model_worker | Send heart beat. Models: ['LLaVA-Med-7B']. Semaphore: Semaphore(value=4, locked=False). global_counter: 2
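
For context on the primary error: the `srcIndex < srcSelectDimSize` assert fires when an id passed to nn.Embedding is >= the size of the embedding table, which in LLaVA-style models usually means the tokenizer (with the extra image tokens) and the checkpoint's embedding matrix disagree. Relaunching the worker with CUDA_LAUNCH_BLOCKING=1 would also make the traceback point at the op that actually failed. Below is a minimal sketch of the vocab check, assuming the same checkpoint path as the worker command above and upstream LLaVA's image-token names (both are assumptions, not verified against this fork):

from transformers import AutoTokenizer

ckpt = "./checkpoints/LLaVA-Med-7B"  # same path passed to model_worker above
tokenizer = AutoTokenizer.from_pretrained(ckpt)

print("tokenizer vocab size:", len(tokenizer))

# Token names assumed from upstream LLaVA; adjust if this fork uses others.
for tok in ["<image>", "<im_patch>", "<im_start>", "<im_end>"]:
    print(tok, "->", tokenizer.convert_tokens_to_ids(tok))

# After loading the model the same way model_worker does, compare
#   model.get_input_embeddings().num_embeddings
# with len(tokenizer); if the tokenizer is larger, any prompt containing
# the image tokens reproduces exactly this device-side assert.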

Test message seems to work

/colab# python -m llava.serve.test_message --model-name LLaVA-Med-7B --controller http://localhost:10000
2023-11-15 05:03:59.106879: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-15 05:03:59.106938: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-15 05:03:59.106979: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-15 05:04:00.409334: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Models: ['LLaVA-Med-7B']
worker_addr: http://localhost:40000
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
Human: What are the key differences between renewable and non-renewable energy sources?
Assistant: Renewable energy sources are those that can be replenished naturally in a relatively short amount of time, such as solar, wind, hydro, geothermal, and biomass. Non-renewable energy sources, on the other hand, are finite and will eventually be depleted, such as coal, oil, and natural gas. Here are some key differences between renewable and non-renewable energy sources:
1. Availability: Renewable energy sources are virtually inexhaustible, while non-renewable energy sources are finite and will eventually run out.
2. Environmental impact: Renewable energy sources have a much lower environmental impact than non-renewable sources, which can lead to air and water pollution, greenhouse gas emissions, and other negative effects.
3. Cost: Renewable energy sources can be more expensive to initially set up, but they typically have lower operational costs than non-renewable sources.
4. Reliability: Renewable energy sources are often more reliable and can be used in more remote locations than non-renewable sources.
5. Flexibility: Renewable energy sources are often more flexible and can be adapted to different situations and needs, while non-renewable sources are more rigid and inflexible.
6. Sustainability: Renewable energy sources are more sustainable over the long term, while non-renewable sources are not, and their depletion can lead to economic and social instability.

Human: Hello do you understand images?.
 Assistant: Yes, I understand the concept of images.##

Gradio web server seems okay

/colab# python -m llava.serve.gradio_web_server --controller http://localhost:10000
2023-11-15 05:04:13.758783: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-15 05:04:13.758844: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-15 05:04:13.758893: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-15 05:04:15.026392: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-11-15 05:04:21 | INFO | gradio_web_server | args: Namespace(host='0.0.0.0', port=None, controller_url='http://localhost:10000', concurrency_count=8, model_list_mode='once', share=True, moderate=False, embed=False)
2023-11-15 05:04:22 | INFO | gradio_web_server | Models: ['LLaVA-Med-7B']
2023-11-15 05:04:22 | INFO | gradio_web_server | Namespace(host='0.0.0.0', port=None, controller_url='http://localhost:10000', concurrency_count=8, model_list_mode='once', share=True, moderate=False, embed=False)
2023-11-15 05:04:28 | INFO | stdout | Running on local URL:  http://0.0.0.0:7860
2023-11-15 05:04:40 | INFO | stdout | Running on public URL: https://4694634dd5994a0111.gradio.live
2023-11-15 05:04:40 | INFO | stdout | 
2023-11-15 05:04:40 | INFO | stdout | This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
2023-11-15 05:04:49 | INFO | gradio_web_server | load_demo. ip: 172.31.59.51. params: {}
2023-11-15 05:04:49 | INFO | httpx | HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-15 05:04:49 | INFO | httpx | HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
2023-11-15 05:04:54 | INFO | gradio_web_server | add_text. ip: 172.31.59.51. len: 25
2023-11-15 05:04:54 | INFO | httpx | HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-15 05:04:54 | INFO | httpx | HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
2023-11-15 05:04:56 | INFO | gradio_web_server | http_bot. ip: 172.31.59.51
2023-11-15 05:04:56 | INFO | gradio_web_server | model_name: LLaVA-Med-7B, worker_addr: http://localhost:40000
2023-11-15 05:04:56 | INFO | gradio_web_server | ==== request ====
{'model': 'LLaVA-Med-7B', 'prompt': 'You are LLaVA-Med, a large language and vision assistant trained by a group of researchers at Microsoft, based on the general domain LLaVA architecture.You are able to understand the visual content that the user provides, and assist the user with a variety of medical and clinical tasks using natural language.Follow the instructions carefully and explain your answers in detail.###Human: Hi!###Assistant: Hi there!  How can I help you today?\n###Human: What is this image about?\n<image>###Assistant:', 'temperature': 0.2, 'max_new_tokens': 512, 'stop': '###', 'images': "List of 1 images: ['c98a0941c1b4e20feb7ac190b92615ba']"}
2023-11-15 05:04:56 | INFO | httpx | HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-15 05:04:56 | INFO | httpx | HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-15 05:04:56 | INFO | httpx | HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-15 05:04:56 | INFO | httpx | HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
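
To take Gradio out of the loop, the same request could be posted straight to the worker. This is only a rough sketch: it assumes this fork keeps upstream LLaVA's /worker_generate_stream route and base64-encoded "images" field (check llava/serve/model_worker.py for the actual names), and "sample.jpg" stands in for any local test image.

import base64
import json
import requests

worker_addr = "http://localhost:40000"

# The worker expects images as base64-encoded strings in the payload.
with open("sample.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "LLaVA-Med-7B",
    "prompt": "###Human: What is this image about?\n<image>###Assistant:",
    "temperature": 0.2,
    "max_new_tokens": 128,
    "stop": "###",
    "images": [img_b64],
}

# Upstream LLaVA streams null-byte-delimited JSON chunks from this endpoint.
resp = requests.post(worker_addr + "/worker_generate_stream", json=payload, stream=True)
for chunk in resp.iter_lines(delimiter=b"\0"):
    if chunk:
        print(json.loads(chunk.decode("utf-8")))

If the worker hits the same device-side assert here, the "network error" banner in the web UI is just reporting a crashed worker rather than an actual networking problem.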

Controller seems alright

/colab/LLaVA-Med# python -m llava.serve.controller --host 0.0.0.0 --port 10000
2023-11-15 04:59:00.724462: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-15 04:59:00.724523: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-15 04:59:00.724565: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-11-15 04:59:01.981538: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-11-15 04:59:08 | INFO | controller | args: Namespace(host='0.0.0.0', port=10000, dispatch_method='shortest_queue')
2023-11-15 04:59:09 | INFO | controller | Init controller
2023-11-15 04:59:09 | ERROR | stderr | INFO:     Started server process [8938]
2023-11-15 04:59:09 | ERROR | stderr | INFO:     Waiting for application startup.
2023-11-15 04:59:09 | ERROR | stderr | INFO:     Application startup complete.
2023-11-15 04:59:09 | ERROR | stderr | INFO:     Uvicorn running on http://0.0.0.0:10000 (Press CTRL+C to quit)
2023-11-15 05:03:42 | INFO | controller | Register a new worker: http://localhost:40000
2023-11-15 05:03:42 | INFO | controller | Register done: http://localhost:40000, {'model_names': ['LLaVA-Med-7B'], 'speed': 1, 'queue_length': 0}
2023-11-15 05:03:42 | INFO | stdout | INFO:     127.0.0.1:60810 - "POST /register_worker HTTP/1.1" 200 OK
2023-11-15 05:03:57 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:03:57 | INFO | stdout | INFO:     127.0.0.1:34756 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:03 | INFO | controller | Register a new worker: http://localhost:40000
2023-11-15 05:04:03 | INFO | controller | Register done: http://localhost:40000, {'model_names': ['LLaVA-Med-7B'], 'speed': 1, 'queue_length': 0}
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:34760 - "POST /refresh_all_workers HTTP/1.1" 200 OK
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:34770 - "POST /list_models HTTP/1.1" 200 OK
2023-11-15 05:04:03 | INFO | controller | names: ['http://localhost:40000'], queue_lens: [0.0], ret: http://localhost:40000
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:34772 - "POST /get_worker_address HTTP/1.1" 200 OK
2023-11-15 05:04:03 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:03 | INFO | stdout | INFO:     127.0.0.1:34776 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:06 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:06 | INFO | stdout | INFO:     127.0.0.1:60772 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:12 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:12 | INFO | stdout | INFO:     127.0.0.1:60774 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:22 | INFO | controller | Register a new worker: http://localhost:40000
2023-11-15 05:04:22 | INFO | controller | Register done: http://localhost:40000, {'model_names': ['LLaVA-Med-7B'], 'speed': 1, 'queue_length': 0}
2023-11-15 05:04:22 | INFO | stdout | INFO:     127.0.0.1:34040 - "POST /refresh_all_workers HTTP/1.1" 200 OK
2023-11-15 05:04:22 | INFO | stdout | INFO:     127.0.0.1:34054 - "POST /list_models HTTP/1.1" 200 OK
2023-11-15 05:04:27 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:27 | INFO | stdout | INFO:     127.0.0.1:40518 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:42 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:42 | INFO | stdout | INFO:     127.0.0.1:55936 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:56 | INFO | controller | names: ['http://localhost:40000'], queue_lens: [0.0], ret: http://localhost:40000
2023-11-15 05:04:56 | INFO | stdout | INFO:     127.0.0.1:37670 - "POST /get_worker_address HTTP/1.1" 200 OK
2023-11-15 05:04:56 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:56 | INFO | stdout | INFO:     127.0.0.1:37678 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-11-15 05:04:57 | INFO | controller | Receive heart beat. http://localhost:40000
2023-11-15 05:04:57 | INFO | stdout | INFO:     127.0.0.1:37680 - "POST /receive_heart_beat HTTP/1.1" 200 OK
(heart beat / "POST /receive_heart_beat HTTP/1.1" 200 OK pairs continue every 15 seconds through 05:07:12)

Some potential package issues during setup, but nothing major as far as I can tell

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchdata 0.7.0 requires torch==2.1.0, but you have torch 2.0.0+cu117 which is incompatible.
torchtext 0.16.0 requires torch==2.1.0, but you have torch 2.0.0+cu117 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
llmx 0.0.15a0 requires cohere, which is not installed.
llmx 0.0.15a0 requires tiktoken, which is not installed.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lida 0.0.10 requires kaleido, which is not installed.
tensorflow-probability 0.22.0 requires typing-extensions<4.6.0, but you have typing-extensions 4.8.0 which is incompatible.
torchdata 0.7.0 requires torch==2.1.0, but you have torch 2.0.0+cu117 which is incompatible.
torchtext 0.16.0 requires torch==2.1.0, but you have torch 2.0.0+cu117 which is incompatible.
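
For completeness, a quick runtime check that the torch that actually imports is the 2.0.0+cu117 build the repo was installed against (the torchdata/torchtext conflicts above only matter if they drag in a different torch):

import torch

print("torch:", torch.__version__)
print("CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))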

Specifications

(screenshot of the Colab A100 runtime specifications)
atultiwari commented 7 months ago

Hi, did you find a solution to this problem? I also want to test this repo, but I am stuck at the very first steps. I have posted my issue here; can you please let me know how you were able to run it this far? If you can share your Colab notebook, that would be a great help. Thank you. Regards, Dr. Atul

atultiwari commented 7 months ago

Hi, I subscribed to Colab Pro+ just in the hope of being able to replicate your setup. I created the models as per the instructions and your suggestions, but I couldn't even get the output you are getting; instead I am getting a ConnectionRefusedError. I have posted the issue here.

My Colab notebook link, for reference.

Can you please share the link to your notebook? That would be a great help in finding out where I am making a mistake. Thank you.

yihp commented 5 months ago

@HaotianHuang Hello, I encountered the same problem. Have you solved it?