kvablack / LLaVA-server

gunicorn call automatically shuts down server hosting #1

Closed bhattg closed 11 months ago

bhattg commented 1 year ago

Hi,

I am trying to host the server with the mentioned command, `CUDA_VISIBLE_DEVICES=0,1 gunicorn "app:create_app()"`. However, soon after initialization the server shuts down automatically. The trace is attached below:

[2023-07-12 21:25:19 +0000] [1061000] [INFO] Starting gunicorn 20.1.0                                                                                                                                                                 
[2023-07-12 21:25:19 +0000] [1061000] [INFO] Listening at: http://127.0.0.1:8085 (1061000)                                                                                                                                            
[2023-07-12 21:25:19 +0000] [1061000] [INFO] Using worker: sync                                                                                                                                                                       
[2023-07-12 21:25:19 +0000] [1061001] [INFO] Booting worker with pid: 1061001                                                                                                                                                         
[2023-07-12 21:25:20 +0000] [1061065] [INFO] Booting worker with pid: 1061065      
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.0.self_attn.v_proj.weight', ..., 'text_model.embeddings.position_ids', 'text_model.embeddings.token_embedding.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.final_layer_norm.weight', 'text_model.final_layer_norm.bias', 'text_projection.weight', 'visual_projection.weight', 'logit_scale']
- This IS expected if you are initializing CLIPVisionModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPVisionModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:06<00:00,  3.14s/it]
[2023-07-12 21:25:50 +0000] [1061000] [INFO] Shutting down: Master
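
Worth noting: the master exits about 30 seconds after the workers boot, which matches gunicorn's default sync-worker timeout of 30 s, and loading the CLIP and LLaVA checkpoints inside each worker can take longer than that. A minimal gunicorn.conf.py sketch to test that hypothesis (the file and its values are debugging assumptions, not taken from this repo):

```python
# gunicorn.conf.py -- debugging sketch (assumed values, not from this repo).
# gunicorn 20.x picks this file up automatically from the working directory,
# so the launch command is unchanged:
#   CUDA_VISIBLE_DEVICES=0,1 gunicorn "app:create_app()"

# The default sync-worker timeout is 30 s; a worker that is still loading
# model checkpoints when it expires is killed by the master. Raise it
# generously while debugging.
timeout = 600

# Make the master log why workers are being killed or why it shuts down.
loglevel = "debug"
capture_output = True
```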

Am I missing an argument in the run command? In app.py I can see that create_app() returns a Flask object, but the gunicorn call never invokes run() on it (which app.py does when executed directly).
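
For reference, gunicorn's "module:callable()" syntax calls the factory itself and then serves the Flask object it returns, so no run() call is needed; app.run() only matters when the module is executed directly. A minimal sketch of that pattern (hypothetical names, not necessarily this repo's actual app.py):

```python
# app.py -- minimal Flask app-factory sketch (illustrative, not the repo's real code)
from flask import Flask, jsonify

def create_app():
    # gunicorn "app:create_app()" calls this factory and serves the
    # returned Flask object itself; no app.run() is required.
    app = Flask(__name__)

    @app.route("/ping")
    def ping():
        # trivial health-check endpoint (hypothetical)
        return jsonify(ok=True)

    return app

if __name__ == "__main__":
    # Only used when running `python app.py` directly;
    # under gunicorn this block is never executed.
    create_app().run(host="127.0.0.1", port=8085)
```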

kvablack commented 1 year ago

Hmm, that is mysterious. Unfortunately, I am not able to reproduce this. A couple of ideas: