DLYuanGod / TinyGPT-V

TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones

Connection refused and cannot create share link #18

Status: Open · beautyhansonboy opened 9 months ago

beautyhansonboy commented 9 months ago

(tinygptv) zhaoyichen@ubuntu-Precision-7920-Tower:~/TinyGPT-V$ python demo_v2.py --cfg-path eval_configs/tinygptv_stage4_eval.yaml --gpu-id 0

=================================== BUG REPORT ===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Initializing Chat
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Some weights of PhiForCausalLM were not initialized from the model checkpoint at /home/data/zhaoyichen/tinygpt/tinygpt-phi2/ and are newly initialized: ['model.layers.20.self_attn.q_layernorm.weight', 'model.layers.20.self_attn.k_layernorm.weight', 'model.layers.25.self_attn.q_layernorm.bias', ... full list elided: the self_attn.q_layernorm and self_attn.k_layernorm weights and biases, plus the post_layernorm weights, for model.layers.0 through model.layers.31]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
trainable params: 31457280 || all params: 2811233280 || trainable%: 1.1189850455953623
Position interpolate from 16x16 to 32x32
Loading Q-Former
Loading Q-Former Done
Load Minigpt-4-LLM Checkpoint: /home/data/zhaoyichen/tinygpt/TinyGPT-V_for_Stage4/TinyGPT-V_for_Stage4.pth
/home/zhaoyichen/TinyGPT-V/demo_v2.py:558: GradioDeprecationWarning: 'scale' value should be an integer. Using 0.5 will cause issues.
  with gr.Column(scale=0.5):
/home/zhaoyichen/TinyGPT-V/demo_v2.py:658: GradioDeprecationWarning: The enable_queue parameter has been deprecated. Please use the .queue() method instead.
  demo.launch(share=True, enable_queue=True)
Running on local URL: http://127.0.0.1:7860
2024/01/09 14:51:58 [W] [service.go:132] login to server failed: dial tcp 44.237.78.176:7000: connect: connection refused

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app.
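The "connection refused" on 44.237.78.176:7000 means the machine cannot reach Gradio's share (FRP) relay server, so no public share link can be created; the demo itself is still serving on the local URL. A minimal sketch to check whether that relay is reachable from the same machine (the address and port are taken verbatim from the log above):

import socket

# Probe the Gradio share (FRP) relay seen in the log above.
# "connection refused" here points at a firewall/proxy or the relay
# itself blocking the TCP connection, not at the model code.
try:
    socket.create_connection(("44.237.78.176", 7000), timeout=5).close()
    print("share relay reachable")
except OSError as exc:
    print(f"share relay unreachable: {exc}")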

chenhongming commented 9 months ago

chmod +x /home/anaconda3/envs/tinygptv/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2
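This marks the frpc tunnel binary that Gradio ships as executable, which fixes the more common cause of "Could not create share link" (a binary without the execute bit set). The path differs per environment, and the filename (frpc_linux_amd64_v0.2 here) depends on the installed gradio version; a sketch that locates and fixes the binary inside the active environment:

import os
import stat

import gradio

# Locate the frpc binary bundled with the installed gradio package.
# NOTE: the filename below matches the gradio version used in this issue
# and may differ for yours; check the gradio package directory if absent.
frpc = os.path.join(os.path.dirname(gradio.__file__), "frpc_linux_amd64_v0.2")
# Equivalent of `chmod +x` on that file.
os.chmod(frpc, os.stat(frpc).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
print(f"made executable: {frpc}")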

beautyhansonboy commented 9 months ago

chmod +x /home/anaconda3/envs/tinygptv/lib/python3.9/site-packages/gradio/frpc_linux_amd64_v0.2

I still get this error after running that command:

2024/01/10 09:46:34 [W] [service.go:132] login to server failed: dial tcp 44.237.78.176:7000: connect: connection refused
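Since the chmod fix did not help, the relay is likely unreachable from this network (e.g. outbound port 7000 is blocked), in which case no share link can work. A hedged workaround sketch, assuming `demo` is the gr.Blocks object built in demo_v2.py: skip the share link and serve locally, which also resolves the enable_queue deprecation warning seen in the log.

# Local-only launch for demo_v2.py, assuming `demo` is the gr.Blocks
# object constructed earlier in that script.
# - share=False skips the FRP relay entirely (no public link attempted)
# - server_name="0.0.0.0" exposes the app to other machines on the LAN
# - .queue() replaces the deprecated enable_queue=True argument
demo.queue().launch(share=False, server_name="0.0.0.0", server_port=7860)

From another machine, an SSH tunnel (ssh -L 7860:localhost:7860 user@host) then makes http://localhost:7860 usable without any public share link.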