0Xiaohei0 / LocalAIVtuber

A tool for hosting AI vtubers that runs fully locally and offline.

Error: [WinError 10061] when launching the app #30

Open · TrueAlexFisher opened 1 hour ago

TrueAlexFisher commented 1 hour ago

As I am not familiar with Python or coding, my issue report may read oddly; I apologize for that and will try my best. Thank you for your help and patience!

ISSUE: I am unable to launch the project after installation (both the manual step-by-step install and the automatic run.bat install give the same result).

I am using Windows 11 with Python 3.10.0 as indicated in the instructions, and I have reinstalled the program four times: twice manually step by step and twice "automatically" via the pre-packaged file.

I expected the project to run on my GPU, so I installed CUDA and completed all the prerequisites from the manual installation guide. The result is unfortunately always the same on my PC. Please find the terminal output below (whether launched via run.bat or manually via cmd in the project folder, the message is identical). The message I get in Russian translates roughly as: "Error: [WinError 10061] No connection could be made because the target machine actively refused it. ERROR:websocket:[WinError 10061] No connection could be made because the target machine actively refused it - goodbye"

Thank you!

[lots of startup output], then:

Starting main.py...
checking: D:\ai\LAV_v0.2\plugins\Aya_LLM_GGUF
checking: D:\ai\LAV_v0.2\plugins\ChatGPT_Azure
checking: D:\ai\LAV_v0.2\plugins\Chess
checking: D:\ai\LAV_v0.2\plugins\gpt_sovits
checking: D:\ai\LAV_v0.2\plugins\Idle_think
checking: D:\ai\LAV_v0.2\plugins\Local_EN_to_JA
checking: D:\ai\LAV_v0.2\plugins\Local_LLM
checking: D:\ai\LAV_v0.2\plugins\No_Translate
checking: D:\ai\LAV_v0.2\plugins\Rana_LLM_gguf
checking: D:\ai\LAV_v0.2\plugins\rvc
checking: D:\ai\LAV_v0.2\plugins\silero
checking: D:\ai\LAV_v0.2\plugins\TwitchChatFetch
checking: D:\ai\LAV_v0.2\plugins\vitsTTS
checking: D:\ai\LAV_v0.2\plugins\VoiceInput
checking: D:\ai\LAV_v0.2\plugins\voicevox
checking: D:\ai\LAV_v0.2\plugins\VtubeStudio
checking: D:\ai\LAV_v0.2\plugins\YoutubeChatFetch
checking: D:\ai\LAV_v0.2\plugins\__pycache__
D:\ai\LAV_v0.2\installer_files\env\lib\site-packages\whisper\__init__.py:146: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  checkpoint = torch.load(fp, map_location=device)
IMPORTANT: You are using gradio version 4.29.0, however version 4.44.1 is available, please upgrade.
--------
llama_model_loader: loaded meta data with 34 key-value pairs and 291 tensors from D:\ai\LAV_v0.2\plugins\Aya_LLM_GGUF\models\aya-v0.2-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Mistral 7B Instruct v0.3
llama_model_loader: - kv   3:                            general.version str              = v0.3
llama_model_loader: - kv   4:                       general.organization str              = Mistralai
llama_model_loader: - kv   5:                           general.finetune str              = Instruct
llama_model_loader: - kv   6:                           general.basename str              = Mistral
llama_model_loader: - kv   7:                         general.size_label str              = 7B
llama_model_loader: - kv   8:                            general.license str              = apache-2.0
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 32768
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 15
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 32768
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  21:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  22:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  23:                      tokenizer.ggml.tokens arr[str,32768]   = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv  24:                      tokenizer.ggml.scores arr[f32,32768]   = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,32768]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  26:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  27:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  28:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  29:            tokenizer.ggml.padding_token_id u32              = 2
llama_model_loader: - kv  30:               tokenizer.ggml.add_bos_token bool             = true
llama_model_loader: - kv  31:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  32:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  33:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_K:  193 tensors
llama_model_loader: - type q6_K:   33 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 771
llm_load_vocab: token to piece cache size = 0.1731 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32768
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 8
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 4
llm_load_print_meta: n_embd_k_gqa     = 1024
llm_load_print_meta: n_embd_v_gqa     = 1024
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 14336
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 7.25 B
llm_load_print_meta: model size       = 4.07 GiB (4.83 BPW)
llm_load_print_meta: general.name     = Mistral 7B Instruct v0.3
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 2 '</s>'
llm_load_print_meta: LF token         = 781 '<0x0A>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size =    0.14 MiB
llm_load_tensors:        CPU buffer size =  4169.52 MiB
................................................................................................
llama_new_context_with_model: n_ctx      = 32768
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:        CPU KV buffer size =  4096.00 MiB
llama_new_context_with_model: KV self size  = 4096.00 MiB, K (f16): 2048.00 MiB, V (f16): 2048.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.13 MiB
llama_new_context_with_model:        CPU compute buffer size =  2144.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'general.name': 'Mistral 7B Instruct v0.3', 'general.architecture': 'llama', 'general.type': 'model', 'general.basename': 'Mistral', 'general.finetune': 'Instruct', 'general.version': 'v0.3', 'llama.context_length': '32768', 'general.organization': 'Mistralai', 'general.size_label': '7B', 'general.license': 'apache-2.0', 'llama.block_count': '32', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.head_count': '32', 'tokenizer.ggml.eos_token_id': '2', 'general.file_type': '15', 'llama.attention.head_count_kv': '8', 'llama.rope.freq_base': '1000000.000000', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.vocab_size': '32768', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.pre': 'default', 'tokenizer.ggml.add_space_prefix': 'true', 'tokenizer.ggml.model': 'llama', 'general.quantization_version': '2', 'tokenizer.ggml.bos_token_id': '1', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.add_bos_token': 'true', 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.chat_template': "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{{ '<s>' + system_message }}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '[INST] ' + content + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}"}
Available chat formats from metadata: chat_template.default
Using gguf chat template: {% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{{ '<s>' + system_message }}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '[INST] ' + content + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}
Using chat eos_token: </s>
Using chat bos_token: <s>
self.voice_configs [{'name': 'leaf', 'sovits_path': 'leaf\\leaf_e8_s136.pth', 'gpt_path': 'leaf\\s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt', 'reference_audio_path': 'leaf\\The birch canoe slid on the smooth planks..wav', 'reference_audio_text': 'The birch canoe slid on the smooth planks.Glue the sheet to the dark blue background.', 'reference_audio_language': 'en'}, {'name': 'nene', 'sovits_path': 'nene\\nene30_e8_s328.pth', 'gpt_path': 'nene\\nene30-e15.ckpt', 'reference_audio_path': 'nene\\The sun is shining brightly in the clear blue sky..wav', 'reference_audio_text': 'The sun is shining brightly in the clear blue sky.', 'reference_audio_language': 'en'}]
self.current_voice_config {'name': 'leaf', 'sovits_path': 'leaf\\leaf_e8_s136.pth', 'gpt_path': 'leaf\\s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt', 'reference_audio_path': 'leaf\\The birch canoe slid on the smooth planks..wav', 'reference_audio_text': 'The birch canoe slid on the smooth planks.Glue the sheet to the dark blue background.', 'reference_audio_language': 'en'}
{'name': 'leaf', 'sovits_path': 'leaf\\leaf_e8_s136.pth', 'gpt_path': 'leaf\\s1bert25hz-2kh-longer-epoch=68e-step=50232.ckpt', 'reference_audio_path': 'leaf\\The birch canoe slid on the smooth planks..wav', 'reference_audio_text': 'The birch canoe slid on the smooth planks.Glue the sheet to the dark blue background.', 'reference_audio_language': 'en'}
init(sovits_path, gpt_path, D:\ai\LAV_v0.2\plugins\gpt_sovits\models\leaf\The birch canoe slid on the smooth planks..wav, The birch canoe slid on the smooth planks.Glue the sheet to the dark blue background., en)
Default reference audio path: D:\ai\LAV_v0.2\plugins\gpt_sovits\models\leaf\The birch canoe slid on the smooth planks..wav
Default reference audio text: The birch canoe slid on the smooth planks.Glue the sheet to the dark blue background.
Default reference audio language: en
Some weights of the model checkpoint at D:\ai\LAV_v0.2\plugins\gpt_sovits\GPT_SoVITS\./pretrained_models/chinese-hubert-base were not used when initializing HubertModel: ['encoder.pos_conv_embed.conv.weight_g', 'encoder.pos_conv_embed.conv.weight_v']
- This IS expected if you are initializing HubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing HubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of HubertModel were not initialized from the model checkpoint at D:\ai\LAV_v0.2\plugins\gpt_sovits\GPT_SoVITS\./pretrained_models/chinese-hubert-base and are newly initialized: ['encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'encoder.pos_conv_embed.conv.parametrizations.weight.original1']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
D:\ai\LAV_v0.2\plugins\gpt_sovits\GPT_SoVITS\api_direct.py:172: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  dict_s2 = torch.load(sovits_path, map_location="cpu")
D:\ai\LAV_v0.2\installer_files\env\lib\site-packages\torch\nn\utils\weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
  WeightNorm.apply(module, name, dim)
D:\ai\LAV_v0.2\plugins\gpt_sovits\GPT_SoVITS\api_direct.py:196: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  dict_s1 = torch.load(gpt_path, map_location="cpu")
init finished, time elapsed: 21.16550350189209
Token Found, attempting to authenticate with token...
D:\ai\LAV_v0.2\installer_files\env\lib\site-packages\gradio\components\base.py:181: UserWarning: show_label has no effect when container is False.
  warnings.warn("show_label has no effect when container is False.")
D:\ai\LAV_v0.2\installer_files\env\lib\site-packages\gradio\interface.py:377: UserWarning: The `allow_flagging` parameter in `Interface` nowtakes a string value ('auto', 'manual', or 'never'), not a boolean. Setting parameter to: 'never'.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Error: [WinError 10061] No connection could be made because the target machine actively refused it
ERROR:websocket:[WinError 10061] No connection could be made because the target machine actively refused it - goodbye
### Connection closed ###
Failed to connect to vtube studio, if you want vtube studio functionalities, please start vtube studio and enable plugins.
You pressed the 'ctrl+a' key!
Interrupting pipeline
0Xiaohei0 commented 1 hour ago

It's already running; just go to http://localhost:7860/ in your browser.
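
If you want to double-check that the UI server is actually listening before opening the browser, here is a minimal sketch (illustrative only, not part of LocalAIVtuber; it assumes the default Gradio port 7860 shown in the "Running on local URL" log line):

```python
import socket

# Quick check that the Gradio UI from the log
# ("Running on local URL: http://127.0.0.1:7860") accepts connections.
def port_is_open(host: str = "127.0.0.1", port: int = 7860) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(2.0)
        return sock.connect_ex((host, port)) == 0  # 0 means connection accepted

if __name__ == "__main__":
    if port_is_open():
        print("UI is up - open http://127.0.0.1:7860 in your browser")
    else:
        print("Nothing is listening on 7860 - the app may still be starting")
```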

0Xiaohei0 commented 1 hour ago

The error is from the VTube Studio plugin; if you are not using VTube Studio, you can safely ignore it.
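
For context, the "ERROR:websocket:" line comes from the websocket-client library: the VtubeStudio plugin tries to open a websocket to VTube Studio's plugin API, and when VTube Studio is not running the OS refuses the TCP connection, which Windows reports as [WinError 10061]. A minimal sketch of that failure mode (illustrative only; ws://localhost:8001 is VTube Studio's default API port, but the exact address the plugin uses is an assumption here):

```python
import websocket  # pip install websocket-client

# VTube Studio's plugin API listens on ws://localhost:8001 by default
# (assumed here; the plugin's actual URL may differ). If VTube Studio is
# closed or its API is disabled, the connection is refused - on Windows
# that surfaces as [WinError 10061].
try:
    ws = websocket.create_connection("ws://localhost:8001", timeout=3)
    print("Connected to VTube Studio's API")
    ws.close()
except (ConnectionRefusedError, websocket.WebSocketException) as err:
    print(f"VTube Studio not reachable: {err}")
```

So the app itself started fine; the refused connection is just this optional plugin giving up after VTube Studio was not found.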