shadowcz007 / comfyui-mixlab-nodes

Workflow-to-APP、ScreenShare&FloatingVideo、GPT & 3D、SpeechRecognition&TTS
https://mixlabnodes.com
MIT License

Starting local LLM fails #220

Closed. easylolicon closed this issue 4 months ago.

easylolicon commented 4 months ago

```
Available chat formats from metadata: chat_template.default
Exception in thread Thread-5 (run_uvicorn):
Traceback (most recent call last):
  File "/www/miniconda/envs/comfyui/lib/python3.10/logging/config.py", line 544, in configure
    formatters[name] = self.configure_formatter(
  File "/www/miniconda/envs/comfyui/lib/python3.10/logging/config.py", line 656, in configure_formatter
    result = self.configure_custom(config)
  File "/www/miniconda/envs/comfyui/lib/python3.10/logging/config.py", line 475, in configure_custom
    result = c(**kwargs)
  File "/www/miniconda/envs/comfyui/lib/python3.10/site-packages/uvicorn/logging.py", line 42, in __init__
    self.use_colors = sys.stdout.isatty()
AttributeError: 'ComfyUIManagerLogger' object has no attribute 'isatty'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/www/miniconda/envs/comfyui/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/www/miniconda/envs/comfyui/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/www/ComfyUI/custom_nodes/comfyui-mixlab-nodes/__init__.py", line 695, in run_uvicorn
    uvicorn.run(
  File "/www/miniconda/envs/comfyui/lib/python3.10/site-packages/uvicorn/main.py", line 513, in run
    config = Config(
  File "/www/miniconda/envs/comfyui/lib/python3.10/site-packages/uvicorn/config.py", line 272, in __init__
    self.configure_logging()
  File "/www/miniconda/envs/comfyui/lib/python3.10/site-packages/uvicorn/config.py", line 364, in configure_logging
    logging.config.dictConfig(self.log_config)
  File "/www/miniconda/envs/comfyui/lib/python3.10/logging/config.py", line 811, in dictConfig
    dictConfigClass(config).configure()
  File "/www/miniconda/envs/comfyui/lib/python3.10/logging/config.py", line 547, in configure
    raise ValueError('Unable to configure '
ValueError: Unable to configure formatter 'default'
Disable Log Console. client id: 61113896c1d44001aa532301d46a1694, console id: aa9e02f1-b460-467e-a8ec-3569c2f3b338
```
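For context, the traceback shows uvicorn's log formatter calling `sys.stdout.isatty()`, but `sys.stdout` has been replaced by another extension (ComfyUI-Manager, going by the `ComfyUIManagerLogger` class name) with a logger object that has no `isatty()` method. A minimal illustration of that failure mode (`FakeLogger` is a stand-in for the real class, not its actual implementation):

```python
import sys

class FakeLogger:
    """Stand-in for a stdout-capturing logger that lacks isatty()."""
    def write(self, text):
        pass
    def flush(self):
        pass

sys.stdout = FakeLogger()
sys.stdout.isatty()  # raises AttributeError, just like in the traceback above
```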

doggeddalle commented 4 months ago

Getting the same 'isatty' issue after installing all the needed packages.

Maybe a bug.

citkane commented 4 months ago

I am getting the same issue. ComfyUI boots perfectly with no errors, but when I click on a model name in the "Mixlab" pop-up, the errors occur. A button appears linking to http://127.0.0.1:9090, but there is nothing at that URL. I am using the "Prompt Generate" node, which still appears to work correctly, but it seems impossible to switch llamafile models because of the error.

Click to expand the log

```bash
FETCH DATA from: /home/michaeladmin/StableDiffusion/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
#read_workflow_json_files_all /home/michaeladmin/StableDiffusion/ComfyUI/custom_nodes/comfyui-mixlab-nodes/app/
/mixlab/folder_paths False 'llamafile'
llama_model_loader: loaded meta data with 22 key-value pairs and 195 tensors from /home/michaeladmin/StableDiffusion/ComfyUI/models/llamafile/Phi-3-mini-4k-instruct-fp16.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0: general.architecture str = phi3
llama_model_loader: - kv   1: general.name str = Phi3
llama_model_loader: - kv   2: phi3.context_length u32 = 4096
llama_model_loader: - kv   3: phi3.embedding_length u32 = 3072
llama_model_loader: - kv   4: phi3.feed_forward_length u32 = 8192
llama_model_loader: - kv   5: phi3.block_count u32 = 32
llama_model_loader: - kv   6: phi3.attention.head_count u32 = 32
llama_model_loader: - kv   7: phi3.attention.head_count_kv u32 = 32
llama_model_loader: - kv   8: phi3.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv   9: phi3.rope.dimension_count u32 = 96
llama_model_loader: - kv  10: general.file_type u32 = 1
llama_model_loader: - kv  11: tokenizer.ggml.model str = llama
llama_model_loader: - kv  12: tokenizer.ggml.tokens arr[str,32064] = ["", "", "", "<0x00>", "<...
llama_model_loader: - kv  13: tokenizer.ggml.scores arr[f32,32064] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14: tokenizer.ggml.token_type arr[i32,32064] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv  16: tokenizer.ggml.eos_token_id u32 = 32000
llama_model_loader: - kv  17: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv  18: tokenizer.ggml.padding_token_id u32 = 32000
llama_model_loader: - kv  19: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv  20: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv  21: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 130 tensors
llm_load_vocab: special tokens definition check successful ( 323/32064 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = phi3
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32064
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 3072
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 96
llm_load_print_meta: n_embd_head_k = 96
llm_load_print_meta: n_embd_head_v = 96
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 3072
llm_load_print_meta: n_embd_v_gqa = 3072
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8192
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 3B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 3.82 B
llm_load_print_meta: model size = 7.12 GiB (16.00 BPW)
llm_load_print_meta: general.name = Phi3
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 32000 '<|endoftext|>'
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: PAD token = 32000 '<|endoftext|>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_print_meta: EOT token = 32007 '<|end|>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 2080 Ti, compute capability 7.5, VMM: yes
llm_load_tensors: ggml ctx size = 0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 187.88 MiB
llm_load_tensors: CUDA0 buffer size = 7100.64 MiB
....................................................................................
llama_new_context_with_model: n_ctx = 4352
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 1632.00 MiB
llama_new_context_with_model: KV self size = 1632.00 MiB, K (f16): 816.00 MiB, V (f16): 816.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.13 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 316.50 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 14.51 MiB
llama_new_context_with_model: graph nodes = 1286
llama_new_context_with_model: graph splits = 2
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
Model metadata: {'tokenizer.chat_template': "{{ bos_token }}{% for message in messages %}{{'<|' + message['role'] + '|>' + '\n' + message['content'] + '<|end|>\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% else %}{{ eos_token }}{% endif %}", 'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.add_bos_token': 'true', 'tokenizer.ggml.padding_token_id': '32000', 'tokenizer.ggml.eos_token_id': '32000', 'general.name': 'Phi3', 'general.architecture': 'phi3', 'phi3.context_length': '4096', 'phi3.attention.head_count_kv': '32', 'phi3.embedding_length': '3072', 'tokenizer.ggml.unknown_token_id': '0', 'phi3.feed_forward_length': '8192', 'phi3.attention.layer_norm_rms_epsilon': '0.000010', 'phi3.block_count': '32', 'tokenizer.ggml.bos_token_id': '1', 'phi3.attention.head_count': '32', 'phi3.rope.dimension_count': '96', 'tokenizer.ggml.model': 'llama', 'general.file_type': '1'}
Available chat formats from metadata: chat_template.default
Exception in thread Thread-6 (run_uvicorn):
Traceback (most recent call last):
  File "/usr/lib/python3.11/logging/config.py", line 541, in configure
    formatters[name] = self.configure_formatter(
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/config.py", line 653, in configure_formatter
    result = self.configure_custom(config)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/logging/config.py", line 472, in configure_custom
    result = c(**kwargs)
    ^^^^^^^^^^^
  File "/home/michaeladmin/StableDiffusion/ComfyUI/comfyui-env/lib/python3.11/site-packages/uvicorn/logging.py", line 42, in __init__
    self.use_colors = sys.stdout.isatty()
    ^^^^^^^^^^^^^^^^^
AttributeError: 'ComfyUIManagerLogger' object has no attribute 'isatty'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/michaeladmin/StableDiffusion/ComfyUI/custom_nodes/comfyui-mixlab-nodes/__init__.py", line 695, in run_uvicorn
    uvicorn.run(
  File "/home/michaeladmin/StableDiffusion/ComfyUI/comfyui-env/lib/python3.11/site-packages/uvicorn/main.py", line 513, in run
    config = Config(
    ^^^^^^^
  File "/home/michaeladmin/StableDiffusion/ComfyUI/comfyui-env/lib/python3.11/site-packages/uvicorn/config.py", line 272, in __init__
    self.configure_logging()
  File "/home/michaeladmin/StableDiffusion/ComfyUI/comfyui-env/lib/python3.11/site-packages/uvicorn/config.py", line 364, in configure_logging
    logging.config.dictConfig(self.log_config)
  File "/usr/lib/python3.11/logging/config.py", line 812, in dictConfig
    dictConfigClass(config).configure()
  File "/usr/lib/python3.11/logging/config.py", line 544, in configure
    raise ValueError('Unable to configure '
ValueError: Unable to configure formatter 'default'
```

Screenshot from 2024-05-10 17-03-05

easylolicon commented 4 months ago

I commented out the failing isatty() call and forced self.use_colors = False, which fixed the problem.
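For reference, a sketch of that workaround as applied to the uvicorn/logging.py line named in the traceback (line 42 there; the exact path and line number may differ in your install):

```python
# site-packages/uvicorn/logging.py, inside the formatter's __init__
# before:
#     self.use_colors = sys.stdout.isatty()
# workaround: comment the isatty() call out and disable colored output unconditionally
self.use_colors = False
```

Note that this edits an installed third-party package, so it will be undone whenever uvicorn is reinstalled or upgraded.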


I later also found that the code hard-codes 127.0.0.1, while I access the service over the LAN, so I additionally changed the relevant ip and address entries in the files. If you only use it locally (127.0.0.1), you can ignore this last part.
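A sketch of that kind of change, assuming it lands in the uvicorn.run() call inside run_uvicorn (the actual argument names and port in comfyui-mixlab-nodes/__init__.py may differ; 9090 is only the port shown in the button from the earlier comment):

```python
# comfyui-mixlab-nodes/__init__.py, inside run_uvicorn() -- illustrative values only
uvicorn.run(
    app,
    host="0.0.0.0",  # listen on all interfaces instead of the hard-coded 127.0.0.1
    port=9090,       # example port; use whatever the node actually serves on
)
```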

shadowcz007 commented 4 months ago

Thanks for the feedback. This has been fixed:

```
self.use_colors = sys.stdout.isatty()
AttributeError: 'ComfyUIManagerLogger' object has no attribute 'isatty'
```
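One possible shape for such a fix is to make sure the replaced sys.stdout exposes isatty() before uvicorn configures its logging. This is only a hedged sketch of that idea, not necessarily the actual patch applied in the repository:

```python
import sys

# If another extension swapped sys.stdout for an object without isatty(),
# add a harmless stub so uvicorn's formatter no longer raises AttributeError.
# (Assumes the replacement object accepts new instance attributes.)
if not hasattr(sys.stdout, "isatty"):
    sys.stdout.isatty = lambda: False
```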