Closed: reefor closed this issue 2 weeks ago
Performance issues are typically due to hardware specifications, so I cannot comment on those. To diagnose the other issues, please provide the full log first.
I'd love to, but I can't find the log. If you can tell me where to look, I'll post it. I'm running Comfy Portable and can't seem to find the folder.
If you're talking about the basic boot-up log, this is it. I can't seem to find any kind of error log, though.
[2024-11-09 18:40:55.551] ComfyUI startup time: 2024-11-09 18:40:55.551367
[2024-11-09 18:40:55.629] Platform: Windows
[2024-11-09 18:40:55.629] Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
[2024-11-09 18:40:55.629] Python executable: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\python.exe
[2024-11-09 18:40:55.629] ComfyUI Path: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI
[2024-11-09 18:40:55.629] Log path: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\comfyui.log
[2024-11-09 18:40:55.645]
Prestartup times for custom nodes:
[2024-11-09 18:40:55.645] 0.0 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-11-09 18:40:55.645] 3.1 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-11-09 18:40:55.645]
Total VRAM 6144 MB, total RAM 16271 MB
[2024-11-09 18:41:06.056] pytorch version: 2.5.1+cu124
[2024-11-09 18:41:06.087] Set vram state to: NORMAL_VRAM
[2024-11-09 18:41:06.087] Device: cuda:0 NVIDIA GeForce GTX 1060 : cudaMallocAsync
[2024-11-09 18:41:11.679] Using pytorch cross attention
[2024-11-09 18:41:20.333] [Prompt Server] web root: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\web
[2024-11-09 18:41:26.698] ### Loading: ComfyUI-Impact-Pack (V7.10.6)
[2024-11-09 18:41:26.980] ### Loading: ComfyUI-Impact-Pack (Subpack: V0.7)
[2024-11-09 18:41:27.198] [Impact Pack] Wildcards loading done.
[2024-11-09 18:41:28.651] C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\albumentations\__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.21 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
[2024-11-09 18:41:29.729] C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\Lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {name} is deprecated, please import via timm.layers", FutureWarning)
[2024-11-09 18:41:32.372] Total VRAM 6144 MB, total RAM 16271 MB
[2024-11-09 18:41:32.372] pytorch version: 2.5.1+cu124
[2024-11-09 18:41:32.388] Set vram state to: NORMAL_VRAM
[2024-11-09 18:41:32.388] Device: cuda:0 NVIDIA GeForce GTX 1060 : cudaMallocAsync
[2024-11-09 18:41:32.591] ### Loading: ComfyUI-Manager (V2.51.9)
[2024-11-09 18:41:32.887] ### ComfyUI Revision: 2817 [6ee066a1] | Released on '2024-11-08'
[2024-11-09 18:41:33.153] ------------------------------------------
[2024-11-09 18:41:33.215] ### N-Suite Revision: ae7cc848
[2024-11-09 18:41:33.231] Current version of packaging: 23.2
[2024-11-09 18:41:33.262] Version of cpuinfo: Not found
[2024-11-09 18:41:33.262] Current version of git: 3.1.43
[2024-11-09 18:41:33.262] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-11-09 18:41:33.278] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-11-09 18:41:33.278] Current version of moviepy: 1.0.3
[2024-11-09 18:41:33.278] Current version of cv2: 4.10.0
[2024-11-09 18:41:33.294] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-11-09 18:41:33.356] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[2024-11-09 18:41:33.387] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-11-09 18:41:33.996] Current version of skbuild: 0.18.1
[2024-11-09 18:41:33.996] Version of typing: Not found
[2024-11-09 18:41:34.043] Current version of diskcache: 5.6.3
[2024-11-09 18:41:34.043] Installing llama_cpp...
[2024-11-09 18:41:36.304] Python version: None
[2024-11-09 18:41:36.304] OS: Windows
[2024-11-09 18:41:36.304] OS bit: 64
[2024-11-09 18:41:36.304] Platform tag: cp312-cp312-win_amd64
[2024-11-09 18:41:36.304] Unsupported Python version. Please use Python 3.9, 3.10 or 3.11.
[2024-11-09 18:41:36.304] Current version of timm: 1.0.11
[2024-11-09 18:41:36.492] Traceback (most recent call last):
[2024-11-09 18:41:36.492] File "C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes\__init__.py", line 64, in <module>
Import times for custom nodes:
[2024-11-09 18:41:38.070] 0.0 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
[2024-11-09 18:41:38.070] 0.0 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Img2PaintingAssistant
[2024-11-09 18:41:38.070] 0.1 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
[2024-11-09 18:41:38.070] 0.1 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-LivePortraitKJ
[2024-11-09 18:41:38.070] 0.1 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
[2024-11-09 18:41:38.070] 0.2 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
[2024-11-09 18:41:38.070] 0.3 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
[2024-11-09 18:41:38.070] 0.5 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
[2024-11-09 18:41:38.070] 0.6 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-11-09 18:41:38.070] 1.0 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
[2024-11-09 18:41:38.085] 2.9 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AdvancedLivePortrait
[2024-11-09 18:41:38.085] 4.2 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Inspyrenet-Rembg
[2024-11-09 18:41:38.085] 4.3 seconds: C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes
[2024-11-09 18:41:38.085]
[2024-11-09 18:41:38.101] Starting server
I also recently ran the three update .bat files (update_comfyui, update_comfyui_and_python_dependencies, and update_comfyui_stable).
This fixed that issue for me:
pip install llama-cpp-python==0.1.78 --no-cache-dir
You should also consider getting more VRAM; 6 GB is no good for video work at all. Maybe try running on a cloud service instead.
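As for where to run it: the portable build has no system-wide Python on PATH, so the command most likely needs to go through the embedded interpreter. A minimal sketch, assuming the default portable layout shown in the log above:
cd C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable
python_embeded\python.exe -m pip install llama-cpp-python==0.1.78 --no-cache-dir
The folder you run it from shouldn't matter, as long as you call python_embeded\python.exe by its full or relative path rather than a bare pip.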
"C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 4, in
from llama_cpp import Llama
This log indicates an issue that should be addressed in the ComfyUI-N-Nodes repo. For some reason, the dependencies were not installed properly during the installation process.
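If it really is just missing dependencies, reinstalling the node pack's requirements against the embedded Python may be enough. A sketch, assuming the pack ships a requirements.txt in its folder and using the paths from the log above:
C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\python.exe -m pip install -r C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes\requirements.txt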
Thanks, I guess that's a requirement of ComfyUI-N-Nodes; I'll post a message there so they can add it to the requirements.txt. As for installing it, do I just run that command in the main folder (C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes), in one of the folders inside it, or in the py folder?
I know I need more VRAM; my laptop is old. I'm trying to save for a new one now. Till then I just have to try to muddle through.
"C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-N-Nodes\py\gptcpp_node.py", line 4, in from llama_cpp import Llama
This log indicates that this is an issue that should be addressed in ComfyUI-N-Nodes repo. And for some reason, the dependencies were not installed properly during installation process.
Thanks, I'm trying to fix that now. And now that I see the line above indicating it's in N-Nodes, I can hopefully do a better job of reading this kind of thing from now on.
I don't see a custom node listed above the Python error. I'm running Portable; does Portable use CUDA instead of Python, or is it something else?
Never mind, I figured it out. The same folder as the requirements.txt file, I'm guessing. Hope that's right :)
Your question
I have 2 or 3 issues. I'm using Comfy Portable on an old Windows machine.
I don't think I've ever been able to install llama. I get the error message below when I boot up Comfy, but it still runs, so I never worried about it. Is llama important, and do I need to install it? If so, can someone please tell me which folder to do the CMD install in? This is the error message:
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
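A quick way to check whether the module is visible to the portable build's embedded interpreter (paths assumed from the log below):
C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\python.exe -c "from llama_cpp import Llama; print('llama_cpp OK')"
If that prints the OK message, the module is installed where ComfyUI can see it; if it raises ModuleNotFoundError, it isn't.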
This one is new and more concerning; this error message just started today. I know nothing about Python. Do I need to upgrade it? Is it as simple as finding it and doing the pip install in a certain folder via CMD? If so, what version should I look for and what folder do I install it in? The message is: Unsupported Python version. Please use Python 3.9, 3.10 or 3.11.
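That message appears to come from a custom node's installer rather than from ComfyUI itself; the log shows the portable build's embedded Python is 3.12.7, which that installer does not accept. You can confirm which Python the portable build actually uses with something like:
C:\Users\Paul\Desktop\Comfyui\ComfyUI_windows_portable\python_embeded\python.exe --version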
The problems I notice: it has always run slow. For example, when using LivePortrait vid2vid I can only do 2-second segments at a time; anything more will lock up my computer, and it takes about half an hour to do a 2-second clip. I always just assumed it's because I have an old system and not enough VRAM and RAM.
The new problem since downloading the latest version of Portable is that the web UI locks up. I have to drop in the LivePortrait workflow and then refresh the page before it will unlock. By locking up I mean I can't do anything on the main screen, but the right-hand side with the nodes manager still works. I'm not sure what's causing it, and there are no error messages to point me in the right direction.
Anyway, if someone can shed some light on the llama and Python version things, I'd really appreciate it.
Logs
Other
No response