Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
[Ollama Vision]
request query params:
- query: Please, in conjunction with the protagonist's traits as depicted in the accompanying image and the overall stylistic ambiance of the scene, conceptualize a subtle action for them to perform. The description should be provided in English, ensuring it does not exceed 200 words.
- url: http://127.0.0.1:11434
- model: llava:latest
- extra_model: none
- options: {'seed': 710906439665594}
- keep_alive: 0
loading model: llava:latest
HTTP Request: POST http://127.0.0.1:11434/api/generate "HTTP/1.1 200 OK"
/home/mark/.conda/envs/comfy-env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Requested to load SD3ClipModel_
Loading 1 new model
loaded completely 0.0 4541.693359375 True
transformer type: fun_5b
GGUF: True
model weight dtype: torch.bfloat16 manual cast dtype: torch.bfloat16
CompletedProcess(args=['/home/mark/.conda/envs/comfy-env/bin/python3.11', 'main.py'], returncode=-9) # this is where it starts to differ
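If it helps: my understanding is that in Python's `subprocess` module a negative `returncode` means the child process was terminated by a signal, so `-9` here is SIGKILL (signal 9), which on Linux is typically what the kernel OOM killer sends when memory runs out. A minimal sketch of that behavior (the `sh -c "kill -9 $$"` command is just a stand-in to kill the child with signal 9):

```python
import signal
import subprocess

# A child killed by signal N is reported by subprocess as returncode == -N,
# so returncode -9 corresponds to SIGKILL (signal 9).
proc = subprocess.run(["sh", "-c", "kill -9 $$"])
print(proc.returncode)       # -9
print(signal.SIGKILL.value)  # 9
```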
I also found the terminal output from back when it still worked:
Update check done.
got prompt
[Ollama Vision]
request query params:
- query: Please, in conjunction with the protagonist's traits as depicted in the accompanying image and the overall stylistic ambiance of the scene, conceptualize a subtle action for them to perform. The description should be provided in English, ensuring it does not exceed 200 words.
- url: http://127.0.0.1:11434
- model: llava:latest
- extra_model: none
- options: {'seed': 710906439665594}
- keep_alive: 0
loading model: llava:latest
HTTP Request: POST http://127.0.0.1:11434/api/generate "HTTP/1.1 200 OK"
/home/mark/.conda/envs/comfy-env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Requested to load SD3ClipModel_
Loading 1 new model
loaded completely 0.0 4541.693359375 True
transformer type: fun_5b
GGUF: True
model weight dtype: torch.bfloat16 manual cast dtype: torch.bfloat16 # from here on down the output should be correct; the failing run diverges at this point
Closest size: 240x416
The config attributes {'snr_shift_scale': 1.0} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
/home/mark/.conda/envs/comfy-env/lib/python3.11/site-packages/diffusers/configuration_utils.py:140: FutureWarning: Accessing config attribute `vae_latent_channels` directly via 'VaeImageProcessor' object attribute is deprecated. Please access 'vae_latent_channels' over 'VaeImageProcessor's config object instead, e.g. 'scheduler.config.vae_latent_channels'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:42<00:00, 1.70s/it]
torch.Size([1, 10, 16, 52, 30])
Prompt executed in 76.63 seconds
https://github.com/user-attachments/assets/8d02dc13-42d0-469e-b86c-46ccd24a6b5a
https://github.com/user-attachments/assets/9de83f0d-a301-4aa0-90d4-fd8d6337ca07
Hi, here is what happened. I was testing how to upscale a video; after generating these two videos I installed the updates, and then it stopped working. The first log above is the terminal output after the error, from trying again just now (dragging the video back in, same workflow, same seed); the second log is from when it still worked.
I remember CogVideoXWrapper was also updated at the same time; I'm not sure whether that's related. Could you please help look into it? Thank you!