prodogape / ComfyUI-clip-interrogator

Unofficial ComfyUI custom nodes of clip-interrogator
MIT License

Redeployed ComfyUI; now the models cannot be used at all #3

Open wibur0620 opened 7 months ago

wibur0620 commented 7 months ago

The model is not downloaded; these are my settings in ComfyUI. The console prints the following, yet no error is raised:

    Load model: EVA01-g-14/laion400m_s11b_b41k
    Loading caption model blip-large...
    Loading CLIP model EVA01-g-14/laion400m_s11b_b41k...
    Loaded EVA01-g-14 model config.
    Unknown model (eva_giant_patch14_224)
    Prompt executed in 5.09 seconds

wibur0620 commented 7 months ago

Models I had used before redeploying ComfyUI can no longer be used at all. Only when I select a model I have never used before does it download automatically:

    Load model: RN101-quickgelu/openai
    Loading caption model blip-large...
    Loading CLIP model RN101-quickgelu/openai...
    Loading pretrained RN101-quickgelu from OpenAI.
    4%|█▌ | 11.3M/292M [00:12<04:44, 985kiB/s]

wibur0620 commented 7 months ago

Locations: ComfyUI\models\clip and ComfyUI\cache — these two directories are where the automatically downloaded models are stored.
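For reference, the node resolves its download location from the `cache_dir` key in `model_config.yaml` next to the custom node (see the `__init__` in inference.py below). A minimal sketch of how such a config is parsed — the YAML contents here are illustrative, not the actual file:

```python
import yaml

# Illustrative stand-in for model_config.yaml; the real file ships with the node.
config_text = """
cache_dir: ComfyUI/models/clip
model_names:
  ViT-L-14/openai: ViT-L-14
"""

cfg = yaml.safe_load(config_text)
print(cfg["cache_dir"])        # directory checkpoints are saved to
print(list(cfg["model_names"]))  # model identifiers the node exposes
```

So deleting the directory named by `cache_dir` should be enough to force a re-download — unless a library caches weights elsewhere.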

wibur0620 commented 7 months ago

The current situation: for models I used before, the plugin does not seem to realize I have deleted them. Even though ComfyUI was redeployed from scratch, the plugin still goes through its normal model-loading flow, then fails to load the model and does not re-download it. I don't understand why this happens. Models I had never downloaded before download and work fine. The plugin also doesn't seem to be caching models on the C: drive.
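One plausible explanation is a leftover cache outside the ComfyUI tree: the BLIP caption model and some CLIP checkpoints are fetched through libraries that default to per-user cache directories, which survive a ComfyUI redeploy. A quick stdlib check for the usual suspects — the paths below are assumptions, not confirmed locations used by this node:

```python
from pathlib import Path

# Common per-user cache locations that survive a ComfyUI redeploy (assumed, not exhaustive).
candidates = [
    Path.home() / ".cache" / "huggingface",  # transformers / huggingface_hub downloads
    Path.home() / ".cache" / "clip",         # OpenAI CLIP checkpoint cache
]

for path in candidates:
    print(path, "exists" if path.exists() else "missing")
```

On Windows the equivalents live under the user profile (e.g. `C:\Users\<name>\.cache\...`); if one of these exists, deleting it may explain (and fix) the "loads without downloading, then fails" behavior.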

wibur0620 commented 7 months ago

This is the content of the file "ComfyUI\custom_nodes\ComfyUI-clip-interrogator\module\inference.py":

Reference to: https://github.com/pharmapsychotic/clip-interrogator-ext/blob/main/scripts/clip_interrogator_ext.py

    import os
    import yaml
    import torch
    from torchvision import transforms
    from PIL import Image
    from clip_interrogator import Config, Interrogator


    class CI_Inference:
        ci_model: Interrogator = None
        model_name: str
        mode_list = ["best", "classic", "fast", "negative"]
        cache_dir: str
        model_names: dict
        model_list: list

        def __init__(self):
            # Read the cache directory and model-name mapping from model_config.yaml.
            with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "../model_config.yaml")) as fp:
                model_config = yaml.load(fp, Loader=yaml.FullLoader)
            self.cache_dir = model_config["cache_dir"]
            self.model_names = model_config["model_names"]
            self.model_list = list(self.model_names.keys())

        def _load_model(self, model_name):
            # Reload only when a different CLIP model is requested.
            if not (self.ci_model and model_name == self.ci_model.config.clip_model_name):
                print(f"Load model: {model_name}")
                config = Config(
                    # device="cuda",
                    clip_model_path=self.cache_dir,
                    clip_model_name=model_name,
                )
                self.ci_model = Interrogator(config)

        def _interrogate(self, image, mode, caption=None):
            if mode == 'best':
                prompt = self.ci_model.interrogate(image, caption=caption)
            elif mode == 'classic':
                prompt = self.ci_model.interrogate_classic(image, caption=caption)
            elif mode == 'fast':
                prompt = self.ci_model.interrogate_fast(image, caption=caption)
            elif mode == 'negative':
                prompt = self.ci_model.interrogate_negative(image)
            else:
                raise Exception(f"Unknown mode {mode}")
            return prompt

        def image_to_prompt(self, image, mode, model_name):
            try:
                self._load_model(model_name)
                # image.convert('RGB')
                # ComfyUI images are (batch, H, W, C) tensors; convert to PIL.
                pil_image = transforms.ToPILImage()(image.squeeze().permute(2, 0, 1))  # TODO
                prompt = self._interrogate(pil_image, mode)
            except Exception as e:
                prompt = ""
                print(e)

            return prompt


    ci = CI_Inference()

    if __name__ == '__main__':
        print(ci.model_list)

wibur0620 commented 7 months ago

This is the path/to/site-packages/open_clip/pretrained.py file; I changed the file extension so it could be attached:

pretrained.txt