ltdrdata / ComfyUI-Inspire-Pack

This repository offers various extension nodes for ComfyUI. Nodes here have different characteristics compared to those in the ComfyUI Impact Pack. The Impact Pack has become too large now...
GNU General Public License v3.0
433 stars 50 forks

[request] how many cache backend data nodes can we support? #186

Closed · t00350320 closed 1 week ago

t00350320 commented 1 week ago

An init workflow has many nodes (28) that need to be cached, but it always becomes corrupted before caching finishes. After removing some nodes, the workflow can be loaded completely. So I wonder: is there a limit on the number of cache nodes?

lora key not loaded: lora_unet_down_blocks_0_attentions_0_proj_in.alpha
lora key not loaded: lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight
lora key not loaded: lora_unet_down_blocks_0_attentions_0_proj_in.lora_up.weight
lora key not loaded: lora_unet_down_blocks_0_attentions_0_proj_out.alpha

thanks a lot

ltdrdata commented 1 week ago

For the LoRA key issue, please inquire with the ComfyUI core repository along with the LoRA file being used.

t00350320 commented 1 week ago

> For the LoRA key issue, please inquire with the ComfyUI core repository along with the LoRA file being used.

Sorry, I didn't explain clearly. If I remove some nodes (not including this LoRA model), the workflow loads correctly and the LoRA also loads correctly. I mean, the abnormal results are not the same every time I load the full workflow init_workflow.json.

So I asked whether there is some limitation on cache node usage, or whether it may be related to ComfyUI itself, but I have no idea how to dig into it.

ltdrdata commented 1 week ago

> Sorry, I didn't explain clearly. If I remove some nodes (not including this LoRA model), the workflow loads correctly and the LoRA also loads correctly. I mean, the abnormal results are not the same every time I load the full workflow init_workflow.json.
>
> So I asked whether there is some limitation on cache node usage, or whether it may be related to ComfyUI itself, but I have no idea how to dig into it.

When not using the backend cache, cached data is dependent on the workflow. The purpose of the backend cache is to maintain information independently of any single workflow's cache.

In other words, even when loading a new workflow, the previously executed cache is retained as-is.
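The behavior described above can be sketched as a module-level store that outlives individual workflow loads. This is a hypothetical illustration of the concept, not the Inspire Pack's actual implementation; the names `BACKEND_CACHE`, `cache_put`, and `cache_get` are invented for the sketch:

```python
# Hypothetical sketch: a backend cache that survives workflow reloads.
# Module-level storage persists as long as the ComfyUI process runs,
# regardless of which workflow JSON is currently loaded.
BACKEND_CACHE = {}

def cache_put(key, value):
    """Store a value (e.g. a loaded model) under a string key."""
    BACKEND_CACHE[key] = value

def cache_get(key):
    """Retrieve a previously cached value, or None if not cached."""
    return BACKEND_CACHE.get(key)

# Workflow 1 (the init workflow) caches a model:
cache_put("sd1.5 vae", "<loaded VAE object>")

# Workflow 2, loaded later, still sees the cached entry:
assert cache_get("sd1.5 vae") == "<loaded VAE object>"
assert cache_get("never cached") is None
```

This is why a preload workflow can populate the cache once and later service workflows can retrieve the same entries by key.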

t00350320 commented 1 week ago

> Sorry, I didn't explain clearly. If I remove some nodes (not including this LoRA model), the workflow loads correctly and the LoRA also loads correctly. I mean, the abnormal results are not the same every time I load the full workflow init_workflow.json. So I asked whether there is some limitation on cache node usage, or whether it may be related to ComfyUI itself, but I have no idea how to dig into it.

> When not using the backend cache, cached data is dependent on the workflow. The purpose of the backend cache is to maintain information independently of any single workflow's cache. In other words, even when loading a new workflow, the previously executed cache is retained as-is.

Yes, definitely right. So in step 1, I loaded all the ckpts, LoRAs, ControlNets, upscalers, etc. in initworkflow.json, as I posted before. Then in step 2, the real service workflows like service1.json, service2.json, etc. are loaded. The current issue is that the workflow can't be loaded completely in step 1.

ltdrdata commented 1 week ago

Have you investigated the cached items through this node? [image]

t00350320 commented 1 week ago

> Have you investigated the cached items through this node? [image]

I think it's missing some nodes, like the "birefnet" node.

---- [String Key Caches] ----
cn15 tile: N/A(tag)
sdxl cn canny: N/A(tag)
sdxl base1.0 clip: N/A(tag)
sdxl base1.0 vae: N/A(tag)
sdxl cn tile: N/A(tag)
sdxl latteart vae: N/A(tag)
sdxl lineart: N/A(tag)
upscale2x: N/A(tag)
cn15 lineart: N/A(tag)
cn1.5 depth: N/A(tag)
sd1.5 clip: N/A(tag)
sd1.5 vae: N/A(tag)
sd15 vae-ft-mse-840000: N/A(tag)
midas loader: N/A(tag)
sdxl latteart: N/A(tag)
sdxl latteart clip: N/A(tag)
sd1.5 lora gold: N/A(tag)
sd1.5 lora sand: N/A(tag)
sd1.5 ertonghuiben: N/A(tag)
sd1.5 ertonghuiben clip: N/A(tag)
stable-diffusion-xl-base-1.0/sd_xl_base_1.0.safetensors: ckpt
autismmixSDXL_autismmixConfetti.safetensors: ckpt
realisticVisionV51_v51VAE.safetensors: ckpt
allInOnePixelModel_v1_sd15.ckpt: ckpt
sd1_5/AWPainting_v1.4.safetensors: ckpt
sdxl sumiaolora: lora
pixel sd15: lora
pixel sd15 clip: lora

---- [Number Key Caches] ----

---- [TagCache Settings] ----
ckpt: 5
latent: 100
image: 100
: 20
lora: 20

ltdrdata commented 1 week ago

> Have you investigated the cached items through this node? [image]
>
> I think it's missing some nodes, like the "birefnet" node.
>
> […]

It seems your BiRefNet Model Loader in the workflow is not working. Try testing it by attaching nodes that use it directly, rather than using the backend cache node.

t00350320 commented 1 week ago

> Have you investigated the cached items through this node? [image]
>
> I think it's missing some nodes, like the "birefnet" node.
>
> […]
>
> It seems your BiRefNet Model Loader in the workflow is not working. Try testing it by attaching nodes that use it directly, rather than using the backend cache node.

Haha, the amazing thing is: if I delete all nodes except birefnet, then it can be loaded.

---- [String Key Caches] ----
birefnet: N/A(tag)

---- [Number Key Caches] ----

---- [TagCache Settings] ----
ckpt: 5
latent: 100
image: 100
: 20

So, only God knows what happened?

t00350320 commented 1 week ago

Do the TagCache Settings limit the number of nodes? Or should I set a tag for every node? I have no idea.

ltdrdata commented 5 days ago

> birefnet

Oh, that's right. If you don't assign a tag, it will be treated as "N/A", which counts toward the 20-entry limit for the empty tag. You can resolve this either by increasing the empty-tag setting shown in "Show Cached Info" or by assigning a separate tag to the birefnet cache.
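The limit behavior can be modeled like this. This is a hedged sketch, not the Inspire Pack's real code: it assumes each tag keeps at most its configured number of entries (matching the TagCache Settings printed above, where the empty tag ": 20" holds untagged entries) and that the oldest entry in a full bucket is evicted first:

```python
from collections import OrderedDict

# Hypothetical model of per-tag cache limits. Untagged entries fall into
# the "" (empty-tag) bucket; each bucket evicts its oldest entry once its
# configured limit is exceeded.
TAG_LIMITS = {"ckpt": 5, "latent": 100, "image": 100, "": 20, "lora": 20}
buckets = {tag: OrderedDict() for tag in TAG_LIMITS}

def cache_put(key, value, tag=""):
    bucket = buckets[tag]
    bucket[key] = value
    if len(bucket) > TAG_LIMITS[tag]:
        bucket.popitem(last=False)  # evict the oldest entry in this tag

# 21 untagged entries: "birefnet" was cached first, so it gets evicted
# when the empty-tag bucket overflows its 20-entry limit.
cache_put("birefnet", "<model>")
for i in range(20):
    cache_put(f"other node {i}", "<data>")

assert "birefnet" not in buckets[""]   # evicted by the 20-entry limit
assert len(buckets[""]) == 20

# Giving birefnet its own tag keeps it out of the crowded empty bucket:
TAG_LIMITS["birefnet"] = 1
buckets["birefnet"] = OrderedDict()
cache_put("birefnet", "<model>", tag="birefnet")
assert "birefnet" in buckets["birefnet"]
```

Under this model, either remedy works: raising the empty-tag limit above the number of untagged entries, or moving the entry into its own tag bucket.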

t00350320 commented 5 days ago

> birefnet
>
> Oh, that's right. If you don't assign a tag, it will be treated as "N/A," which will count towards the 20 tag limit. You can resolve this by either increasing the configuration for empty tags in "Show Cached Info" or assigning a separate tag to the cache in birefnet.

It finally works. ~-~ Thank you very much.