receyuki / comfyui-prompt-reader-node

The ultimate solution for managing image metadata and multi-tool compatibility. ComfyUI node version of the SD Prompt Reader
MIT License

[BUG] - Image not analyzed after load #7

Closed (isben closed this issue 1 year ago)

isben commented 1 year ago

Description

I updated all custom nodes, installed the SD Prompt Reader Node, and then restarted ComfyUI. I cleared the workspace, added the SD Prompt Reader Node, and selected a JPEG file downloaded from Civitai. Loading that file in the standalone app, or dropping it onto the ComfyUI workspace, shows the image's prompts and metadata. However, uploading it into the SD Prompt Reader Node displays the image at the bottom of the node, but neither fills the prompt boxes nor activates any of the outputs.
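
For context on where such metadata lives: Stable Diffusion tools typically embed generation parameters in PNG text chunks (A1111 writes a `parameters` key; ComfyUI stores `prompt` and `workflow` JSON; JPEGs instead carry the text in EXIF). A minimal Pillow sketch of the PNG case, purely illustrative and not tied to this node's actual parsing code:

```python
# Sketch: how SD tools commonly embed prompt metadata in PNG text
# chunks (A1111-style "parameters" key). Illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (8, 8))
info = PngInfo()
info.add_text("parameters", "a cat\nNegative prompt: blurry\nSteps: 20, Seed: 42")
img.save("demo.png", pnginfo=info)

# Reading it back: the .text attribute exposes the PNG tEXt/iTXt chunks.
meta = Image.open("demo.png").text
print(meta["parameters"].splitlines()[0])  # first line is the positive prompt
```

A reader that finds no recognized keys in these chunks has nothing to fill the prompt boxes with, which is the symptom described above.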

Reproduction steps

Image file

workflow

Version

1.0.0

receyuki commented 1 year ago

Please connect this node to any other node (either connect the IMAGE output directly to any Saver, or connect any parameter into your workflow; anything that makes this node run). The content in the reader UI only updates after one generation. I know this is confusing, but there is currently no better solution.

isben commented 1 year ago

@receyuki We have some progress after I followed your instructions above. I figured out that I was using the ClipTextEncode++ nodes provided by the smzNodes package. Obviously, the prompts were missing because you don't detect these nodes; the standalone version has the same issue. It would be nice if you could include them in your detection list. I use them mainly to make sure prompts taken from images generated with A1111 behave the same in ComfyUI.
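
For reference, detecting prompt nodes like this generally means scanning the `prompt` JSON that ComfyUI embeds in its outputs for known node class names and pulling out their `text` input. A hedged sketch of the idea; the `CLIPTextEncode++` entry below is illustrative, and the real class name registered by smzNodes may differ:

```python
import json

# Sketch: pull prompt text out of a ComfyUI "prompt" JSON blob by
# matching known text-encode node classes. The second class name is
# an assumption standing in for the smzNodes variant.
TEXT_ENCODE_CLASSES = {"CLIPTextEncode", "CLIPTextEncode++"}

def extract_prompts(prompt_json: str) -> list[str]:
    nodes = json.loads(prompt_json)
    return [
        node["inputs"]["text"]
        for node in nodes.values()
        if node.get("class_type") in TEXT_ENCODE_CLASSES
        and isinstance(node["inputs"].get("text"), str)
    ]

demo = json.dumps({
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "a cat", "clip": ["4", 1]}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 42}},
})
print(extract_prompts(demo))  # ['a cat']
```

A node class missing from the allow-list is simply skipped, which matches the behavior reported here.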

However, I'm probably missing something: even after running the workflow and seeing both the Prompt Reader and Prompt Saver nodes turn green, none of the outputs are active besides Image and Mask. Yes, the prompt boxes are now filled in, but I can't connect them to a ClipTextEncode node. The seed isn't accessible, nor the model or model name, and so on.

Could you please explain clearly how we are supposed to access and expose the parameters extracted from the image metadata? workflow

receyuki commented 1 year ago

ClipTextEncode++ seems very similar to the original ClipTextEncode, so supporting this node should not be difficult; I may add support for it. However, if you use the Prompt Saver node correctly, all parameters will be readable by the Reader, regardless of which custom nodes are used.

As for how to use the Prompt Saver, you can refer to the connections in the example workflow. Put simply, you need to convert the Saver and KSampler parameters from widgets to inputs (right-click the node and select options like "convert steps to input"). Then set the parameters on the Parameter Generator and output them to both the KSampler and the Saver at the same time. (Believe me, it may be a bit troublesome the first time, but personally I think it's more convenient than the traditional method once the connections are complete.) (screenshot)
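
The wiring described above amounts to a single source of truth: the Parameter Generator owns the values, and both the KSampler and the Saver receive the same values as inputs instead of keeping their own widget copies. In plain Python the data flow looks roughly like this (the classes are stand-ins for illustration, not the real node API):

```python
from dataclasses import dataclass

# Sketch of the data flow only; these names are stand-ins, not the
# actual ComfyUI node API.
@dataclass
class GeneratedParams:
    seed: int
    steps: int
    cfg: float
    sampler_name: str

def parameter_generator() -> GeneratedParams:
    # One single place where the values are set...
    return GeneratedParams(seed=42, steps=20, cfg=7.0, sampler_name="euler")

params = parameter_generator()

# ...and the SAME values are fed to both consumers, so what the Saver
# writes into the image metadata matches what the KSampler actually used.
ksampler_inputs = {"seed": params.seed, "steps": params.steps, "cfg": params.cfg}
saver_inputs = {"seed": params.seed, "steps": params.steps, "cfg": params.cfg}
assert ksampler_inputs == saver_inputs
```

This is why the widgets must be converted to inputs first: a widget holds its own local value, while an input can be driven by the shared Generator output.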

isben commented 1 year ago

Thanks for the follow up.

We have some progress, but we are still far from fully clarifying the process.

Here is the workflow I came up with. Feel free to point out what's wrong with it.

workflow2.json

receyuki commented 1 year ago

Here ↓

(screenshot, 2023-11-01)
  1. This is a known ComfyUI bug that happens when your model list changes (models are deleted or added). The simplest solution is to reload the page (you may also want to restart the ComfyUI server first).
  2. The file name of a model is not unique; only the model's hash can guarantee uniqueness. However, in most cases the metadata will not contain the model's hash, so I hope users will treat the model file name displayed in the Reader as a reference and choose the model themselves in the generator.
  3. The metadata of most images does not contain information about the CLIP or VAE.
  4. Different tools have different support and naming for samplers and schedulers, so it's possible that the tool you're using doesn't support the sampler or scheduler used for image generation. Even when the sampler is supported, the naming may be inconsistent. Because samplers and schedulers are updated so frequently, maintaining a naming map between tools would significantly increase my workload.
  5. Since it's not possible to directly extract metadata from KSampler, it is necessary to use the Parameter Generator Node to generate parameters and simultaneously output them to both the Prompt Saver Node and KSampler.
  6. Those outputs are just left for those who need them. For instance, some people output them to the terminal through other nodes. If you don't need them, you can ignore them.
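
On point 2, a short hash is how A1111-style tools identify models in metadata: in essence, a truncated SHA-256 of the checkpoint file, which stays stable even if the file is renamed. A sketch of the idea; the 10-character truncation mirrors a common convention, and the exact length any given tool uses is an assumption here:

```python
import hashlib

def model_short_hash(path: str, length: int = 10) -> str:
    """Truncated SHA-256 of a checkpoint file, similar in spirit to the
    short model hashes A1111 writes into image metadata. The truncation
    length varies by tool; 10 is assumed for illustration."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so multi-GB checkpoints don't load into RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:length]
```

Since most images carry only a file name (if that), the Reader can only show the name as a hint, which is why point 2 asks users to pick the model themselves.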

isben commented 1 year ago

Thanks for your explanations. It makes a lot more sense now.

homeworkace commented 9 months ago

Bumping because the screenshot in this comment seems to be the closest thing to what I'm facing now. (screenshot)

Like OP, my aim is to replicate settings from images generated with the AUTOMATIC1111 client. The text fields show that settings metadata exists in the source image and that the node is able to parse it, but all the output parameters except "IMAGE" and "MASK" are greyed out, which leaves me unable to pass parameters like the prompt to a KSampler. Maybe I'm just new to ComfyUI as a whole, but on the off chance I've hit a bug, I figured it was worth asking.

receyuki commented 9 months ago

I've never encountered this problem before. If you're unable to connect nodes, my suggestion would be to restart the server, create a new workflow, or re-add the nodes, because this problem seems to be unrelated to the reader.

If you're new to ComfyUI, my example workflows may be helpful: https://github.com/receyuki/comfyui-prompt-reader-node#example-workflow