MIT License

prompt-generator-comfyui

Custom AI prompt generator node for ComfyUI. With this node, you can use text generation models to generate prompts. Before use, a text generation model has to be trained on a prompt dataset.

Table Of Contents

Setup

For Portable Installation of the ComfyUI

For Manual Installation of the ComfyUI

For ComfyUI Manager Users

Features

Example Workflow

example_hires_workflow

example_basic_workflow

Pretrained Prompt Models

Dataset

Models

Variables

| Variable Name | Definition |
| --- | --- |
| model_name | Folder name that contains the model |
| accelerate | Enable optimizations. Some models are not supported by BetterTransformer (check your model). If your model is not supported, switch this option to disable or convert your model to ONNX |
| quantize | Quantize the model. The quantization type depends on your OS and torch version. The none value disables quantization. Check this section for more information |
| prompt | Input prompt for the generator |
| seed | Seed value for the model |
| lock | Lock the generation and select from the last generated prompts with the index value |
| random_index | Random index value in [1, 5]. When enabled, the index value is not used |
| index | User-specified index value for selecting a prompt from the generated prompts. The random_index variable must be disabled |
| cfg | CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples more closely linked to the input prompt, usually at the expense of poorer quality |
| min_new_tokens | The minimum number of tokens to generate, ignoring the number of tokens in the prompt |
| max_new_tokens | The maximum number of tokens to generate, ignoring the number of tokens in the prompt |
| do_sample | Whether or not to use sampling; greedy decoding is used otherwise |
| early_stopping | Controls the stopping condition for beam-based methods, like beam search |
| num_beams | Number of beams for beam search; 1 means no beam search |
| num_beam_groups | Number of groups to divide num_beams into in order to ensure diversity among different groups of beams |
| diversity_penalty | This value is subtracted from a beam's score if it generates a token identical to a token from any beam in another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled |
| temperature | How sensitive the algorithm is to selecting low-probability options |
| top_k | The number of highest-probability vocabulary tokens to keep for top-k filtering |
| top_p | If set to a float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher is kept for generation |
| repetition_penalty | The parameter for repetition penalty. 1.0 means no penalty |
| no_repeat_ngram_size | The size of an n-gram that cannot occur more than once (0 = no restriction) |
| remove_invalid_values | Whether to remove possible nan and inf outputs of the model to prevent generation from crashing. Note that using remove_invalid_values can slow down generation |
| self_recursive | See this section |
| recursive_level | See this section |
| preprocess_mode | See this section |
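The generation variables in the table correspond closely to Hugging Face transformers generate() keyword arguments, and lock / random_index / index pick one prompt out of the last generated batch. The sketch below is an assumption about how these pieces fit together, not the node's actual code; all values and the select_prompt helper are illustrative.

```python
import random

# Hedged sketch (an assumption, not the node's actual implementation).
# The generation variables map onto transformers generate() kwargs:
gen_kwargs = {
    "min_new_tokens": 20,
    "max_new_tokens": 75,
    "do_sample": True,           # sample instead of greedy decoding
    "early_stopping": False,     # stopping condition for beam-based methods
    "num_beams": 1,
    "num_beam_groups": 1,
    "diversity_penalty": 0.0,    # only effective with group beam search
    "temperature": 1.0,
    "top_k": 50,
    "top_p": 1.0,
    "guidance_scale": 1.0,       # cfg: values > 1 enable CFG
    "repetition_penalty": 1.0,   # 1.0 means no penalty
    "no_repeat_ngram_size": 0,   # 0 means no restriction
    "remove_invalid_values": False,
}

# A plausible selection rule for lock / random_index / index over the
# five last generated prompts (hypothetical helper):
def select_prompt(generated, lock, random_index, index, last=None, seed=0):
    pool = last if (lock and last is not None) else generated  # lock reuses the last batch
    if random_index:  # random 1-based index in [1, 5]; 'index' is ignored
        return pool[random.Random(seed).randint(1, 5) - 1]
    return pool[index - 1]  # user-specified 1-based index

batch = ["p1", "p2", "p3", "p4", "p5"]
print(select_prompt(batch, lock=False, random_index=False, index=2))  # prints "p2"
```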

Quantization

Random Generation

Lock The Generation

How Recursive Works?

How Preprocess Mode Works?

Troubleshooting

Package Version

For Manual Installation of the ComfyUI

  1. Activate the virtual environment if there is one.
  2. Run the pip install --upgrade transformers optimum "optimum[onnxruntime-gpu]" command (the quotes prevent shells such as zsh from treating the square brackets as a glob pattern).

For Portable Installation of the ComfyUI

  1. Go to the ComfyUI_windows_portable folder.
  2. Open the command prompt in this folder.
  3. Run the .\python_embeded\python.exe -s -m pip install --upgrade transformers optimum optimum[onnxruntime-gpu] command.
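After either upgrade path, a quick way to confirm the packages landed in the environment ComfyUI actually uses is to query them with importlib.metadata. This helper is an illustrative addition, not part of the repo's instructions:

```python
# Optional sanity check (illustrative helper, not part of the repo's docs):
# confirm the upgraded packages are visible to this Python environment.
from importlib.metadata import version, PackageNotFoundError

def installed_versions(names):
    """Return {package: version string or None} for each requested package."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # missing -> the upgrade hit a different env
    return found

print(installed_versions(["transformers", "optimum"]))
```

Run it with the same interpreter you used for the install (for the portable build, .\python_embeded\python.exe); a None value means that environment never saw the upgrade.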

Automatic Installation

For Manual Installation of the ComfyUI

For Portable Installation of the ComfyUI

New Updates On The Node

Contributing

Example Outputs

Example output images: first_example, second_example, third_example