A prompt-generator or prompt-improvement node for ComfyUI, utilizing the power of a language model to turn a provided text-to-image prompt into a more detailed and improved prompt.
## Installation

1. Create a new folder named `llm_gguf` in the `ComfyUI/models` directory.
2. Download `Mistral-7B-Instruct-v0.3.Q4_K_M.gguf` (4.37 GB) from the repository `MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF` on HuggingFace.
3. Place the file `Mistral-7B-Instruct-v0.3.Q4_K_M.gguf` in the `ComfyUI/models/llm_gguf` directory.

The node can load any `.gguf` file, but it only works with models that are supported by `llama-cpp-python`.
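A quick way to verify the file ended up in the right place is a small path check. This is a minimal sketch; the base path `ComfyUI` is an assumption and should be adjusted to your actual install location:

```python
from pathlib import Path

# Expected layout from the installation steps above.
# NOTE: the base path "ComfyUI" is an example; adjust it to your install.
model_path = Path("ComfyUI") / "models" / "llm_gguf" / "Mistral-7B-Instruct-v0.3.Q4_K_M.gguf"

if model_path.is_file():
    print(f"Model found: {model_path}")
else:
    print(f"Model missing, expected at: {model_path}")
```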
## Troubleshooting

Note: this was only tested on Windows.

If you get an error message about missing `llama-cpp`, try these manual steps:
1. Open a command prompt in the `ComfyUI_windows_portable/python_embeded` directory.
2. Run one of the following commands, depending on your setup:
   - CPU only: `python -m pip install https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/cpu/llama_cpp_python-0.2.89+cpuavx2-cp311-cp311-win_amd64.whl`
   - CUDA 12.1: `python -m pip install https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda-0.2.89+cu121-cp311-cp311-win_amd64.whl`
   - or the default build: `python -m pip install llama-cpp-python`
If the problem persists after these steps, please report it in the GitHub issue tracker of this project.

## Searge_LLM_Node

Configure the `Searge_LLM_Node` with the necessary parameters within your ComfyUI project to fully utilize its capabilities:
- `text`: The input text for the language model to process.
- `model`: The name of the model within `models/llm_gguf` that you wish to use.
- `max_tokens`: The maximum number of tokens for the generated text, adjustable according to your needs.
- `apply_instructions`: Whether to apply the `instructions` template to the input text.
- `instructions`: The instructions for the language model to generate a prompt. The placeholder `{prompt}` is supported and will be replaced with the content of the `text` input.
  Example: `Generate a prompt from "{prompt}"`
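The `{prompt}` placeholder behaves like plain string substitution: the content of the `text` input is inserted into the instructions template. A minimal sketch of that behavior (the variable names are illustrative, not the node's internals):

```python
# The {prompt} placeholder is filled with the text input.
instructions = 'Generate a prompt from "{prompt}"'
text = "a cat sitting on a windowsill"

final_instructions = instructions.replace("{prompt}", text)
print(final_instructions)
# -> Generate a prompt from "a cat sitting on a windowsill"
```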
## Searge_AdvOptionsNode

The `Searge_AdvOptionsNode` offers a range of configurable parameters, allowing precise control over the text generation process and model behavior. The default values on this node are also the defaults that `Searge_LLM_Node` uses when no `Searge_AdvOptionsNode` is connected to it. Below is a detailed overview of these parameters:
- `temperature`: Controls the randomness of the text generation process. Lower values make the model more confident in its predictions, leading to less variability in the output; higher values increase diversity but can also introduce more randomness. Default: `1.0`.
- `top_p`: Also known as nucleus sampling, this parameter sets a cumulative probability cutoff: at each step, the model samples only from the smallest set of highest-probability tokens whose cumulative probability reaches `top_p`. Reducing this value helps control generation quality by avoiding low-probability tokens. Default: `0.9`.
- `top_k`: Limits the number of highest-probability tokens considered at each generation step. A value of `0` means no limit. This parameter can prevent the model from focusing too narrowly on the top choices, promoting diversity in the generated text. Default: `50`.
- `repetition_penalty`: Adjusts the likelihood of tokens that have already appeared in the output, discouraging repetition. Values greater than `1` penalize tokens that have been used, making them less likely to appear again. Default: `1.2`.
These parameters provide granular control over the text generation capabilities of the `Searge_LLM_Node`, allowing users to fine-tune the behavior of the underlying models to best fit their application requirements.
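To make the interplay of these parameters concrete, here is a simplified pure-Python sketch of one sampling step. It is not the actual `llama-cpp-python` implementation, and `repetition_penalty` is omitted; it only illustrates how temperature, `top_k`, and `top_p` filter a token distribution:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=0.9, top_k=50, rng=random):
    """Pick one token index from raw logits (illustrative sketch)."""
    # Temperature: divide logits before softmax; lower -> sharper distribution.
    scaled = [l / max(temperature, 1e-8) for l in logits]
    # Softmax to probabilities (subtract max for numerical stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # top_k: keep only the k most probable tokens (0 = no limit).
    probs.sort(key=lambda p: p[1], reverse=True)
    if top_k > 0:
        probs = probs[:top_k]
    # top_p (nucleus): keep the smallest prefix whose cumulative prob >= top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens and draw one index.
    total = sum(p for _, p in kept)
    r = rng.random() * total
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With a low temperature and a strongly peaked distribution, the nucleus collapses to a single token and sampling becomes effectively deterministic, which is why lowering `temperature` makes the output more predictable.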
## License

The Searge_LLM_Node is released under the MIT License. Feel free to use and modify it for your personal or commercial projects.