This repo creates a series of nodes that enable you to utilize the Griptape Python Framework with ComfyUI, integrating LLMs (Large Language Models) and AI into your workflow.
Watch the trailer and all the instructional videos on our YouTube Playlist.
The repo currently has a subset of Griptape nodes, with more to come soon. Current nodes can:
- Create Agents that can chat using these models:
- Control agent behavior and personality with access to Rules and Rulesets.
- Give Agents access to Tools:
- Run specific Agent Tasks:
- Generate Images using these models:
- Audio
- Use nodes to control every aspect of the Agent's behavior, with the following drivers:
In this example, we're using three `Image Description` nodes to describe the given images. Those descriptions are then merged into a single string, which is used as inspiration for creating a new image with the `Create Image from Text` node, driven by an `OpenAI Driver`.
The following image is a workflow you can drag into your ComfyUI Workspace, demonstrating all the options for configuring an Agent.
You can preview and download more examples here.
- `WebScraperTool` provides better results when using `off_prompt`.
- Fixed an issue where URLs in prompts were being truncated. Example: "What is https://griptape.ai" was being converted to "What is https:". This is due to the `dynamicPrompts` functionality of ComfyUI, so I've disabled it.
- Added a context string to all BOOLEAN parameters to give the user a better idea of what each boolean option does. For example, instead of just `True` or `False`, the tools now explain what `off_prompt` means.
Major reworking of how API keys are set. Now you can use the ComfyUI Settings window and add your API keys there. This should simplify things quite a bit as you no longer need to create a .env
file in your ComfyUI folder.
Breaking Changes

- The `AnthropicDriversConfig` node no longer includes an Embedding Driver. If you wish to use Claude within a RAG pipeline, build a `Config: Custom Structure` using a Prompt Driver, Embedding Driver, and Vector Store Driver. See the attached image for an example.
- The default Anthropic model is now `claude-3-5-sonnet-20241022`.
- `ignore_voyage_embedding_driver` is now set to `True`.
- Added `TavilyWebSearchDriver`. Requires a Tavily API key.
- Added `ExaWebSearchDriver`. Requires an Exa API key.
- Fixed `Griptape Agent Config: LM Studio Drivers`. The `base_url` parameter wasn't being set properly, causing a connection error.
- Fixed `Griptape Run: Tool Task` node. It now properly handles the output of the tool being a list.
- Added `top_p` and `top_k` to Anthropic and Google Prompt Drivers.

New Nodes

Griptape now has the ability to generate new models for Ollama by creating a Modelfile. This is an interesting technique that allows you to create new models on the fly.
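For reference, an agent Modelfile generated this way might look roughly like the following (a hypothetical sketch using standard Ollama Modelfile instructions; the node's actual output may differ):

```
FROM llama3
SYSTEM You are a helpful assistant. Always answer politely and concisely.
MESSAGE user What is Griptape?
MESSAGE assistant Griptape is a Python framework for building AI agents.
```

Running `ollama create my-agent -f Modelfile` would then register it as a new Ollama model.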
- `Griptape Util: Create Agent Modelfile`. Given an agent with rules and some conversation as an example, creates a new Ollama Modelfile with a SYSTEM prompt (Rules) and MESSAGES (Conversation).
- `Griptape Util: Create Model from Modelfile`. Given a Modelfile, creates a new Ollama model.
- `Griptape Util: Remove Ollama Model`. Given an Ollama model name, removes the model from Ollama. This will help you clean up unnecessary models. Be careful with this one, as there is no confirmation step!

MAJOR UPDATE
Update to Griptape Framework to v0.31.0

There are some new Configuration Drivers nodes! These replace the previous `Griptape Agent Config` nodes, which still exist but have been deprecated and will be removed in a future release. The new nodes display the various drivers that are available for each general config and allow you to make changes per driver. See the image for examples.

Old workflows should automatically display the older nodes as deprecated. It's highly recommended to replace these old nodes with the new ones. I have tried to minimize breaking changes, but some may exist. I apologize if that happens.

New Nodes
- `Griptape Agent Config: Cohere Drivers`: A new Cohere node.
- `Griptape Agent Config: Expand`: A node that lets you expand Config Drivers nodes to get to their individual drivers.
- Griptape RAG Nodes: a whole new host of nodes related to Retrieval Augmented Generation (RAG). I've included a sample in the examples folder that shows how to use these nodes. The new nodes include:
  - `Griptape RAG: Tool` - A node that lets you create a tool for RAG.
  - `Griptape RAG: Engine` - A node that lets you create an engine for RAG containing multiple stages. Learn more here: https://docs.griptape.ai/stable/griptape-framework/engines/rag-engines/
  - `Griptape Combine: RAG Module List` - A node that lets you combine modules for a stage.
  - `Griptape RAG Query: Translate Module` - A module that translates the user's query into another language.
  - `Griptape RAG Retrieve: Text Loader Module` - A module that lets you load text and vectorize it in real time.
  - `Griptape RAG Retrieve: Vector Store Module` - A module that lets you load text from an existing Vector Store.
  - `Griptape RAG Rerank: Text Chunks Module` - A module that re-ranks the text chunks from the retrieval stage.
  - `Griptape RAG Response: Prompt Module` - Uses an LLM Prompt Driver to generate a response.
  - `Griptape RAG Response: Text Chunks Module` - Just responds with text chunks.
  - `Griptape RAG Response: Footnote Prompt Module` - A module that ensures proper footnotes are included in the response.
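Conceptually, the Retrieve, Rerank, and Response stages that these modules plug into chain like this (a plain-Python sketch of the pipeline shape with naive placeholder scoring, not the Griptape API):

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word set, used for naive overlap scoring."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, chunks: list[str], top: int = 2) -> list[str]:
    """Retrieval stage: score chunks by word overlap with the query."""
    return sorted(chunks, key=lambda c: -len(tokens(query) & tokens(c)))[:top]

def rerank(query: str, chunks: list[str]) -> list[str]:
    """Rerank stage: here we simply prefer shorter chunks."""
    return sorted(chunks, key=len)

def respond(query: str, chunks: list[str]) -> str:
    """Response stage: a real Prompt Module would hand this to an LLM driver."""
    return f"Q: {query} | Context: {' '.join(chunks)}"

chunks = [
    "Griptape is a Python framework for AI agents.",
    "ComfyUI is a node-based workflow UI.",
    "RAG pairs retrieval with generation.",
]
query = "What is Griptape?"
print(respond(query, rerank(query, retrieve(query, chunks))))
```

In the node graph, the `Griptape Combine: RAG Module List` node plays the role of collecting the modules for each such stage before they're handed to the engine.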
- Added `max_tokens` to most configuration and prompt_driver nodes. This gives you the ability to control how many tokens come back from the LLM. _Note: It's a known issue that AmazonBedrock doesn't work with `max_tokens` at the moment._
- `Griptape Tool: Extraction` node that lets you extract either JSON or CSV text with either a JSON schema or column header definitions. This works well with Task Memory.
- `Griptape Tool: Prompt Summary` node that will summarize text. This works well with Task Memory.
- `Griptape Tool: Query` node to allow Task Memory to go "Off Prompt".
- Added a `string` output to the `Griptape Display: Data as Text` node.
- `Griptape Config: Environment Variables` node to allow you to add environment variables to the graph.
- `Griptape Text: Load` node to load a text file from disk.
- Added `GRIPTAPE_CLOUD_API_KEY`, which you can get from your Griptape Cloud API Page.
- Added default colors to help differentiate between types of nodes. Tried to keep it minimal and distinct.
Agent support nodes (Rules, Tools, Drivers, Configurations): Blue
Rationale: Blue represents stability and foundational elements. Using it for all agent-supporting nodes shows their interconnected nature.
Agents: Purple
Rationale: Purple often represents special or unique elements. This makes Agents stand out as the central, distinct entities in the system.
Tasks: Red
Rationale: Red signifies important actions, fitting for task execution nodes.
Output nodes: Black
Rationale: Black provides strong contrast, suitable for final output display.
Utility nodes (Merge, Conversion, Text creation, Loaders): No color (gray)
Rationale: Keeping utility functions in a neutral color helps reduce visual clutter and emphasizes their supporting role.
New Node: `SaveText`. A simple node for saving text, as requested by a user. Please check it out and give feedback.
New Nodes: A massive number of new nodes, allowing for ultimate configuration of an Agent.
Griptape Agent Configuration
Griptape Agent: Generic Structure - A generic configuration node that lets you pick any combination of `prompt_driver`, `image_generation_driver`, `embedding_driver`, `vector_store_driver`, `text_to_speech_driver`, and `audio_transcription_driver`.
Griptape Replace: Rulesets on Agent - Gives you the ability to replace or remove rulesets from an Agent.
Griptape Replace: Tools on Agent - Gives you the ability to replace or remove tools from an Agent.
Drivers
Prompt Drivers - Unique chat prompt drivers for `AmazonBedrock`, `Cohere`, `HuggingFace`, `Google`, `Ollama`, `LMStudio`, `Azure OpenAi`, `OpenAi`, `OpenAiCompatible`
Image Generation Drivers - These all existed before, but adding here for visibility: `Amazon Bedrock Stable Diffusion`, `Amazon Bedrock Titan`, `Leonardo AI`, `Azure OpenAi`, `OpenAi`
Embedding Drivers - Agents can use these for generating embeddings, allowing them to extract relevant chunks of data from text: `Azure OpenAi`, `Voyage Ai`, `Cohere`, `Google`, `OpenAi`, `OpenAi Compatible`
Vector Store Drivers - Allows agents to access Vector Stores to query data: `Azure MongoDB`, `PGVector`, `Pinecone`, `Amazon OpenSearch`, `Qdrant`, `MongoDB Atlas`, `Redis`, `Local Vector Store`
Text To Speech Drivers - Gives agents the ability to convert text to speech: `OpenAi`, `ElevenLabs`
Audio Transcription Driver - Gives agents the ability to transcribe audio: `OpenAi`
re-fixed spelling of Compatable to Compatible, because it's a common mistake. :)
Vector Store - New Vector Store nodes - `Vector Store Add Text`, `Vector Store Query`, and `Griptape Tool: VectorStore` to allow you to work with various Vector Stores
Environment Variables parameters - all nodes that require environment variables & API keys have those environment variables specified on the nodes. This should make it easier to know which environment variables you want to set in `.env`.
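For example, a `.env` file in your ComfyUI folder might contain entries like these (placeholder values; the key names are the ones referenced elsewhere in this document):

```
OPENAI_API_KEY=your-openai-key
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
GRIPTAPE_CLOUD_API_KEY=your-griptape-cloud-key
```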
Examples - Example workflows are now available in the /examples
folder here.
Breaking Change

- Due to changes to the `ImageQueryDriver`, the `image_query_model` input has been removed from the configuration nodes.
- API keys are no longer stored in `griptape_config.json`. Now all keys are set in `.env`.
- Fixed an issue that appeared when no `OPENAI_API_KEY` was set.
- Added a `max_attempts_on_fail` parameter to all Config nodes to allow the user to determine the number of retries they want when an agent fails. This maps to the `max_attempts` parameter in the Griptape Framework.
- Azure OpenAI requires the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY`. You will also require a deployment name, which is available in Azure OpenAI Studio.
- Updated the `Create Agent` and `Run Agent` nodes to no longer cache their knowledge between runs. Now if the `agent` input isn't connected to anything, a new agent is created on each run.

Install ComfyUI using the instructions for your particular operating system.
If you'd like to run with a local LLM, you can use Ollama and install a model like llama3.
Download and install Ollama from their website: https://ollama.com
Download a model by running `ollama run <model>`. For example:

```
ollama run llama3
```
You now have ollama available to you. To use it, follow the instructions in this YouTube video: https://youtu.be/jIq_TL5xmX0?si=0i-myC6tAqG8qbxR
There are two methods for installing the Griptape-ComfyUI repository. You can either download or git clone this repository inside the ComfyUI/custom_nodes
, or use the ComfyUI Manager.
Option A - ComfyUI Manager (Recommended)
Option B - Git Clone
Open a terminal and input the following commands:

```
cd /path/to/comfyUI
cd custom_nodes
git clone https://github.com/griptape-ai/ComfyUI-Griptape
```
Libraries should be installed automatically, but if you're having trouble, hopefully this can help.
There are certain libraries required for Griptape nodes that are called out in the requirements.txt file.
```
griptape[all]
python-dotenv
```
These should get installed automatically if you used the ComfyUI Manager installation method. However, if you're running into issues, please install them yourself either using pip
or poetry
, depending on your installation method.
Option A - pip
```
pip install "griptape[all]" python-dotenv
```
Option B - poetry
```
poetry add "griptape[all]" python-dotenv
```
Now if you restart ComfyUI, you should see the Griptape menu when you right-click.
If you don't see the menu, please come to our Discord and let us know what kind of errors you're getting - we would like to resolve them as soon as possible!
For advanced features, it's recommended to use a more powerful model. These are available from the providers listed below and will require API keys.
1. Click the `Settings` button in the ComfyUI Sidebar.
2. Choose the `Griptape` option.
3. Scroll down to the API key you'd like to set and enter it.
Note: If you already have a particular API key set in your environment, it will automatically show up here.
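If you want to double-check that a key is actually visible to the Python process running ComfyUI, a quick stdlib check works (the variable names below are the ones used elsewhere in this README):

```python
import os

def key_status(name: str) -> str:
    """Report whether an environment variable (such as an API key) is set."""
    return "set" if os.environ.get(name) else "missing"

# Check two keys mentioned in this README:
for name in ("OPENAI_API_KEY", "GRIPTAPE_CLOUD_API_KEY"):
    print(name, key_status(name))
```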
You can get the appropriate API keys from these respective sites:
Griptape does install the `torch` requirement. Sometimes this may cause problems with ComfyUI where it grabs the wrong version of `torch`, especially if you're on Nvidia. As per the ComfyUI docs, you may need to uninstall and re-install `torch`.
```
pip uninstall torch
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
```
Sometimes you'll find that the Griptape library didn't get updated properly. This seems to happen especially when using the ComfyUI Manager. You might see an error like:
```
ImportError: cannot import name 'OllamaPromptDriver' from 'griptape.drivers' (C:\Users\evkou\Documents\Sci_Arc\Sci_Arc_Studio\ComfyUi\ComfyUI_windows_portable\python_embeded\Lib\site-packages\griptape\drivers\__init__.py)
```
To resolve this, you must make sure Griptape is running with the appropriate version. Things to try:
```
python -m pip install griptape -U
```
If you are using StabilityMatrix to run ComfyUI, you may find that after you install Griptape you get an error like the following:
To resolve this, you'll need to update your torch installation. Follow these steps:
1. Open the ... menu.
2. Type `torch` to filter the list.
3. Click the `-` button to uninstall `torch`.
4. Click the `+` button to install a new package.
5. Enter `torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121` and install.
Massive thank you for help and inspiration from the following people and repos!