Helix is a framework for building multi-model, feedback-looping AI systems. It's like a modular synthesizer for AI. Read more about the concept in this blog post. In this analogy, if GPT is a module making a single tone, Helix is a rack full of modules feeding back into each other, making a beautiful cacophony.
You interact with Helix by using and writing Task Modules, which each provide a single AI capability, and by creating Graphs, which describe a network of those modules and their inputs and outputs. Helix then loads the graph, runs each module in its own separate process, handles communication between the modules, and provides a live web interface for interacting with them.
Though the project has lofty goals, Helix as a framework may be practical for all sorts of uses.
Helix is written in Elixir and provides a web interface with Phoenix LiveView.
🚨🚨🚨 Warning! Helix, left unattended, may eat through OpenAI credits as fast as it can! 🚨🚨🚨
These instructions assume you have Elixir installed.
First, clone this repository and cd into it.
Then, install the dependencies:
mix deps.get
Copy the environment template file:
cp .env.tpl .env
Next, get your OpenAI API key and put it in .env, along with any other configuration settings you want to set.
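As a sketch, the resulting .env might look something like the following. The exact variable names come from .env.tpl; OPENAI_API_KEY here is an assumption, so check the template file for the names Helix actually reads.

```shell
# Hypothetical .env contents -- confirm the variable names against .env.tpl
export OPENAI_API_KEY="sk-..."   # your OpenAI API key
```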
Run the application with source .env && mix phx.server, or use the provided run.sh script. The application will now be running at localhost:4000.
Once Helix is running, you can visit localhost:4000 to interact with it.
On the first screen, you can see all of your available graphs:
Choose a graph from the dropdown to preview the rendered graph file. Press "Load Graph" to start the network.
On the next page, you can interact with your network (if it has LiveInput and LiveOutput modules in the graph). Notice that if your graph has multiple LiveInput targets, you can choose which to target using the dropdown. Each module in the graph will have its own bubble color.
Graphs are described in DOT format. A very simple GPT feedback graph could be defined like so:
digraph Daoism{
Ying [module=GPTModule, prompt="Breathe in."]
Yang [module=GPTModule, prompt="Breathe out."]
Ying -> Yang
Yang -> Ying
}
However, DOT is quite limited by itself, so Graph files are actually Liquid templates used to create a DOT file. This makes it much easier to use variable assigns and loops, like so:
{% assign ying_prompt="Your last thought was '{Yang}'. You breathe in and think: " %}
{% assign yang_prompt="Your last thought was '{Ying}'. You breathe out and think: " %}
digraph Daoism{
Ying [module=GPTModule, prompt="{{ying_prompt}}"]
Yang [module=GPTModule, prompt="{{yang_prompt}}"]
Ying -> Yang
Yang -> Ying
}
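For reference, once the Liquid assigns above are substituted, the template renders to a plain DOT file equivalent to:

```dot
digraph Daoism{
Ying [module=GPTModule, prompt="Your last thought was '{Yang}'. You breathe in and think: "]
Yang [module=GPTModule, prompt="Your last thought was '{Ying}'. You breathe out and think: "]
Ying -> Yang
Yang -> Ying
}
```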
Place your graphs in ./priv/graphs.
A simple syntax is provided for accessing historical inputs. If a module is receiving a signal from YourModule, you can reference it as {YourModule}. To reference the previous signal received from that module, reference it as {YourModule.1}, and so on. You can render the entire input/output history as {HISTORY}, and you can reference the input which triggered the current node execution as {INPUT}. This syntax is likely to expand and change.
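For example, a node definition combining these references might look like the following. This is a hypothetical sketch: the Critic and Writer node names are made up for illustration.

```dot
// Hypothetical node: Critic receives signals from Writer and compares the
// current draft ({Writer}) against the previous one ({Writer.1}).
Critic [module=GPTModule, prompt="Previous draft: '{Writer.1}'. New draft: '{Writer}'. Did it improve?"]
```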
GPTModule
GPTDecisionModule
BBTextModule
LiveInputModule
LiveOutputModule
ClockModule
AwaitModule
StartModule
PrintModule
PassthroughModule
Creating a module is very simple. All a module must do is implement handle_cast({:convey, event}, state) to receive inputs from other modules and, at the end of that function, call convey(output_value, state) to pass a message along.
So, the simplest passthrough module will be:
defmodule Helix.Modules.PassthroughModule do
use Helix.Modules.Module
def handle_cast({:convey, event}, state) do
{:noreply, convey(event, state)}
end
end
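Building on the same pattern, a slightly richer module can transform the event before conveying it. This is an illustrative sketch, not a module shipped with Helix, and it assumes the event payload is a plain string:

```elixir
# Hypothetical example module: upcases incoming text before passing it on.
defmodule Helix.Modules.UpcaseModule do
  use Helix.Modules.Module

  def handle_cast({:convey, event}, state) do
    # Transform the event, then convey the result to downstream modules.
    {:noreply, convey(String.upcase(event), state)}
  end
end
```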
ErrorModule
MixModule, ClockModule
OutputModule
TextInputModule
ImageInputModule, StableDiffusionModule, HuggingFaceModule
ImageOutputModule, WebSearchModule, WebExtractTextModule, UnixModule, GenModuleModule, AwaitModule
GPTDecisionModule
Bumblebee
SaveFileModule, LoadFileModule
Please feel free to play around with Helix! I encourage you to share your feedback, ideas, and experiments; please use GitHub issues for this.
If you'd like to make code contributions or submit graphs/modules, please send a pull request.
(c) Rich Jones, 2022+, AGPL.