Archyve is a web app that makes pretrained LLMs aware of a user's documents, while keeping those documents on the user's own devices and infrastructure.
Features go through these phases:

| RAG | Knowledge Graph | OpenAI Support | Ollama/OpenAI Proxy Port |
|---|---|---|---|
| ✅ Stable | 🧪 Experimental | ✅ Stable | 🚧 In Development |
Archyve enables Retrieval-Augmented Generation (RAG) by providing an API to query the user's docs for relevant context. The Archyve client provides Archyve with the prompt the user entered, and Archyve will return relevant text chunks that the client can include in its prompt to an LLM server.
Archyve has an experimental Knowledge Graph feature (a.k.a. "Graph RAG").
Archyve provides a web UI and a REST API for querying your documents. For LLM servers, Archyve supports Ollama, OpenAI, and Azure OpenAI.
The Getting Started section will walk you through setting up Archyve for use with Ollama.
The admin UI is available to users with the `admin` attribute set to `true`; you can break things in there, but right now that is the way to configure settings that control Archyve's behaviour.

To run Archyve, use `docker compose`.
1. `cp dotenv_template local.env`
2. Run `openssl rand -hex 64` and put the value in the `SECRET_KEY_BASE` variable in your `local.env` file
3. `docker compose up --build`
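The `SECRET_KEY_BASE` value can be generated and appended in one command (a sketch; it assumes the variable isn't already set in `local.env`, in which case edit the file instead):

```shell
# generate a 64-byte hex secret and append it as SECRET_KEY_BASE
echo "SECRET_KEY_BASE=$(openssl rand -hex 64)" >> local.env
```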
If you see an "archyve-worker Error" line, don't worry about it; Docker will build the image and run it.
If you see an error like `failed to solve: error from sender: open /home/.../archyve/deps/postgres: permission denied`, it's because of a Docker bug on Linux that makes it traverse directories under the build dir even if they're listed in the `.dockerignore` file. Work around this with `mv deps ../archyve-deps`. You'll need to `mv ../archyve-deps deps` when you want to run the app on your host again.
Log in with `admin@archyve.io` / `password` (you can change these values by setting `USERNAME` and `PASSWORD` in your `local.env` file and restarting the container).

WARNING: The container will write a file with local encryption keys into `config/local`. If you lose this file, the application will not be able to decrypt sensitive data in the database (e.g. passwords or API keys), and the database will need to be reset, losing all data. If you want to migrate your database elsewhere, migrate this file along with it.
This section is about running Archyve directly on your host machine.
If you want to develop Archyve, you probably want the app running directly on your host rather than in a container, to reduce the time it takes to try your changes.
- Ensure that you have brew installed
- Ensure that you have docker set up, with the compose plugins installed, and a "machine" configured and ready to pull and run container images
- Ensure you have ops installed: `brew install nickthecook/crops/ops`
- Ensure `ollama serve` is running and that you have the minimum models installed (see the section on Ollama further below)

Then:

1. `cp config/dev/config.sample.json config/dev/config.json`
2. Edit `config/dev/config.json`, running the given commands and replacing them in the file with their output (use `config/dev/secrets.ejson` for any real environment)
3. Make sure you have one model configured with `embedding: true` and one without `embedding` set (or with `embedding: false`)
4. `ops up`
5. `ops rails db:setup`
6. `ops rails neo4j:migrate` (after the initial setup)
7. `ops server`

Open `http://127.0.0.1:3300/` in a browser and you can log in using `admin@archyve.io` and `password`.
`ops` loads environment variables from `config/dev/config.json` and `config/dev/secrets.ejson`. It loads actions and dependencies from `ops.yml` in your working directory. It will save you time.

You may want to `alias rails="ops rails"` if you're a Rails dev and the `rails ...` muscle memory is hard to change.
There are more details on how to deploy Archyve to production.
Archyve provides a web interface with a few different areas, focused on different things:
The Admin area is available to users with the `admin` attribute set to `true`, and allows admins to manage all aspects of Archyve.

You can break things in the admin UI. When starting out, it's best to use it only to configure ModelServers, ModelConfigs, and Settings.
Documents uploaded to any Collection go through a few stages:
If a stage takes longer than a second, the progress is shown in the UI. Once a document is in the `Embedded` state, you can test searching it to see what chunks would be returned for a given chat prompt.
When the Collection has the Knowledge Graph feature enabled, Documents also go through:
Then the Collection will go through a few stages:
Once you have a document in the `Extracted` state, you can view the most commonly referenced entities that were extracted from Documents in a Collection. Click the "Top 10 entities by occurrences" tab on the Collection page to see the 10 entities in that Collection that appeared more often in Documents than any others.
Relationships are not yet visible in the Archyve UI, although you are presented with a count of the number of relationships each entity has.
Once you have a Collection in the `Graphed` state, you can view the knowledge graph in the Neo4j web interface.

You can query the graph using Cypher queries. E.g., for a Collection called "Greek Mythology", use this query to show the complete graph, with all relationships:

```cypher
match (n:`Nodes::Entity` {collection_name: "Greek Mythology"}) return (n)
```
Archyve doesn't yet do anything with the data once it's in Neo4j, but it will in the future; in the meantime, the Neo4j interface can be useful in assessing the quality of entity extraction.
See archyve.io for details on the REST API.

Archyve provides a REST API. To use it, you must have:

- a client ID (sent in the `X-Client-Id` header in all API requests)
- an API key (sent in the `Authorization` header after `Bearer`)

TODO: add this to the UI
If you are running the app on your host, you can set the `DEFAULT_API_KEY` and `DEFAULT_CLIENT_ID` environment variables. On startup, Archyve will ensure that a client with these credentials exists in its database.

- `DEFAULT_API_KEY` must be a 48-byte value encoded in base64. Generate a key with `openssl rand -base64 48`.
- `DEFAULT_CLIENT_ID` can be any string, but it should be unique to your app. A UUID is recommended.

If you are running the app via `docker compose`, set the above two environment variables in your `local.env` file and restart the containers. If you are running the app on your host, set the two environment variables in `config/dev/config.json` and run `rails db:seed`.
You should be able to send API requests like this:
```shell
curl -v localhost:3300/v1/collections \
  -H "Accept: application/json" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "X-Client-Id: <YOUR_CLIENT_ID>"
```
See archyve.io for more information on the API.
See the next section for setting up Ollama for use by Archyve; without it, document uploads and chat will fail.
If you are running Archyve directly on your host, you will have `ops` set up and you can use `ops request` to send requests to the Archyve API. E.g.:

```shell
$ ops request search q=hello
$ ops request collections/1/entities/1
$ ops request version
```

It will handle setting the authorization header for you.
You can run a dedicated instance of Ollama in a container by adding it to the `compose.yaml` file, but it takes a while to pull a chat model, so the default here is to assume you already have an Ollama instance.
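You can check whether a local Ollama instance is reachable before going further (a sketch assuming Ollama's default port 11434; `/api/version` is part of Ollama's HTTP API):

```shell
# prints the server version if Ollama is up, otherwise a warning
curl -fsS http://localhost:11434/api/version || echo "Ollama is not running on localhost:11434"
```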
Archyve will use a local instance of Ollama by default. Ensure you have Ollama installed and running (with `ollama serve`) and then run the following commands to set up your Ollama instance for Archyve:
```shell
ollama pull nomic-embed-text
ollama pull llama3.1:8b
ollama pull phi3:latest
```
You can change which models Archyve will use in the Admin UI (`/admin`) under Settings in the menu on the left.
Keep in mind that whichever models you have Archyve use, you will need to pull those models in Ollama yourself (for now).
Archyve uses the `nomic-embed-text` model for all embeddings. Please make sure it is present in your Ollama server.
Changing the embedding model means you will need to delete all your Collections and re-ingest them, as all the embeddings will have changed and similarity search will not function. Archyve has not been tested with embedding models other than `nomic-embed-text`.
The summarization model is the model Archyve will use to summarize chats so it can set a brief title for each one, based on the first message in the chat.
`phi3` is fast and usually decent at this, but you can change this model in the Admin UI (`/admin`) under Settings -> "summarization_model".
If you enable the Knowledge Graph for any Collections, Archyve will default to using `llama3.1:8b` as the entity extraction model. If you change this, you may get poor results from the KG. However, I'd be very interested to hear about your experience if you do try another model!
Archyve works with OpenAI, but you need to provision one ModelServer and one ModelConfig first.
Go to "Admin" using the button in the bottom left, then click "ModelServers". Click the "+" in the top right of the ModelServer list and enter this info:
- Name: `OpenAI` (or whatever you want)
- URL: `https://api.openai.com/v1`
- Provider: `openai`
- API key: `sk-<the rest of your OpenAI API key>`

Click "Save".
From the Admin UI, click ModelConfigs on the left. Click the "+" in the top right of ModelConfig list and enter this info:
- Name: `gpt-4o` (or any valid OpenAI model you can chat with - see here for a list)

Leave "Embedding" unchecked - this is not an embedding model. Click "Save".
You can now select this Model from the drop-down in a Conversation. To make this your default generation model, go to the ModelConfigs list in the Admin UI, click the three dots menu on the right side of your model config, then click "Use for chat".
These are not supported at the moment. Archyve needs to support the OpenAI embedding response format, make it easy for users to use multiple embedding models at the same time, and make the limitations of doing that clear.
Archyve works with Azure OpenAI, but you need to provision one ModelServer and one ModelConfig first.
Go to "Admin" using the button in the bottom left, then click "ModelServers". Click the "+" in the top right of the ModelServer list and enter this info:
- Name: `Azure OpenAI` (or whatever you want)
- Provider: `openai_azure`

Click "Save".
From the Admin UI, click ModelConfigs on the left. Click the "+" in the top right of ModelConfig list and enter this info:
- Name: `gpt-4o` (or any valid OpenAI model you can chat with - see here for a list)

Leave "Embedding" unchecked - this is not an embedding model. Click "Save".
You can now select this Model from the drop-down in a Conversation. To make this your default generation model, go to the ModelConfigs list in the Admin UI, click the three dots menu on the right side of your model config, then click "Use for chat".
These are not supported at the moment. Archyve needs to support the OpenAI embedding response format, make it easy for users to use multiple embedding models at the same time, and make the limitations of doing that clear.
There is an admin UI running at http://127.0.0.1:3300/admin. There, you can view and change Settings, ModelConfigs, and ModelServers if you are logged in as an admin.
There is a link to it in the bottom of the side bar if you are logged into Archyve as an admin.
Archyve uses a jobs framework called Sidekiq. It has a web UI that you can access at http://127.0.0.1:3300/sidekiq if you are logged in as an admin.
Archyve uses the term "Knowledge Graph" instead of Graph RAG because "RAG" is ambiguous. Everything is "retrieved" from somewhere.
Archyve has an experimental Knowledge Graph (KG) feature (a.k.a. "Graph RAG"). The user enables the feature per-Collection. It is safe to use, but will generate orders of magnitude more calls to your LLM server than not using it.
If you enable KG in any Collections, it is highly recommended that you use `llama3.1` as your `entity_extraction_model`. Many other models are simply not able to extract entities in a meaningful way with the prompts used by Archyve.
This feature is a work-in-progress, but it seems effective at providing relevant context at the moment. It will extract entities and the relationships between those entities from the text of your documents. It will then create a summary of each entity and store that in the vector database. These entity summaries will be returned by Archyve API search along with relevant chunks of your document.
If a Collection has KG enabled, chatting with Archyve about that Collection will use the KG to augment your prompts.
Archyve KG is based on Microsoft's Graph RAG project. Archyve goes as far as extracting entities and relationships from uploaded documents, but does not yet perform community detection or entity deduplication.