Create a production-ready MVP for securely chatting with your documents.
This entire workshop was recorded as a YouTube video. Feel free to watch it here:
https://www.youtube.com/watch?v=ibzlEQmgPPY
Thanks for joining! Let's dive in.
Clone repo: Clone this repo at tag step-1:
git clone -b step-1 https://github.com/supabase-community/chatgpt-your-files.git
This will automatically clone at step 1, our starting point.
Git checkpoints: The workshop is broken down into steps (git tags). There's a step for every major feature we are building.
Feel free to follow along live with the presenter. When it's time to jump to the next step, run:
git stash push -u # stash your working directory
git checkout step-X # jump to a checkpoint (replace X with step #)
Step-by-step guide: These steps are written out line-by-line. Feel free to follow along using the steps below.
This repository includes 3 sample markdown files that we'll use to test the app:
./sample-files/roman-empire-1.md
./sample-files/roman-empire-2.md
./sample-files/roman-empire-3.md
Step 1 - Storage

Use this command to jump to the step-1 checkpoint:
git checkout step-1
We'll start by handling file uploads. Supabase has a built-in object storage (backed by S3 under the hood) that integrates directly with your Postgres database.
First install NPM dependencies.
npm i
When developing a project in Supabase, you can choose to develop locally or directly on the cloud.
If developing locally, start a local version of Supabase (runs in Docker).
npx supabase start
Store the Supabase URL & public anon key in .env.local
for Next.js.
npx supabase status -o env \
--override-name api.url=NEXT_PUBLIC_SUPABASE_URL \
--override-name auth.anon_key=NEXT_PUBLIC_SUPABASE_ANON_KEY |
grep NEXT_PUBLIC > .env.local
If you are developing directly on the cloud, create a Supabase project at https://database.new, or via the CLI:
npx supabase projects create -i "ChatGPT Your Files"
Your Org ID can be found in the URL after selecting an org.
Link your CLI to the project.
npx supabase link --project-ref=<project-id>
You can get the project ID from the general settings page.
Store Supabase URL & public anon key in .env.local
for Next.js.
NEXT_PUBLIC_SUPABASE_URL=<api-url>
NEXT_PUBLIC_SUPABASE_ANON_KEY=<anon-key>
You can get the project API URL and anonymous key from the API settings page.
Create migration file.
npx supabase migration new files
A new file will be created under ./supabase/migrations.
Within that file, create a private schema.
create schema private;
Add a bucket called 'files' via the buckets table in the storage schema.
insert into storage.buckets (id, name)
values ('files', 'files')
on conflict do nothing;
Add RLS policies to restrict access to files.
create policy "Authenticated users can upload files"
on storage.objects for insert to authenticated with check (
bucket_id = 'files' and owner = auth.uid()
);
create policy "Users can view their own files"
on storage.objects for select to authenticated using (
bucket_id = 'files' and owner = auth.uid()
);
create policy "Users can update their own files"
on storage.objects for update to authenticated with check (
bucket_id = 'files' and owner = auth.uid()
);
create policy "Users can delete their own files"
on storage.objects for delete to authenticated using (
bucket_id = 'files' and owner = auth.uid()
);
Next let's update ./app/files/page.tsx
to support file upload.
Set up the Supabase client at the top of the component.
const supabase = createClientComponentClient();
Handle the file upload in the <Input>'s onChange prop.
await supabase.storage
.from('files')
.upload(`${crypto.randomUUID()}/${selectedFile.name}`, selectedFile);
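For context, here is a minimal sketch of what the full onChange handler could look like (the error toast and exact variable names are illustrative assumptions, not code from the repo):

<Input
  type="file"
  name="file"
  onChange={async (e) => {
    // Grab the file the user selected (if any)
    const selectedFile = e.target.files?.[0];
    if (!selectedFile) {
      return;
    }

    // Upload the file under a random UUID path prefix
    const { error } = await supabase.storage
      .from('files')
      .upload(`${crypto.randomUUID()}/${selectedFile.name}`, selectedFile);

    if (error) {
      toast({
        variant: 'destructive',
        description: 'There was an error uploading the file. Please try again.',
      });
    }
  }}
/>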
We can improve our previous RLS policy to require a UUID in the uploaded file path.
Create a uuid_or_null() function.
create or replace function private.uuid_or_null(str text)
returns uuid
language plpgsql
as $$
begin
return str::uuid;
exception when invalid_text_representation then
return null;
end;
$$;
Modify insert policy to check for UUID in the first path segment (Postgres arrays are 1-based).
create policy "Authenticated users can upload files"
on storage.objects for insert to authenticated with check (
bucket_id = 'files' and
owner = auth.uid() and
private.uuid_or_null(path_tokens[1]) is not null
);
Apply the migration to our local database.
npx supabase migration up
or if you are developing directly on the cloud, push your migrations up:
npx supabase db push
Step 2 - Documents

Use these commands to jump to the step-2 checkpoint:
git stash push -u -m "my work on step-1"
git checkout step-2
Next we'll need to process our files for retrieval augmented generation (RAG). Specifically we'll split the contents of our markdown documents by heading, which will allow us to query smaller and more meaningful sections.
Let's create documents and document_sections tables to store our processed files.
Create migration file.
npx supabase migration new documents
Enable the pgvector and pg_net extensions. We'll use pg_net later to send HTTP requests to our Edge Functions.
create extension if not exists pg_net with schema extensions;
create extension if not exists vector with schema extensions;
Create documents
table.
create table documents (
id bigint primary key generated always as identity,
name text not null,
storage_object_id uuid not null references storage.objects (id),
created_by uuid not null references auth.users (id) default auth.uid(),
created_at timestamp with time zone not null default now()
);
We'll also create a view documents_with_storage_path
that provides easy access to the storage object path.
create view documents_with_storage_path
with (security_invoker=true)
as
select documents.*, storage.objects.name as storage_object_path
from documents
join storage.objects
on storage.objects.id = documents.storage_object_id;
Create document_sections
table.
create table document_sections (
id bigint primary key generated always as identity,
document_id bigint not null references documents (id),
content text not null,
embedding vector (384)
);
_Note: Since the video was published, on delete cascade was added as a new migration so that the lifecycle of document_sections is tied to their respective document._
alter table document_sections
drop constraint document_sections_document_id_fkey,
add constraint document_sections_document_id_fkey
foreign key (document_id)
references documents(id)
on delete cascade;
Add an HNSW index. Unlike IVFFlat indexes, HNSW indexes can be created immediately on an empty table.
create index on document_sections using hnsw (embedding vector_ip_ops);
Set up RLS to control who has access to which documents.
alter table documents enable row level security;
alter table document_sections enable row level security;
create policy "Users can insert documents"
on documents for insert to authenticated with check (
auth.uid() = created_by
);
create policy "Users can query their own documents"
on documents for select to authenticated using (
auth.uid() = created_by
);
create policy "Users can insert document sections"
on document_sections for insert to authenticated with check (
document_id in (
select id
from documents
where created_by = auth.uid()
)
);
create policy "Users can update their own document sections"
on document_sections for update to authenticated using (
document_id in (
select id
from documents
where created_by = auth.uid()
)
) with check (
document_id in (
select id
from documents
where created_by = auth.uid()
)
);
create policy "Users can query their own document sections"
on document_sections for select to authenticated using (
document_id in (
select id
from documents
where created_by = auth.uid()
)
);
If developing locally, add a supabase_url secret to ./supabase/seed.sql. We will use this to query our Edge Functions within our local environment.
select vault.create_secret(
'http://api.supabase.internal:8000',
'supabase_url'
);
If you are developing directly on the cloud, open up the SQL Editor and set this to your Supabase project's API URL:
select vault.create_secret(
'<api-url>',
'supabase_url'
);
You can get the project API URL from the API settings page.
Create a function to retrieve the URL.
create function supabase_url()
returns text
language plpgsql
security definer
as $$
declare
secret_value text;
begin
select decrypted_secret into secret_value from vault.decrypted_secrets where name = 'supabase_url';
return secret_value;
end;
$$;
Create a trigger to process new documents when they're inserted. This uses pg_net
to send an HTTP request to our Edge Function (coming up next).
create function private.handle_storage_update()
returns trigger
language plpgsql
as $$
declare
document_id bigint;
result int;
begin
insert into documents (name, storage_object_id, created_by)
values (new.path_tokens[2], new.id, new.owner)
returning id into document_id;
select
net.http_post(
url := supabase_url() || '/functions/v1/process',
headers := jsonb_build_object(
'Content-Type', 'application/json',
'Authorization', current_setting('request.headers')::json->>'authorization'
),
body := jsonb_build_object(
'document_id', document_id
)
)
into result;
return null;
end;
$$;
create trigger on_file_upload
after insert on storage.objects
for each row
execute procedure private.handle_storage_update();
Apply the migration to our local database.
npx supabase migration up
or if you are developing directly on the cloud, push your migrations up:
npx supabase db push
process Edge Function
Create the Edge Function file.
npx supabase functions new process
This will create the file ./supabase/functions/process/index.ts.
Make sure you have the latest version of deno installed:

brew install deno
First, let's note how dependencies are resolved using an import map: ./supabase/functions/import_map.json.
Import maps aren't required in Deno, but they can simplify imports and keep dependency versions consistent. All of our dependencies come from NPM, but since we're using Deno we fetch them from a CDN like https://esm.sh or https://cdn.jsdelivr.net.
{
"imports": {
"@std/": "https://deno.land/std@0.168.0/",
"@supabase/supabase-js": "https://esm.sh/@supabase/supabase-js@2.21.0",
"openai": "https://esm.sh/openai@4.10.0",
"common-tags": "https://esm.sh/common-tags@1.8.2",
"ai": "https://esm.sh/ai@2.2.13",
"mdast-util-from-markdown": "https://esm.sh/mdast-util-from-markdown@2.0.0",
"mdast-util-to-markdown": "https://esm.sh/mdast-util-to-markdown@2.1.0",
"mdast-util-to-string": "https://esm.sh/mdast-util-to-string@4.0.0",
"unist-builder": "https://esm.sh/unist-builder@4.0.0",
"mdast": "https://esm.sh/v132/@types/mdast@4.0.0/index.d.ts",
"https://esm.sh/v132/decode-named-character-reference@1.0.2/esnext/decode-named-character-reference.mjs": "https://esm.sh/decode-named-character-reference@1.0.2?target=deno"
}
}
Note: URL based imports and import maps aren't a Deno invention. These are a web standard that Deno follows as closely as possible.
In process/index.ts, first grab the Supabase environment variables.
import { createClient } from '@supabase/supabase-js';
import { processMarkdown } from '../_lib/markdown-parser.ts';
// These are automatically injected
const supabaseUrl = Deno.env.get('SUPABASE_URL');
const supabaseAnonKey = Deno.env.get('SUPABASE_ANON_KEY');
Deno.serve(async (req) => {
if (!supabaseUrl || !supabaseAnonKey) {
return new Response(
JSON.stringify({
error: 'Missing environment variables.',
}),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
});
Note: These environment variables are automatically injected into the edge runtime for you. Even so, we check for their existence as a TypeScript best practice (type narrowing).
(Optional) If you are using VS Code, you may get prompted to cache your imported dependencies. You can do this by hitting cmd+shift+p and typing >Deno: Cache Dependencies.
Create Supabase client and configure it to inherit the original user’s permissions via the authorization header. This way we can continue to take advantage of our RLS policies.
const authorization = req.headers.get('Authorization');
if (!authorization) {
return new Response(
JSON.stringify({ error: `No authorization header passed` }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
const supabase = createClient(supabaseUrl, supabaseAnonKey, {
global: {
headers: {
authorization,
},
},
auth: {
persistSession: false,
},
});
Grab the document_id
from the request body and query it.
const { document_id } = await req.json();
const { data: document } = await supabase
.from('documents_with_storage_path')
.select()
.eq('id', document_id)
.single();
if (!document?.storage_object_path) {
return new Response(
JSON.stringify({ error: 'Failed to find uploaded document' }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
Use the Supabase client to download the file by storage path.
const { data: file } = await supabase.storage
.from('files')
.download(document.storage_object_path);
if (!file) {
return new Response(
JSON.stringify({ error: 'Failed to download storage object' }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
const fileContents = await file.text();
Process the markdown file and store the resulting subsections into the document_sections
table.
Note: processMarkdown()
is pre-built into this repository for convenience. Feel free to read through its code to learn how it splits the markdown content.
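For intuition, here is a simplified, hypothetical sketch of heading-based splitting (the real implementation lives in ./supabase/functions/_lib/markdown-parser.ts and parses the document into an mdast tree, handling nested headings and other edge cases):

// Hypothetical, simplified illustration only -- not the repo's implementation.
// Splits a markdown string into sections, starting a new section at each heading.
function splitByHeading(markdown: string): { content: string }[] {
  const chunks: string[] = [];

  for (const line of markdown.split('\n')) {
    if (/^#{1,6}\s/.test(line) || chunks.length === 0) {
      // A heading (or the very first line) starts a new section
      chunks.push(line);
    } else {
      chunks[chunks.length - 1] += `\n${line}`;
    }
  }

  return chunks.map((content) => ({ content: content.trim() }));
}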
const processedMd = processMarkdown(fileContents);
const { error } = await supabase.from('document_sections').insert(
processedMd.sections.map(({ content }) => ({
document_id,
content,
}))
);
if (error) {
console.error(error);
return new Response(
JSON.stringify({ error: 'Failed to save document sections' }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
console.log(
`Saved ${processedMd.sections.length} sections for file '${document.name}'`
);
Return a success response.
return new Response(null, {
status: 204,
headers: { 'Content-Type': 'application/json' },
});
If developing locally, open a new terminal and serve the edge functions.
npx supabase functions serve
Note: Local Edge Functions are automatically served as part of npx supabase start
, but this command allows us to also monitor their logs.
If you're developing directly on the cloud, deploy your edge function:
npx supabase functions deploy
Let's update ./app/files/page.tsx
to list out the uploaded documents.
At the top of the component, fetch documents using the useQuery
hook:
const { data: documents } = useQuery(['files'], async () => {
const { data, error } = await supabase
.from('documents_with_storage_path')
.select();
if (error) {
toast({
variant: 'destructive',
description: 'Failed to fetch documents',
});
throw error;
}
return data;
});
In each document's onClick
handler, download the respective file.
const { data, error } = await supabase.storage
.from('files')
.createSignedUrl(document.storage_object_path, 60);
if (error) {
toast({
variant: 'destructive',
description: 'Failed to download file. Please try again.',
});
return;
}
window.location.href = data.signedUrl;
Step 3 - Embeddings

Use these commands to jump to the step-3 checkpoint:
git stash push -u -m "my work on step-2"
git checkout step-3
Now let's add logic to generate embeddings automatically anytime new rows are added to the document_sections
table.
Create a migration file.
npx supabase migration new embed
Create an embed() trigger function. We'll use this general-purpose trigger function to asynchronously generate embeddings on arbitrary tables using a new embed Edge Function (coming up).
create function private.embed()
returns trigger
language plpgsql
as $$
declare
content_column text = TG_ARGV[0];
embedding_column text = TG_ARGV[1];
batch_size int = case when array_length(TG_ARGV, 1) >= 3 then TG_ARGV[2]::int else 5 end;
timeout_milliseconds int = case when array_length(TG_ARGV, 1) >= 4 then TG_ARGV[3]::int else 5 * 60 * 1000 end;
batch_count int = ceiling((select count(*) from inserted) / batch_size::float);
begin
-- Loop through each batch and invoke an edge function to handle the embedding generation
for i in 0 .. (batch_count-1) loop
perform
net.http_post(
url := supabase_url() || '/functions/v1/embed',
headers := jsonb_build_object(
'Content-Type', 'application/json',
'Authorization', current_setting('request.headers')::json->>'authorization'
),
body := jsonb_build_object(
'ids', (select json_agg(ds.id) from (select id from inserted limit batch_size offset i*batch_size) ds),
'table', TG_TABLE_NAME,
'contentColumn', content_column,
'embeddingColumn', embedding_column
),
timeout_milliseconds := timeout_milliseconds
);
end loop;
return null;
end;
$$;
Add the embed trigger to the document_sections table.
create trigger embed_document_sections
after insert on document_sections
referencing new table as inserted
for each statement
execute procedure private.embed(content, embedding);
Note we pass 2 trigger arguments to embed(): the name of the column containing the text content (content) and the name of the column to store the embedding in (embedding).

There are also 2 more optional trigger arguments: the batch size (how many rows are sent per Edge Function invocation, defaults to 5) and the timeout in milliseconds for each request (defaults to 5 minutes). For example:
create trigger embed_document_sections
after insert on document_sections
referencing new table as inserted
for each statement
execute procedure private.embed(content, embedding, 5, 300000);
Feel free to adjust these according to your needs. A larger batch size will require a longer timeout per request, since each invocation will have more embeddings to generate. A smaller batch size can use a lower timeout.
Apply the migration to our local database.
npx supabase migration up
or if you are developing directly on the cloud, push your migrations up:
npx supabase db push
embed Edge Function
Create edge function file.
npx supabase functions new embed
In embed/index.ts, create an inference session using Supabase's AI inference engine.
// Setup type definitions for built-in Supabase Runtime APIs
/// <reference types="https://esm.sh/@supabase/functions-js/src/edge-runtime.d.ts" />
import { createClient } from '@supabase/supabase-js';
const model = new Supabase.ai.Session('gte-small');
Note: The original code from the video tutorial used Transformers.js to perform inference in the Edge Function. We've since released Supabase.ai APIs that can perform inference natively within the runtime itself (vs. WASM) which is faster and uses less CPU time.
Just like before, grab the Supabase variables and check for their existence (type narrowing).
// These are automatically injected
const supabaseUrl = Deno.env.get('SUPABASE_URL');
const supabaseAnonKey = Deno.env.get('SUPABASE_ANON_KEY');
Deno.serve(async (req) => {
if (!supabaseUrl || !supabaseAnonKey) {
return new Response(
JSON.stringify({
error: 'Missing environment variables.',
}),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
});
Create a Supabase client and configure it to inherit the user's permissions.
const authorization = req.headers.get('Authorization');
if (!authorization) {
return new Response(
JSON.stringify({ error: `No authorization header passed` }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
const supabase = createClient(supabaseUrl, supabaseAnonKey, {
global: {
headers: {
authorization,
},
},
auth: {
persistSession: false,
},
});
Fetch the text content from the specified table/column.
const { ids, table, contentColumn, embeddingColumn } = await req.json();
const { data: rows, error: selectError } = await supabase
.from(table)
.select(`id, ${contentColumn}` as '*')
.in('id', ids)
.is(embeddingColumn, null);
if (selectError) {
return new Response(JSON.stringify({ error: selectError }), {
status: 500,
headers: { 'Content-Type': 'application/json' },
});
}
Generate an embedding for each piece of text and update the respective rows.
for (const row of rows) {
const { id, [contentColumn]: content } = row;
if (!content) {
console.error(`No content available in column '${contentColumn}'`);
continue;
}
const output = (await model.run(content, {
mean_pool: true,
normalize: true,
})) as number[];
const embedding = JSON.stringify(output);
const { error } = await supabase
  .from(table)
  .update({
    [embeddingColumn]: embedding,
  })
  .eq('id', id);
if (error) {
console.error(
`Failed to save embedding on '${table}' table with id ${id}`
);
}
console.log(
`Generated embedding ${JSON.stringify({
table,
id,
contentColumn,
embeddingColumn,
})}`
);
}
Return a success response.
return new Response(null, {
status: 204,
headers: { 'Content-Type': 'application/json' },
});
If you're developing directly on the cloud, deploy your edge function:
npx supabase functions deploy
Step 4 - Chat

Use these commands to jump to the step-4 checkpoint:
git stash push -u -m "my work on step-3"
git checkout step-4
Finally, let's implement the chat functionality. For this workshop, we're going to generate our query embedding client-side using a new custom hook called usePipeline().
Install dependencies
npm i @xenova/transformers ai
We'll use Transformers.js to perform inference directly in the browser.
Configure next.config.js
to support Transformers.js
webpack: (config) => {
config.resolve.alias = {
...config.resolve.alias,
sharp$: false,
'onnxruntime-node$': false,
};
return config;
},
Import dependencies
import { usePipeline } from '@/lib/hooks/use-pipeline';
import { createClientComponentClient } from '@supabase/auth-helpers-nextjs';
import { useChat } from 'ai/react';
Note: usePipeline()
was pre-built into this repository for convenience. It uses Web Workers to asynchronously generate embeddings in another thread using Transformers.js.
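For intuition only, here is a simplified, hypothetical version of such a hook. Unlike the real usePipeline(), this sketch runs Transformers.js on the main thread instead of inside a Web Worker:

import { pipeline } from '@xenova/transformers';
import { useEffect, useState } from 'react';

// Simplified illustration only -- the real hook offloads loading and inference to a Web Worker.
export function useSimplePipeline(task: 'feature-extraction', model: string) {
  const [pipe, setPipe] = useState<Awaited<ReturnType<typeof pipeline>>>();

  useEffect(() => {
    let cancelled = false;

    // Downloads and initializes the model on first use
    pipeline(task, model).then((p) => {
      // Pipelines are callable, so wrap in a function to avoid React
      // treating the value as a state updater
      if (!cancelled) setPipe(() => p);
    });

    return () => {
      cancelled = true;
    };
  }, [task, model]);

  // Returns undefined until the model has finished loading
  return pipe;
}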
Create a Supabase client in chat/page.tsx.
const supabase = createClientComponentClient();
Create embedding pipeline.
const generateEmbedding = usePipeline(
'feature-extraction',
'Supabase/gte-small'
);
Note: it's important that the embedding model you set here matches the model used in the Edge Function, otherwise your future matching logic will be meaningless.
_Transformers.js requires models to exist in the ONNX format. Specifically, the Hugging Face model you specify in the pipeline must have an .onnx file under its ./onnx folder, otherwise you will see the error Could not locate file [...] xxx.onnx. Check out this explanation for more details. To convert an existing model (e.g. PyTorch, TensorFlow, etc.) to ONNX, see the custom usage documentation._
Manage chat messages and state with useChat().
const { messages, input, handleInputChange, handleSubmit, isLoading } =
useChat({
api: `${process.env.NEXT_PUBLIC_SUPABASE_URL}/functions/v1/chat`,
});
Note: useChat()
is a convenience hook provided by Vercel's ai
package to handle chat message state and streaming. We'll point it to an Edge Function called chat
(coming up).
Set the ready status to true when the pipeline has loaded.
const isReady = !!generateEmbedding;
Connect input
and handleInputChange
to our <Input>
props.
<Input
type="text"
autoFocus
placeholder="Send a message"
value={input}
onChange={handleInputChange}
/>
Generate an embedding and submit messages on form submit.
if (!generateEmbedding) {
throw new Error('Unable to generate embeddings');
}
const output = await generateEmbedding(input, {
pooling: 'mean',
normalize: true,
});
const embedding = JSON.stringify(Array.from(output.data));
const {
data: { session },
} = await supabase.auth.getSession();
if (!session) {
return;
}
handleSubmit(e, {
options: {
headers: {
authorization: `Bearer ${session.access_token}`,
},
body: {
embedding,
},
},
});
Disable send button until the component is ready.
<Button type="submit" disabled={!isReady}>
Send
</Button>
Create a migration file for a new match function.
npx supabase migration new match
Create a match_document_sections
Postgres function.
create or replace function match_document_sections(
embedding vector(384),
match_threshold float
)
returns setof document_sections
language plpgsql
as $$
#variable_conflict use_variable
begin
return query
select *
from document_sections
where document_sections.embedding <#> embedding < -match_threshold
order by document_sections.embedding <#> embedding;
end;
$$;
This function uses pgvector's negative inner product operator <#>
to perform similarity search. Inner product requires fewer computations than other distance functions like cosine distance <=>, and therefore provides better query performance.
Note: Our embeddings are normalized, so inner product and cosine similarity are equivalent in terms of output. Note though that pgvector's <=>
operator is cosine distance, not cosine similarity, so inner product == 1 - cosine distance
.
We also filter by a match_threshold
in order to return only the most relevant results (1 = most similar, -1 = most dissimilar).
_Note: match_threshold is negated because <#> is a negative inner product. See the pgvector docs for more details on why <#> is negative._
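Concretely, for unit-length embeddings $a$ (a stored section) and $q$ (the query):

$$
a \cdot q = \lVert a \rVert \, \lVert q \rVert \cos\theta = \cos\theta
$$

pgvector's <#> operator returns $-(a \cdot q)$, so writing $t$ for match_threshold, the filter $-(a \cdot q) < -t$ is equivalent to $a \cdot q > t$: we keep exactly the sections whose similarity to the query exceeds the threshold.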
Apply the migration to our local database.
npx supabase migration up
or if you are developing directly on the cloud, push your migrations up:
npx supabase db push
chat Edge Function

Note: In this tutorial we use models provided by OpenAI to implement the chat logic. However, since this tutorial was made, many new LLM providers have become available. Whichever provider you choose, you can reuse the code below (which uses the OpenAI library) as long as it offers an OpenAI-compatible API. We'll discuss how to do this using Ollama, but the same logic applies to any other OpenAI-compatible provider.
First generate an API key from OpenAI and save it in supabase/functions/.env.
cat > supabase/functions/.env <<- EOF
OPENAI_API_KEY=<your-api-key>
EOF
Create the edge function file.
npx supabase functions new chat
Load the OpenAI and Supabase keys.
import { createClient } from '@supabase/supabase-js';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { codeBlock } from 'common-tags';
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: Deno.env.get('OPENAI_API_KEY'),
});
// These are automatically injected
const supabaseUrl = Deno.env.get('SUPABASE_URL');
const supabaseAnonKey = Deno.env.get('SUPABASE_ANON_KEY');
Since our frontend is served at a different domain origin than our Edge Function, we must handle cross origin resource sharing (CORS).
export const corsHeaders = {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Headers':
'authorization, x-client-info, apikey, content-type',
};
Deno.serve(async (req) => {
// Handle CORS
if (req.method === 'OPTIONS') {
return new Response('ok', { headers: corsHeaders });
}
});
Handle CORS simply by checking for an OPTIONS
HTTP request and returning the CORS headers (*
= allow any domain). In production, consider limiting the origins to specific domains that serve your frontend.
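For example, a production configuration might restrict the origin to the single domain that serves your frontend (the URL below is just a placeholder):

export const corsHeaders = {
  // Placeholder origin -- replace with the domain that actually serves your frontend
  'Access-Control-Allow-Origin': 'https://your-app.example.com',
  'Access-Control-Allow-Headers':
    'authorization, x-client-info, apikey, content-type',
};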
Check for environment variables and create Supabase client.
if (!supabaseUrl || !supabaseAnonKey) {
return new Response(
JSON.stringify({
error: 'Missing environment variables.',
}),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
const authorization = req.headers.get('Authorization');
if (!authorization) {
return new Response(
JSON.stringify({ error: `No authorization header passed` }),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
const supabase = createClient(supabaseUrl, supabaseAnonKey, {
global: {
headers: {
authorization,
},
},
auth: {
persistSession: false,
},
});
The first step of RAG is to perform similarity search using our match_document_sections()
function. Postgres functions are executed using the .rpc()
method.
const { chatId, message, messages, embedding } = await req.json();
const { data: documents, error: matchError } = await supabase
.rpc('match_document_sections', {
embedding,
match_threshold: 0.8,
})
.select('content')
.limit(5);
if (matchError) {
console.error(matchError);
return new Response(
JSON.stringify({
error: 'There was an error reading your documents, please try again.',
}),
{
status: 500,
headers: { 'Content-Type': 'application/json' },
}
);
}
The second step of RAG is to build our prompt, injecting in the relevant documents retrieved from our previous similarity search.
const injectedDocs =
documents && documents.length > 0
? documents.map(({ content }) => content).join('\n\n')
: 'No documents found';
const completionMessages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] =
[
{
role: 'user',
content: codeBlock`
You're an AI assistant who answers questions about documents.
You're a chat bot, so keep your replies succinct.
You're only allowed to use the documents below to answer the question.
If the question isn't related to these documents, say:
"Sorry, I couldn't find any information on that."
If the information isn't available in the below documents, say:
"Sorry, I couldn't find any information on that."
Do not go off topic.
Documents:
${injectedDocs}
`,
},
...messages,
];
Note: the codeBlock template tag is a convenience function that strips the shared leading indentation from our multiline string. This allows us to format the template nicely in our source code while keeping the intended indentation in the resulting prompt.
Finally, create a completion stream and return it.
const completionStream = await openai.chat.completions.create({
model: 'gpt-3.5-turbo-0125',
messages: completionMessages,
max_tokens: 1024,
temperature: 0,
stream: true,
});
const stream = OpenAIStream(completionStream);
return new StreamingTextResponse(stream, { headers: corsHeaders });
OpenAIStream
and StreamingTextResponse
are convenience helpers from Vercel's ai
package that translate OpenAI's response stream into a format that useChat()
understands on the frontend.
Note: we must also return CORS headers here (or anywhere else we send a response).
If you're developing directly on the cloud, set your OPENAI_API_KEY
secret in the cloud:
npx supabase secrets set OPENAI_API_KEY=<openai-key>
Then deploy your edge function:
npx supabase functions deploy
Let's try it out! Upload the sample Roman Empire files and try asking some questions about their contents.
Step 5 - Database Types (Bonus)

Use these commands to jump to the step-5 checkpoint:
git stash push -u -m "my work on step-4"
git checkout step-5
You may have noticed that all of our DB data coming back from the supabase
client has had an any
type (such as documents
or document_sections
). This isn't great, since we're missing relevant type information and lose type safety (making our app more error-prone).
The Supabase CLI comes with a built-in command to generate TypeScript types based on your database's schema.
Generate TypeScript types based on local DB schema.
npx supabase gen types typescript --local > supabase/functions/_lib/database.ts
Add the <Database>
generic to all Supabase clients across our project.
In React
import { Database } from '@/supabase/functions/_lib/database';
const supabase = createClientComponentClient<Database>();
import { Database } from '@/supabase/functions/_lib/database';
const supabase = createServerComponentClient<Database>();
In Edge Functions
import { Database } from '../_lib/database.ts';
const supabase = createClient<Database>(...);
Fix type errors 😃
Looks like we found a type error in ./app/files/page.tsx! Let's add this check to the top of the document's click handler (type narrowing).
if (!document.storage_object_path) {
toast({
variant: 'destructive',
description: 'Failed to download file, please try again.',
});
return;
}
🎉 Congrats! You've built your own full stack pgvector app in 2 hours.
If you would like to jump directly to the completed app, simply check out the main branch:
git checkout main
If you've been developing the app locally, follow these instructions to deploy your app to a production Supabase project.
Create a Supabase project at https://database.new, or via the CLI:
npx supabase projects create -i "ChatGPT Your Files"
Link the CLI with your Supabase project.
npx supabase link --project-ref=<project-ref>
You can grab your project's reference ID in your project’s settings.
Push migrations to remote database.
npx supabase db push
Set Edge Function secrets (OpenAI key).
npx supabase secrets set OPENAI_API_KEY=<openai-key>
Deploy Edge Functions.
npx supabase functions deploy
Deploy to Vercel (or CDN of your choice - must support Next.js API routes for authentication).
Be sure to set NEXT_PUBLIC_SUPABASE_URL
and NEXT_PUBLIC_SUPABASE_ANON_KEY
for your Supabase project.
You can find these in your project’s API settings.
Feel free to extend this app in any way you like. Here are some ideas for next steps:
Please file feedback and issues on this repo's issue board.