Zaki-1052 / GPTPortal

A feature-rich portal to chat with GPT-4, Claude, Gemini, Mistral, & OpenAI Assistant APIs via a lightweight Node.js web app; supports customizable multimodality for voice, images, & files.
http://localhost:3000/portal
MIT License

Adding support for OpenAI custom assistants? #5

Closed · FarisZR closed this issue 8 months ago

FarisZR commented 8 months ago

First, thanks for the great project! It would be great if you could add support for the Assistants API, which lets users use the custom assistants they created on the dev platform. Assistants, unlike normal ChatGPT, aren't just custom prompts: they can read files like CSVs, which can't easily be replicated with a system prompt.

https://platform.openai.com/docs/assistants/overview
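
For illustration, this is roughly what the beta API makes possible and what a system prompt can't replicate: a file upload attached to an assistant that has the code interpreter enabled. This is just a sketch against the openai-node v4 beta; the file name and model here are placeholders, not anything from this repo:

// Sketch only: upload a CSV and create an assistant that can read it with the code interpreter.
// The file name and model are placeholders; this is not code from GPTPortal.
const fs = require('fs');
const OpenAI = require('openai');
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function csvAssistantExample() {
  // Upload the file for use with the Assistants API
  const file = await openai.files.create({
    file: fs.createReadStream('sales.csv'),
    purpose: 'assistants'
  });
  // Attach it to a new assistant with the code interpreter tool enabled
  const assistant = await openai.beta.assistants.create({
    name: 'CSV Assistant',
    instructions: 'Answer questions about the attached CSV.',
    tools: [{ type: 'code_interpreter' }],
    model: 'gpt-4-turbo-preview',
    file_ids: [file.id]
  });
  console.log('Assistant with file access:', assistant.id);
}

csvAssistantExample();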

Zaki-1052 commented 8 months ago

Thanks! I'm actually already aware, and while I'm a bigger proponent of manual handling and embedding models if I can get them to work, the Assistants API branch is in progress, as mentioned in the release. I've been busy with classes lately, but I intend to commit my changes this weekend once I add multiple file object handling for custom assistants. I'll update/create a branch and close the issue once I push the new version! Thanks for your feedback and the suggestion.

Zaki-1052 commented 8 months ago

I just pushed my changes to the new assistants branch. It retains the same functionality, though I removed some of the other models that don't apply, like Gemini and Vision, since the former seems unnecessary for this specific function and the latter doesn't work with the Assistants API. You can chat and select models as normal, and also upload files for GPT to access with its code interpreter. I did want to figure out embeddings on my main branch, which I'll try to do this weekend; if I'm too busy, I'll just make a new "Assistants Mode" on main and merge my relevant changes. There are still a few quality changes to make, and I need to ensure some other features like Voice Mode weren't broken, but I can confirm that regular assistant creation and chat, along with file uploads, all work perfectly, as does the export, which also ends the server. I'll write some documentation before tomorrow. Hope you like it and the changes to come!

FarisZR commented 8 months ago

Great! I will test it and report back.

FarisZR commented 8 months ago

Hey, can you give me more details about how to use this? Where do I specify the assistant? Do I have to get a special API key? Etc. The branch's readme is the same as main's.

Zaki-1052 commented 8 months ago

Right, sorry, I forgot to do the documentation! The .env file is the same; just put your OpenAI key in the same format as the .env.example file. You don't need to specify the assistant, though you can edit its instructions in instructions.md like before. Once you send the first message (or upload a file, whichever comes first), it creates the assistant with the system prompt and the model that's selected (defaults to gpt-4-turbo-preview), begins a thread, and adds your message to the thread.

Then it starts a run, receiving and displaying the response, and any subsequent messages you send will be added to the thread of the assistant created with the first message. You can continue to add files with the top-right file button, and they'll be attached to the assistant of that server instance. You can inspect the different assistants in your OpenAI dashboard; a new one is created each time you start the server. I think that's all; there's no special API key or anything like that (code interpreter pricing is billed on top of the model costs in your usage).
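
If it helps to see the shape of it, the per-message flow maps onto the beta Assistants API roughly like this (a simplified sketch, not the exact server.js code; the polling helper and one-second delay are just illustrative):

// Simplified sketch of the per-message flow against the openai-node v4 beta;
// not the exact server.js code, just the general shape of it.
async function sendToAssistant(openai, assistant, thread, userText) {
  // Add the user's message to the existing thread
  await openai.beta.threads.messages.create(thread.id, {
    role: 'user',
    content: userText
  });

  // Start a run against the selected assistant and poll until it finishes
  let run = await openai.beta.threads.runs.create(thread.id, { assistant_id: assistant.id });
  while (run.status === 'queued' || run.status === 'in_progress') {
    await new Promise(resolve => setTimeout(resolve, 1000));
    run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
    console.log('Run Status:', run.status);
  }

  // The newest message (the assistant's reply) comes back first in the list
  const messages = await openai.beta.threads.messages.list(thread.id);
  return messages.data[0].content[0].text.value;
}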

Let me know if I forgot anything; will put it in the ReadMe this afternoon!

FarisZR commented 8 months ago

Great! But can I use an already existing assistant, or will it create a new assistant on every new chat?

Zaki-1052 commented 8 months ago

It'll create a new Assistant on every new chat. You can go into the code and specify the ID if you like; for now, they only persist across a single server instance. I can definitely add something later where you can choose from a previous one, though I don't want to introduce too much database bloat or anything like that.

FarisZR commented 8 months ago

It would be good enough if we could just specify the assistant ID to be used; it's displayed in the OpenAI dashboard. It doesn't have to save the messages.

Zaki-1052 commented 8 months ago

Alright, thanks for the feedback! I'll add a conditional, then, that takes the assistant ID from an environment variable and otherwise creates a new one; hopefully I can push tonight!

FarisZR commented 8 months ago

Yup, that would be great, since it should automatically import the prompt and any files and additional settings from OpenAI.

Zaki-1052 commented 8 months ago

@FarisZR Hey! Since you're the one who asked, I thought I'd get your quick feedback before I push the change and write the documentation, to make sure I'm getting your desired functionality right. Basically, the new code will check whether you have an Assistant ID stored in the .env file, and if there is one, it will start a new thread (NOT continue the same conversation) but with the settings of that Assistant (custom instructions, model, and the system message specified when it was created). The code change is literally just this:

// Check if the Assistant ID is provided via .env and use it
  if (ASSISTANT_ID && !thread) {
    assistant = await openai.beta.assistants.retrieve(
      ASSISTANT_ID
    );
    thread = await openai.beta.threads.create();
    console.log("Using existing Assistant ID from .env, Thread ensured:", thread);
  } else if (!assistant || !thread) {
    // Create a new Assistant and Thread if not using an ID from .env
    assistant = await openai.beta.assistants.create({
      name: "Assistant",
      instructions: systemMessage,
      tools: [{ type: "code_interpreter" }],
      model: modelID
    });
    thread = await openai.beta.threads.create();
    console.log("Assistant and Thread ensured:", assistant, thread);
  }

So the terminal output would look like:

zakiralibhai@My-MacBook-Air PortalTest % node server.js
Server running at http://localhost:3000
Initialize: true
Assistant and Thread ensured: {
  id: 'asst_DwaJp4C2hGmnCx6Amck7OedJ',
  object: 'assistant',
  created_at: 1707000885,
  name: 'Assistant',
  description: null,
  model: 'gpt-3.5-turbo',
  instructions: 'You are a helpful and intelligent AI assistant, knowledgeable about a wide range of topics and highly capable of a great many tasks.',
  tools: [ { type: 'code_interpreter' } ],
  file_ids: [],
  metadata: {}
} {
  id: 'thread_ylLE44suL9Vx0Yqm3Yj7ANL8',
  object: 'thread',
  created_at: 1707000885,
  metadata: {}
}
Run Status: in_progress
Run Status: completed
Response: Hello! How can I assist you today?
Try Response: { text: 'Hello! How can I assist you today?' }
^C
zakiralibhai@My-MacBook-Air PortalTest % node server.js
Server running at http://localhost:3000
Initialize: true
Using existing Assistant ID from .env, Thread ensured: {
  id: 'thread_bIgB6nBeo3zwCNjakev4ZjM2',
  object: 'thread',
  created_at: 1707000949,
  metadata: {}
}
Run Status: in_progress
Run Status: completed
Response: Hello! How can I assist you today?
Try Response: { text: 'Hello! How can I assist you today?' }
^C
zakiralibhai@My-MacBook-Air PortalTest %

and in the .env file, I just had:

ASSISTANT_ID=asst_DwaJp4C2hGmnCx6Amck7OedJ

Is there anything else you're looking for with this feature? You have to know and input the ID before you run the server; otherwise it will just create a new Assistant as usual.


EDIT: Actually, I was inspecting my dashboard and didn't realize that OAI stores the threads on their server as well (you have to adjust your settings to show them to organization members), so you can use previous threads too. I'll include the explanation in the new readme when I write it. New test:

zakiralibhai@My-MacBook-Air PortalTest % node server.js
Server running at http://localhost:3000
Initialize: true
Using existing Assistant ID {
  id: 'asst_DwaJp4C2hGmnCx6Amck7OedJ',
  object: 'assistant',
  created_at: 1707000885,
  name: 'Assistant',
  description: null,
  model: 'gpt-3.5-turbo',
  instructions: 'You are a helpful and intelligent AI assistant, knowledgeable about a wide range of topics and highly capable of a great many tasks.',
  tools: [ { type: 'code_interpreter' } ],
  file_ids: [],
  metadata: {}
}
New Thread ensured: {
  id: 'thread_Jyw2woT80ie4t9vglMCpCVjD',
  object: 'thread',
  created_at: 1707002090,
  metadata: {}
}
Run Status: in_progress
Run Status: completed
Response: Hello! How can I assist you today?
Try Response: { text: 'Hello! How can I assist you today?' }
^C
zakiralibhai@My-MacBook-Air PortalTest % node server.js
Server running at http://localhost:3000
Initialize: true
Creating new Assistant: {
  id: 'asst_ECizyPzSh9hUliOmOQID1oIZ',
  object: 'assistant',
  created_at: 1707002118,
  name: 'Assistant',
  description: null,
  model: 'gpt-3.5-turbo',
  instructions: 'You are a helpful and intelligent AI assistant, knowledgeable about a wide range of topics and highly capable of a great many tasks.',
  tools: [ { type: 'code_interpreter' } ],
  file_ids: [],
  metadata: {}
}
New Thread ensured: {
  id: 'thread_yOsgUSNepAgqm5S54iwiIP4m',
  object: 'thread',
  created_at: 1707002118,
  metadata: {}
}
Run Status: in_progress
Run Status: completed
Response: Hello! How can I assist you today?
Try Response: { text: 'Hello! How can I assist you today?' }
^C
zakiralibhai@My-MacBook-Air PortalTest % node server.js
Server running at http://localhost:3000
Initialize: true
Using existing Assistant ID {
  id: 'asst_ECizyPzSh9hUliOmOQID1oIZ',
  object: 'assistant',
  created_at: 1707002118,
  name: 'Assistant',
  description: null,
  model: 'gpt-3.5-turbo',
  instructions: 'You are a helpful and intelligent AI assistant, knowledgeable about a wide range of topics and highly capable of a great many tasks.',
  tools: [ { type: 'code_interpreter' } ],
  file_ids: [],
  metadata: {}
}
Using existing Thread ID from .env {
  id: 'thread_yOsgUSNepAgqm5S54iwiIP4m',
  object: 'thread',
  created_at: 1707002118,
  metadata: {}
}
Run Status: in_progress
Run Status: in_progress
Run Status: completed
Response: Thank you for asking! As an AI, I don't have feelings, but I'm here and ready to help you with any questions or tasks you have. How can I assist you today?
Try Response: {
  text: "Thank you for asking! As an AI, I don't have feelings, but I'm here and ready to help you with any questions or tasks you have. How can I assist you today?"
}
^C
zakiralibhai@My-MacBook-Air PortalTest % 

Code:

// Assistant Handling

// At the top of server.js, after reading environment variables
const ASSISTANT_ID = process.env.ASSISTANT_ID || null;
const THREAD_ID = process.env.THREAD_ID || null;

let systemMessage = null;  // Global variable for systemMessage
let assistant = null;
let thread = null;
let response = '';
let initialize = true;
let messages;

// Utility function to ensure Assistant and Thread initialization
async function AssistantAndThread(modelID) {
  // Conditional logic to either use provided IDs or create new instances
  if (!assistant && ASSISTANT_ID) {
    // Set the assistant using the ID from the environment
    assistant = await openai.beta.assistants.retrieve(
      ASSISTANT_ID
    );
    console.log("Using existing Assistant ID", assistant)
  }

  if (!thread && THREAD_ID) {
    // Directly use the provided Thread ID without creating a new thread
    thread = await openai.beta.threads.retrieve(
      THREAD_ID
    );
    console.log("Using existing Thread ID from .env", thread);
  } else if (!thread) {
    // Only create a new thread if it's not provided and an assistant exists
    // This could mean creating a new assistant if one wasn't provided
    if (!assistant) {
      assistant = await openai.beta.assistants.create({
        name: "Assistant",
        instructions: systemMessage,
        tools: [{ type: "code_interpreter" }],
        model: modelID
      });
      console.log("Creating new Assistant:", assistant)
    }
    thread = await openai.beta.threads.create();
    console.log("New Thread ensured:", thread);
  }
}

.env is just:

ASSISTANT_ID=asst_ECizyPzSh9hUliOmOQID1oIZ
THREAD_ID=thread_yOsgUSNepAgqm5S54iwiIP4m

along with my key and such, of course.

FarisZR commented 8 months ago

Yup, that should be it, if it works with assistants created in the OpenAI dashboard. Do I have to supply the prompt locally, or will it load the prompt, file IDs, and other settings automatically from OpenAI?


Zaki-1052 commented 8 months ago

It'll all load automatically from OpenAI; you just need the ID of the Assistant/Thread, and the prompt, files, etc. are all loaded from there. I haven't tested with one created from the dashboard, but it worked when I created the Assistant from my command line. Just pushed, enjoy!
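
If you want to sanity-check what actually gets pulled down, something along these lines will print the instructions, model, and attached files straight from the retrieve calls (just a sketch; the IDs would be the ones from your dashboard/.env):

// Sketch: confirm the prompt, model, and files come from OpenAI rather than the local .env.
async function inspectAssistant(openai, assistantId, threadId) {
  const assistant = await openai.beta.assistants.retrieve(assistantId);
  console.log('Instructions:', assistant.instructions);
  console.log('Model:', assistant.model);
  console.log('Attached files:', assistant.file_ids);

  if (threadId) {
    // Prior messages on an existing thread are stored server-side too
    const messages = await openai.beta.threads.messages.list(threadId);
    console.log('Messages on thread:', messages.data.length);
  }
}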

FarisZR commented 8 months ago

Hey, it seems you changed something, and the program now listens only on [::1]:3000. This won't work in Docker; it needs to listen on 0.0.0.0:3000.

This seems to affect only the assistants branch.
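
For Docker, the listen call would need an explicit host, something like this (just a sketch; app and port are whatever server.js already defines):

// Bind explicitly to 0.0.0.0 so the port is reachable from outside the container.
// Sketch only: `app` and `port` are whatever server.js already defines.
app.listen(port, '0.0.0.0', () => {
  console.log(`Server running at http://localhost:${port}`);
});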