Closed ton77v closed 10 months ago
🤖
Hello @ton77v,
It's good to see you again! I remember our discussions from the previous issue. I'm currently reviewing your new feature request and will be back with a comprehensive response shortly.
🤖
It seems like you've got this under control. If you want help or have specific questions, let me know what I can do for you!
I could add a pull request once I have this ready.
@ton77v A PR for this would be great :)
Cool, I should start working on a project involving LlamaIndex and the Assistants API in a few days, so I'll create one a bit later.
Feature Description
OpenAIAssistantAgent can currently be initialized via the from_new class method.
This method performs an API call to create a new assistant. Later on, we access self._assistant.id to execute a Run.
I propose adding a new method, e.g. "from_existing", with assistant_id (string) as its only parameter. Instead of storing the whole Assistant instance, we could store only this ID; it is returned by the "create" API endpoint.
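A minimal sketch of what the proposed method could look like. This is illustrative only, not the actual llama_index implementation: the class body is simplified, and `from_new`'s parameters are assumptions based on the OpenAI Assistants "create" endpoint.

```python
class OpenAIAssistantAgent:
    """Simplified stand-in for llama_index's agent, for illustration only."""

    def __init__(self, assistant_id: str) -> None:
        # Store only the assistant ID rather than the full Assistant object.
        self._assistant_id = assistant_id

    @classmethod
    def from_new(cls, client, name: str, instructions: str, model: str) -> "OpenAIAssistantAgent":
        # Existing behaviour: an API call creates a brand-new assistant.
        assistant = client.beta.assistants.create(
            name=name, instructions=instructions, model=model
        )
        return cls(assistant_id=assistant.id)

    @classmethod
    def from_existing(cls, assistant_id: str) -> "OpenAIAssistantAgent":
        # Proposed behaviour: no API call at all; just remember the ID
        # that the "create" endpoint returned earlier.
        return cls(assistant_id=assistant_id)
```

With this, a previously created assistant can be reused via `OpenAIAssistantAgent.from_existing("asst_...")` without any network round trip at construction time.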
Reason
At the moment, the only way to achieve this functionality is to subclass OpenAIAssistantAgent and implement an additional method, either mocking self._assistant so that its .id can be used for Runs, or retrieving the whole assistant from the API.
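The workaround described above can be sketched roughly as follows. The base class here is a stand-in so the example is self-contained, and the subclass and method names are hypothetical:

```python
from types import SimpleNamespace

class OpenAIAssistantAgent:
    """Stand-in for llama_index's agent; the real class reads
    self._assistant.id when executing a Run."""

    def _run_assistant_id(self) -> str:
        # The real implementation accesses self._assistant.id here.
        return self._assistant.id

class ReusableAssistantAgent(OpenAIAssistantAgent):
    """Hypothetical subclass: mock self._assistant so only .id is used."""

    @classmethod
    def from_assistant_id(cls, assistant_id: str) -> "ReusableAssistantAgent":
        # Bypass __init__ (which would create a new assistant via the API)
        # and fake the stored assistant with an object exposing only .id.
        agent = cls.__new__(cls)
        agent._assistant = SimpleNamespace(id=assistant_id)
        return agent
```

This works, but it relies on the private `_assistant` attribute, which is exactly why a first-class `from_existing` method would be preferable.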
Value of Feature
We will perform fewer API calls: instead of creating a new Assistant every time, we re-use an existing one, saving time and resources. If OpenAI limits the number of Assistants, this will be especially useful.
The same functionality is already implemented in LangChain.