supercorp-ai / superinterface

Superinterface is an AI assistants library for building AI capabilities into your app or website. You use React components and hooks to build AI-first assistants-based interfaces like chats and wizards.
https://superinterface.ai

Passing auth data to API #5

Open henrik-foreflight opened 1 month ago

henrik-foreflight commented 1 month ago

This is a super interesting project you are working on; it looks great! For the past week, I have been on a Google hunt to figure out whether anyone was building a UI layer on top of the OpenAI Assistants API, or even better, an abstraction layer that would allow jumping between different LLM providers. GPTs make prototyping super fast, but the jump from there seems daunting: I would have to come up with a front-end myself, or, even worse if using other more barebones LLM APIs, deal with compressing threads etc. on my own. So I was excited to find Superinterface.

One point of feedback that made me pause before setting up an assistant on Superinterface was the lack (or my oversight) of a way to pass auth headers to the function calls. In my case, and I imagine for many others, the API needs to know which user (customer) it is dealing with in order to look up the right data.

Slightly related, another cool feature would be a hook to inject or append to the prompt when a new thread is created. I want to seed the system prompt with a couple of user preferences and some context about the user the assistant is interacting with.

Nedomas commented 1 month ago

Hi @henrik-foreflight , thanks for the great ideas! Fortunately, we’re currently working on the first one (or at least a variation of it), and in general we are trying to solve both of these, as they are being requested by more people.

Regarding API user auth: we’re currently building a way for the assistant to call functions defined on the client side. In your example, you could simply define a function on window or via a React hook, and the assistant would be able to call it. The serialized results of these client-side function calls would then be passed back to the assistant as the function output. The benefit of this is that auth tokens would never leak to the Superinterface backend, because we would never call any authed API on the user’s behalf from our backend. Would def love to hear whether this would solve it for you.
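To illustrate the idea, something like this rough sketch could work (the function name, the fetch endpoint, and the way it’s exposed on `window` are all just an example here, not the final API):

```ts
// Rough sketch only - illustrative names, not a final Superinterface API.
// The idea: the function runs in the browser, so the auth token stays
// client-side, and only the serialized return value is sent back to the
// assistant as the function call output.
async function getCurrentUserOrders({ status }: { status: string }) {
  const response = await fetch(`https://api.example.com/orders?status=${status}`, {
    headers: {
      // The token is read in the browser and never reaches the
      // Superinterface backend.
      Authorization: `Bearer ${localStorage.getItem('accessToken')}`,
    },
  })

  // Whatever is returned here would be serialized and passed back to the
  // assistant as the output of the function call.
  return response.json()
}

// Hypothetical registration point - exposing the function on window
// (it could just as well be registered via a React hook).
;(window as any).getCurrentUserOrders = getCurrentUserOrders
```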

If that does not work for you, there’s an alternative, trivial feature we could add. We do a bad job of explaining how to set up API Request functions, but the arguments that the assistant passes to the function are also merged into the body of the request (for POST requests) or into the URL search params (for GET requests). If certain args would be better passed as request headers instead of in the body/search params, we could expose a small config layer where you could map which args go where (headers or body/search params). We are actively working on improving the UX/DX of function calls as we add more function types.
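To make that concrete, the mapping could be as simple as something like this (just a sketch of the idea, the exact shape isn’t decided):

```ts
// Hypothetical config sketch - nothing like this exists yet.
// Each assistant-provided argument is mapped to where it should end up
// in the outgoing request.
const apiRequestFunctionConfig = {
  argMapping: {
    userId: 'header',    // sent as a request header
    authToken: 'header', // sent as a request header
    query: 'body',       // merged into the POST body (or URL search params for GET)
  },
} as const
```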

If none of these sound like a great fit for your use case, I’d love to hear how you think it would work best!

Regarding prompt (instructions) injection: do you see this being done on the front-end (passing context to hooks/components) or on the back-end (possibly via some API requests, etc.)? An alternative way around it would be to have functions that the assistant calls to learn about the context it is in. Would love your input.
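If the front-end route makes sense to you, it could look roughly like this (the component, the props, and the idea of building an instructions string on the client are purely illustrative, none of this exists yet):

```tsx
import * as React from 'react'

// Hypothetical sketch - illustrative only, not an existing Superinterface API.
// The idea: user context is assembled on the client and appended to the
// assistant instructions whenever a new thread is created.
type UserContext = {
  name: string
  preferredUnits: 'metric' | 'imperial'
}

const buildAdditionalInstructions = (user: UserContext) =>
  `The user's name is ${user.name}. They prefer ${user.preferredUnits} units.`

const AssistantWithContext = ({ user }: { user: UserContext }) => {
  const additionalInstructions = React.useMemo(
    () => buildAdditionalInstructions(user),
    [user],
  )

  // In a real integration this string would be passed to whichever
  // component or hook creates the thread, so it gets appended to the
  // system prompt on thread creation.
  return <pre>{additionalInstructions}</pre>
}

export default AssistantWithContext
```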

Anyways, thank you so much