JedWalton / lucidify

File Q&A with Vector Databases along with a ChatUI and bespoke datasets. Golang, Nodejs, Python, Weaviate, OpenAI, Postgresql, Automated testing, Docker.

feature/persistchatthreads #18

Open JedWalton opened 10 months ago

JedWalton commented 10 months ago

Implement features in the chat service to store/persist chat threads on a per-user basis.

JedWalton commented 10 months ago

Implementing the frontend with a fork of https://github.com/mckaywrigley/chatbot-ui

JedWalton commented 10 months ago

Created chatbot-ui clone in repo.

Goal: mimic chatbot-ui's export/import of data via our backend.

So when a user sends a message in the frontend, we make an API call to our backend, apply the appropriate transformations to the chat thread (e.g. apply the system prompt), and return it to the frontend, which handles the chat completion.

Once the frontend has run the chat completion and generated a response, that response must also be appended to the chat thread via our Go API.

The thread can then be imported as conversation history when the user logs out and logs back in, across many machines. The thread is now persisted.
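The append/persist loop above could be sketched on the Go side as a per-user thread store. This is a minimal in-memory sketch (names like `ThreadStore` and `AppendMessage` are illustrative, not from the codebase); the real service would back it with Postgres:

```go
package main

import (
	"fmt"
	"sync"
)

// Message is one entry in a chat thread.
type Message struct {
	Role    string // "system", "user", or "assistant"
	Content string
}

// ThreadStore keeps chat threads per user. In-memory here;
// the real service would persist to Postgres.
type ThreadStore struct {
	mu      sync.Mutex
	threads map[string][]Message // userID -> ordered messages
}

func NewThreadStore() *ThreadStore {
	return &ThreadStore{threads: make(map[string][]Message)}
}

// AppendMessage persists one message to a user's thread.
func (s *ThreadStore) AppendMessage(userID string, m Message) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.threads[userID] = append(s.threads[userID], m)
}

// Thread returns a copy of the user's conversation history,
// e.g. to re-import it after the user logs in on another machine.
func (s *ThreadStore) Thread(userID string) []Message {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make([]Message, len(s.threads[userID]))
	copy(out, s.threads[userID])
	return out
}

func main() {
	store := NewThreadStore()
	// 1. User sends a message; backend applies the system prompt and stores both.
	store.AppendMessage("user-1", Message{Role: "system", Content: "You are a helpful assistant."})
	store.AppendMessage("user-1", Message{Role: "user", Content: "Hello"})
	// 2. Frontend runs the chat completion and posts the reply back to the Go API.
	store.AppendMessage("user-1", Message{Role: "assistant", Content: "Hi there!"})
	fmt.Println(len(store.Thread("user-1"))) // prints 3
}
```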

JedWalton commented 10 months ago

Question:

Is it best to have users enter their own OPENAI_API_KEY in the FE and use that for generating all embeddings/chat completions, or is it better to have one key on the backend that we use?

Well, when we use the private GPT models it will have to be a per-user AZURE_OPENAI_API_KEY anyway, so it makes sense to take the low-hanging fruit of the existing OpenAI key.

Question: How many concurrent clients can a weaviate instance manage with one API key?

JedWalton commented 10 months ago

An example flow: the client obtains an API key from Azure, enters it in the frontend, and it is attached to their account. This key is then used for generating all embeddings/chat completions securely.

We offer long-term memory (a vector DB) and SOTA prompting techniques that let users converse with their private data corpus, along with other time-saving utilities.

JedWalton commented 10 months ago

To enable private LLM functionality, let companies use their private AZURE_OPENAI_API_KEY. That is another ticket.

This ticket is about getting the UI working and persisting threads.

JedWalton commented 10 months ago

So the backend will need to instantiate a Weaviate client that uses the private Azure OpenAI API key (and potentially an OpenAI client, although the chat thread is managed in the FE, so that may not be necessary).
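A sketch of wiring the per-organisation key into backend requests against Weaviate. The header names are assumptions to verify against the Weaviate docs for the deployed version: Weaviate's Azure OpenAI module reads the key from an `X-Azure-Api-Key` header, and instance auth is a bearer token; `/v1/.well-known/ready` is Weaviate's readiness endpoint.

```go
package main

import (
	"fmt"
	"net/http"
)

// newWeaviateRequest builds an authenticated request against a Weaviate
// instance, passing the organisation's Azure OpenAI key so Weaviate's
// vectorizer module can use it for embeddings. Header names are
// assumptions; check the Weaviate docs for the deployed version.
func newWeaviateRequest(baseURL, path, weaviateKey, azureOpenAIKey string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+weaviateKey)
	req.Header.Set("X-Azure-Api-Key", azureOpenAIKey) // per-organisation key
	return req, nil
}

func main() {
	req, err := newWeaviateRequest("http://localhost:8080", "/v1/.well-known/ready", "wv-key", "az-key")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
	fmt.Println(req.Header.Get("X-Azure-Api-Key"))
}
```

In practice the official weaviate-go-client would likely replace this raw `net/http` wiring; the point is that the key travels per request, so one backend can serve many organisations.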

Data models need to be created for persisting the FE's LocalStorage. This will be achieved via storageService.ts in chatbot-ui. The persisted data must include the private Azure OpenAI key.

Usage statistics must be recorded for each user.

Users must belong to an organisation. Each organisation will have one API key and one corpus of data, managed by the lucidify service.
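The per-organisation model and per-user usage recording described above might be sketched as follows (all names are illustrative, not from the codebase):

```go
package main

import "fmt"

// Organisation: one API key and one data corpus per organisation,
// managed by the lucidify service.
type Organisation struct {
	ID         string
	APIKey     string // the organisation's AZURE_OPENAI_API_KEY
	CorpusName string // the organisation's corpus in Weaviate
}

// UsageStats accumulates per-user usage for reporting.
type UsageStats struct {
	PromptTokens     int
	CompletionTokens int
	Requests         int
}

// User belongs to exactly one organisation.
type User struct {
	ID    string
	OrgID string
	Usage UsageStats
}

// RecordUsage adds the token counts of one completion to the user's stats.
func (u *User) RecordUsage(prompt, completion int) {
	u.Usage.PromptTokens += prompt
	u.Usage.CompletionTokens += completion
	u.Usage.Requests++
}

func main() {
	org := Organisation{ID: "acme", APIKey: "az-key", CorpusName: "AcmeDocs"}
	u := User{ID: "user-1", OrgID: org.ID}
	u.RecordUsage(120, 45)
	fmt.Println(u.Usage.Requests, u.Usage.PromptTokens+u.Usage.CompletionTokens) // prints 1 165
}
```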

The chatbot-ui allows users to log in, access their personal conversation history, and use their organisation's chat model.