nsbradford / SemanticSearch

Minimal RAG (Retrieval Augmented Generation) website with Pinecone, FastAPI, NextJS, MongoDB
https://semantic-search-six.vercel.app

Add session ID to llm request metadata #30

Closed — nsbradford closed this issue 1 year ago

nsbradford commented 1 year ago

Get the sessionId from the frontend in index.tsx and pass it through sendLLMRequest in api.ts. Add metadata to the llm endpoint in main.py and pass it through to llm.py; you may need to edit models.py accordingly.

ellipsis-dev[bot] commented 1 year ago

To have BitBuilder create a Pull Request with the implementation, the user who created the issue (@nsbradford) can comment below with "LGTM". If the plan is inaccurate, edit the issue description and write a comment with "replan".


Implementation Steps

  1. Add sessionId to LLMChatCompletionRequest interface
    • In the /ui/shared/api.ts file, add a new field sessionId to the LLMChatCompletionRequest interface. The sessionId should be of type string.
  2. Pass sessionId to sendLLMRequest
    • In the /ui/pages/index.tsx file, modify the sendLLMRequest function call inside the handleNewUserPrompt function to pass the sessionId along with the existing data.
  3. Add sessionId to LLMChatCompletionRequest model
    • In the /backend/models.py file, add a new field sessionId to the LLMChatCompletionRequest class. The sessionId should be of type str.
  4. Modify llm endpoint to accept sessionId
    • In the /backend/main.py file, modify the llm endpoint to accept sessionId in the request body. Extract the sessionId from the request and pass it to the llm_get function.
  5. Modify llm_get function to accept sessionId
    • In the /backend/llm.py file, modify the llm_get function to accept sessionId as an argument. Add the sessionId to the metadata dictionary that is passed to the acompletion function.
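The backend portion of the steps above (3–5) can be sketched as follows. This is a hypothetical illustration, not the repository's actual code: the field names besides sessionId, the helper build_llm_metadata, and the exact shape of the request model are assumptions.

```python
# Hypothetical sketch of the models.py / llm.py changes (assumes pydantic,
# which FastAPI uses for request models; actual field names may differ).
from pydantic import BaseModel


class LLMChatCompletionRequest(BaseModel):
    """Request body for the llm endpoint."""
    messages: list[dict]
    sessionId: str  # newly added field, supplied by the frontend


def build_llm_metadata(request: LLMChatCompletionRequest) -> dict:
    # Metadata dict that llm_get would forward to acompletion
    # (illustrative helper; the real code may build this inline).
    return {"sessionId": request.sessionId}
```

In the real endpoint, main.py would accept LLMChatCompletionRequest as the request body and hand request.sessionId to llm_get, which attaches it to the completion call's metadata.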

Generated with :heart: by www.bitbuilder.ai. Questions? Check out our documentation.

nsbradford commented 1 year ago

lgtm