nsbradford / SemanticSearch

Minimal RAG (Retrieval Augmented Generation) website with Pinecone, FastAPI, NextJS, MongoDB
https://semantic-search-six.vercel.app

[BitBuilder] Add session ID to llm request metadata #37

Closed · ellipsis-dev[bot] closed this 3 months ago

ellipsis-dev[bot] commented 1 year ago

Summary:

Issue: https://github.com/nsbradford/SemanticSearch/issues/36

Implementation:

  1. Add sessionId to the frontend (see the TypeScript sketch after this list)
    • In /ui/pages/index.tsx, modify the handleNewUserPrompt function to pass the sessionId into the sendLLMRequest call. The sessionId is already retrieved at the start of the PromptPage function, so it can be passed through directly, both in the payload and as the second argument: `const llmSummary = await sendLLMRequest({ model: 'gpt-3.5-turbo', messages: buildSummarizationPrompt(content, serverResponseMsg.results), sessionId }, sessionId)`
  2. Modify sendLLMRequest to accept sessionId
    • In /ui/shared/api.ts, change the sendLLMRequest signature to take sessionId as a second parameter: `export async function sendLLMRequest(data: LLMChatCompletionRequest, sessionId: string): Promise<string> {...}`. Include the sessionId in the request path: ``const response = await axios.post<{text: string}>(`${backendRootUrl}/llm/${sessionId}`, data);``
  3. Modify the LLMChatCompletionRequest model to include sessionId (steps 3–5 are shown in the Python sketch after this list)
    • In /backend/models.py, add a sessionId field to the LLMChatCompletionRequest model, so it carries `model: str`, `messages: List[LLMChatCompletionMessage]`, and `sessionId: str`.
  4. Modify the llm endpoint to accept sessionId
    • In /backend/main.py, change the llm endpoint to take sessionId as a path parameter: `@app.post('/llm/{sessionId}')`. Then pass it through to llm_get: `result = await llm_get(request.model, request.messages, sessionId)`
  5. Modify llm_get to accept sessionId
    • In /backend/llm.py, change the llm_get signature to `async def llm_get(model: str, messages: List[LLMChatCompletionMessage], sessionId: str) -> str`, and include the sessionId in the metadata of the acompletion call: `metadata={"environment": getEnvironment(), "sessionId": sessionId}`
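
For reference, a minimal sketch of the frontend side (steps 1 and 2). The interface shapes and the backendRootUrl definition here are assumptions for illustration; buildSummarizationPrompt and handleNewUserPrompt are the repo's existing helpers and aren't reproduced:

```typescript
import axios from 'axios';

// Message/request shapes assumed to mirror the repo's shared models;
// the exact definitions under /ui/shared/ may differ.
interface LLMChatCompletionMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface LLMChatCompletionRequest {
  model: string;
  messages: LLMChatCompletionMessage[];
  sessionId: string; // step 3's new field, mirrored on the frontend type
}

// Assumed config; the repo's actual backend root URL may be defined elsewhere.
const backendRootUrl = process.env.NEXT_PUBLIC_BACKEND_URL ?? 'http://localhost:8000';

// Step 2: sendLLMRequest takes sessionId as a second parameter and threads it
// into the endpoint path so the backend can attach it to the LLM call's metadata.
export async function sendLLMRequest(
  data: LLMChatCompletionRequest,
  sessionId: string
): Promise<string> {
  const response = await axios.post<{ text: string }>(
    `${backendRootUrl}/llm/${sessionId}`,
    data
  );
  return response.data.text;
}

// Step 1: the call site in handleNewUserPrompt would then look roughly like:
//   const llmSummary = await sendLLMRequest(
//     { model: 'gpt-3.5-turbo', messages: buildSummarizationPrompt(content, serverResponseMsg.results), sessionId },
//     sessionId,
//   );
```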
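And a corresponding sketch of the backend side (steps 3–5), assuming FastAPI plus litellm's acompletion (the plan's acompletion call suggests litellm); getEnvironment is a stand-in for the repo's actual environment helper:

```python
from typing import List

from fastapi import FastAPI
from litellm import acompletion  # assumed LLM client, per the plan's acompletion call
from pydantic import BaseModel

app = FastAPI()


class LLMChatCompletionMessage(BaseModel):
    role: str
    content: str


# Step 3: the request model gains a sessionId field.
class LLMChatCompletionRequest(BaseModel):
    model: str
    messages: List[LLMChatCompletionMessage]
    sessionId: str


def getEnvironment() -> str:
    # Stand-in for the repo's environment helper referenced in the plan.
    return "development"


# Step 5: llm_get accepts sessionId and forwards it in the completion call's metadata.
async def llm_get(model: str, messages: List[LLMChatCompletionMessage], sessionId: str) -> str:
    response = await acompletion(
        model=model,
        messages=[m.dict() for m in messages],
        metadata={"environment": getEnvironment(), "sessionId": sessionId},
    )
    return response.choices[0].message.content


# Step 4: the endpoint takes sessionId as a path parameter and threads it through.
@app.post("/llm/{sessionId}")
async def llm(sessionId: str, request: LLMChatCompletionRequest) -> dict:
    result = await llm_get(request.model, request.messages, sessionId)
    return {"text": result}
```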

Plan Feedback: Approved by @nsbradford

Something look wrong? If this Pull Request doesn't contain the expected changes, add more information to #36, then add the `bitbuilder:create` label to try again. For more information, check the documentation.

Generated with :heart: by www.bitbuilder.ai

vercel[bot] commented 1 year ago

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| semantic-search-mini | ❌ Failed (Inspect) | | | Sep 26, 2023 8:03pm |

ellipsis-dev[bot] commented 1 year ago

Sorry, BitBuilder encountered an error while addressing comments in this Pull Request. Please try again later. (wflow_hTrxhBkVLD7U4m7d) :robot: