nsbradford closed this issue 1 year ago
To have BitBuilder create a Pull Request with the implementation, the user who created the issue (@nsbradford) can comment below with "LGTM". If the plan is inaccurate, edit the issue description and write a comment with "replan".
In the `/ui/pages/index.tsx` file, modify the `handleNewUserPrompt` function to include the `sessionId` in the `sendLLMRequest` function call. The new function call should look like this:

```typescript
sendLLMRequest({
  model: 'gpt-3.5-turbo',
  messages: buildSummarizationPrompt(content, serverResponseMsg.results),
  sessionId: sessionId,
})
```
In the `/ui/shared/api.ts` file, modify the `sendLLMRequest` function to include `sessionId` in the request payload. The new payload should look like this:

```typescript
{ model: data.model, messages: data.messages, sessionId: data.sessionId }
```
In the `/backend/models.py` file, modify the `LLMChatCompletionRequest` model to include `sessionId` as a new field. The updated model should look like this:

```python
class LLMChatCompletionRequest(BaseModel):
    model: str
    messages: List[LLMChatCompletionMessage]
    sessionId: str
```
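As a sanity check on this change, the JSON payload the frontend sends must carry exactly the keys the updated model declares. A minimal sketch (the payload values here are hypothetical, and plain `json` stands in for pydantic validation):

```python
import json

# Hypothetical payload mirroring what the updated sendLLMRequest in
# ui/shared/api.ts would POST to the backend; the values are made up.
payload = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Summarize this."}],
    "sessionId": "abc-123",
})

# The updated LLMChatCompletionRequest declares model, messages, and
# sessionId, so the decoded payload must contain exactly those keys.
parsed = json.loads(payload)
assert set(parsed) == {"model", "messages", "sessionId"}
print(parsed["sessionId"])  # → abc-123
```

If the frontend omits `sessionId`, pydantic will reject the request with a 422, since the field is declared without a default.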
In the `/backend/main.py` file, modify the `llm` endpoint to extract `sessionId` from the request payload. The updated endpoint should look like this:

```python
@app.post("/llm/")
async def llm(request: LLMChatCompletionRequest):
    result = await llm_get(request.model, request.messages, request.sessionId)
    return {'text': result}
```
In the `/backend/llm.py` file, modify the `llm_get` function to include `sessionId` as a parameter and in the metadata of the `acompletion` function call. The updated function should look like this:

```python
async def llm_get(model: str, messages: List[LLMChatCompletionMessage], sessionId: str) -> str:
    print(f"Calling LLM {model}")
    response = await acompletion(
        model=model,
        messages=[m.dict() for m in messages],
        temperature=0,
        metadata={"environment": getEnvironment(), "sessionId": sessionId},
    )
    text = response.choices[0].message.content
    print(f"LLM response: {text}")
    return text
```
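To see the `sessionId` threaded through without calling a real model, the plumbing can be exercised against a stub. This is a sketch only: `acompletion` and `getEnvironment` below are stand-ins (the real ones come from litellm and the app), and the message list is plain dicts rather than pydantic models.

```python
import asyncio
from types import SimpleNamespace

captured = {}

def getEnvironment() -> str:
    # Stand-in for the app's real environment helper.
    return "dev"

async def acompletion(model, messages, temperature, metadata):
    # Stub for litellm's acompletion: record the metadata it was given and
    # return an object shaped like response.choices[0].message.content.
    captured["metadata"] = metadata
    msg = SimpleNamespace(content="stubbed reply")
    return SimpleNamespace(choices=[SimpleNamespace(message=msg)])

async def llm_get(model, messages, sessionId: str) -> str:
    # Same plumbing as the updated function; messages are already dicts here,
    # so the m.dict() conversion from the real code is omitted.
    response = await acompletion(
        model=model,
        messages=list(messages),
        temperature=0,
        metadata={"environment": getEnvironment(), "sessionId": sessionId},
    )
    return response.choices[0].message.content

text = asyncio.run(
    llm_get("gpt-3.5-turbo", [{"role": "user", "content": "hi"}], "abc-123")
)
print(captured["metadata"]["sessionId"])  # → abc-123
```

The assertion of interest is that whatever `sessionId` enters `llm_get` ends up in the `metadata` dict passed to the completion call.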
Generated with :heart: by www.bitbuilder.ai. Questions? Check out the documentation.
lgtm
Get `sessionId` from the frontend in `index.tsx` and pass it through `sendLLMRequest` in `api.ts`. Add metadata to the `llm` endpoint in `main.py` and pass it to `llm.py` - you might have to edit `models.py` accordingly.