microsoft / kernel-memory

RAG architecture: index and query any data using LLM and natural language, track sources, show citations, asynchronous memory patterns.
https://microsoft.github.io/kernel-memory
MIT License

This model's maximum context length is 8192 tokens - question #516

Closed. dyardy closed this issue 1 month ago.

dyardy commented 1 month ago

Context / Scenario

See below.

Question

I am running the service on my workstation and running the dotnet-webclient sample against it.

I am seeing the following error.

Is this an error raised while generating embeddings? I am using the Azure ada model and do not see any limit setting there. I am also not sure how to set a length limit when generating the embeddings.

Ideas? (much appreciated)

[18:00:24.333] warn: Microsoft.KernelMemory.Search.SearchClient[0]
      No memories available
[18:01:30.133] fail: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware[1]
      An unhandled exception has occurred while executing the request.
      Azure.RequestFailedException: This model's maximum context length is 8192 tokens. However, your messages resulted in 10383 tokens. Please reduce the length of the messages.
      Status: 400 (model_error)
      ErrorCode: context_length_exceeded
      Content:
      {
        "error": {
          "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 10383 tokens. Please reduce the length of the messages.",
          "type": "invalid_request_error",
          "param": "messages",
          "code": "context_length_exceeded"
        }
      }
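If the limit is hit at ingestion time, one common mitigation is to shrink the text partitions Kernel Memory sends to the embedding model. Below is a minimal sketch, assuming a Kernel Memory version that exposes TextPartitioningOptions and the WithCustomTextPartitioningOptions builder extension (names may differ across releases); the endpoint, key, and deployment name are placeholders:

```csharp
using Microsoft.KernelMemory;
using Microsoft.KernelMemory.Configuration;

// Placeholder Azure OpenAI settings for the ada embedding deployment.
var embeddingConfig = new AzureOpenAIConfig
{
    Auth = AzureOpenAIConfig.AuthTypes.APIKey,
    APIKey = "...",
    Endpoint = "https://contoso.openai.azure.com/",
    Deployment = "text-embedding-ada-002",
};

var memory = new KernelMemoryBuilder()
    .WithAzureOpenAITextEmbeddingGeneration(embeddingConfig)
    // Keep each partition well under ada-002's 8192-token limit.
    .WithCustomTextPartitioningOptions(new TextPartitioningOptions
    {
        MaxTokensPerParagraph = 1000, // size of each chunk sent for embedding
        MaxTokensPerLine = 300,
        OverlappingTokens = 100,      // overlap between consecutive chunks
    })
    .Build<MemoryServerless>();
```

The same options can be set in appsettings.json when running the service instead of the serverless mode shown here.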

dyardy commented 1 month ago

Note also that when asking any question I received this error:

Azure.RequestFailedException: This model's maximum context length is 8192 tokens. However, your messages resulted in 11815 tokens. Please reduce the length of the messages.
Status: 400 (model_error)
ErrorCode: context_length_exceeded
Content:
{
  "error": {
    "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 11815 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}
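This second error is raised at query time: the prompt Kernel Memory builds (the question plus the retrieved memories) exceeds the chat model's context window. A hedged sketch of capping the prompt, assuming a version that exposes SearchClientConfig and a WithSearchClientConfig builder extension (property names are assumptions and may vary by release):

```csharp
using Microsoft.KernelMemory;

var memory = new KernelMemoryBuilder()
    // ... embedding and text generation setup as before ...
    .WithSearchClientConfig(new SearchClientConfig
    {
        MaxMatchesCount = 10,    // retrieve fewer chunks per question
        MaxAskPromptSize = 6000, // cap the full prompt, leaving headroom under 8192
        AnswerTokens = 1000,     // tokens reserved for the model's answer
    })
    .Build<MemoryServerless>();
```

Lowering MaxMatchesCount alone is often enough, since each extra retrieved chunk adds its full token count to the prompt.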

dyardy commented 1 month ago

I resolved the problem by switching to GPT-4 32k. At first the issue looked related to capturing embeddings, but the limit was actually hit on the question side: once the question (together with the prompt built around it) was converted to tokens, it was too large for the 8k-context model.
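For reference, a minimal sketch of pointing the text-generation side at a larger-context deployment. The endpoint, key, and deployment name are placeholders, and MaxTokenTotal is assumed to be the AzureOpenAIConfig setting this Kernel Memory version uses to declare the model's context size:

```csharp
using Microsoft.KernelMemory;

// Hypothetical Azure OpenAI deployment of a 32k-context GPT-4 model.
var textConfig = new AzureOpenAIConfig
{
    Auth = AzureOpenAIConfig.AuthTypes.APIKey,
    APIKey = "...",
    Endpoint = "https://contoso.openai.azure.com/",
    Deployment = "gpt-4-32k",
    MaxTokenTotal = 32768, // declare the model's context window to Kernel Memory
};

var memory = new KernelMemoryBuilder()
    .WithAzureOpenAITextGeneration(textConfig)
    // ... embedding generation setup unchanged ...
    .Build<MemoryServerless>();
```

With the larger window the same question fits; trimming the retrieved-match count as sketched above achieves a similar effect without changing models.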