getappmap / appmap-js

Client libraries for AppMap

Truncate messages to avoid overflow #2007

Open kgilpin opened 4 hours ago

kgilpin commented 4 hours ago

When we overflow the LLM token limit, truncate the user message and retry.

For a specific example, see here:

https://github.com/getappmap/navie-benchmark/issues/38

--

In the navie package:

github-actions[bot] commented 3 hours ago

Title

Handle LLM Token Overflow by Truncating User Message and Retrying

Problem

When interacting with the LLM, a user message can exceed the model's token limit. The resulting overflow causes errors or failed invocations. The objective is to handle this overflow by truncating the user message and retrying the invocation, ensuring stability and a smooth user experience.

Analysis

The primary challenge is identifying the message that causes the overflow due to exceeding the token limit. Once identified, the message should be truncated to a safe limit and the invocation retried. The solution involves:

  1. Detecting an overflow situation when the LLM interaction fails.
  2. Identifying the longest message (in terms of tokens or characters) from the user's input.
  3. Truncating the identified message to a predefined safe limit, ensuring that the total token count stays within the allowed threshold.
  4. Retrying the invocation with the truncated message.

Given the retry logic already implemented in various parts of the system, notably in the retry.ts utilities, the new functionality for truncating and retrying can follow a similar model.
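Modeled on that retry pattern, the truncate-and-retry flow could look roughly like the sketch below. All names here (`invokeWithTruncation`, the injected `invoke` and `isTokenOverflowError` callbacks, `safeLimitChars`) are hypothetical stand-ins for illustration, not existing functions in `retry.ts` or the navie package:

```typescript
// Hypothetical sketch of a truncate-and-retry wrapper. On a token
// overflow error, the message is shortened and the call is retried;
// any other error, or exhausting the attempts, rethrows.
async function invokeWithTruncation(
  invoke: (message: string) => Promise<string>,
  message: string,
  isTokenOverflowError: (e: unknown) => boolean,
  safeLimitChars: number,
  maxAttempts = 3
): Promise<string> {
  let current = message;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await invoke(current);
    } catch (e) {
      if (!isTokenOverflowError(e) || attempt === maxAttempts) throw e;
      // Halve toward the safe limit on each overflow, keeping the head
      // of the message, which typically carries the user's question.
      current = current.slice(
        0,
        Math.max(safeLimitChars, Math.floor(current.length / 2))
      );
    }
  }
  throw new Error('unreachable');
}
```

Halving rather than cutting straight to the safe limit is a design choice: it preserves as much context as the model will actually accept, at the cost of an extra retry or two.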

Proposed Changes

  1. packages/navie/src/llmInteraction.ts:

    • Implement a method to calculate the token count of a message.
    • Add logic to identify the longest user message (in terms of token count).
    • Implement functionality to truncate the longest message to a safe limit.
    • Introduce an error handling mechanism to catch the token overflow exception, truncate the message, and retry the invocation.
  2. packages/client/src/retryOnError.ts:

    • Include additional error codes or checks to detect LLM token overflow errors.
    • Update the retry mechanism to call the new truncation logic before retrying.
  3. packages/client/src/retryOn503.ts (if necessary):

    • Add similar checks and retry mechanisms to handle LLM token overflow scenarios.
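The helpers proposed for `llmInteraction.ts` above might be sketched as follows. The names and the ~4 characters-per-token ratio are assumptions for illustration; a real implementation would use the model's tokenizer (e.g. tiktoken) rather than a character heuristic:

```typescript
// Hypothetical helpers: estimate token counts, find the longest user
// message, and truncate a message to an approximate token budget.

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Rough estimate: English text averages ~4 characters per token.
function estimateTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Index of the user message with the largest estimated token count,
// or -1 if there are no user messages.
function longestUserMessageIndex(messages: Message[]): number {
  let index = -1;
  let best = -1;
  messages.forEach((m, i) => {
    if (m.role !== 'user') return;
    const tokens = estimateTokenCount(m.content);
    if (tokens > best) {
      best = tokens;
      index = i;
    }
  });
  return index;
}

// Truncate a message's content so it fits within maxTokens (approximate).
function truncateMessage(message: Message, maxTokens: number): Message {
  const maxChars = maxTokens * 4;
  if (message.content.length <= maxChars) return message;
  return { ...message, content: message.content.slice(0, maxChars) };
}
```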

Specific Code Changes

  1. packages/navie/src/llmInteraction.ts:

    • Add a helper function to calculate the token count of a given message.
    • Extend the existing function to locate the longest user message.
    • Extend existing functionality or add new handlers to truncate the message and retry.
  2. packages/client/src/retryOnError.ts:

    • Update current error handling (retryOnError) to detect specific LLM token overflow errors and execute the truncation logic followed by retry.
  3. packages/client/src/retryOn503.ts:

    • If applicable, ensure that 503 errors related to LLM are also managed by retrying post-truncation.
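For the overflow detection in `retryOnError.ts`, a predicate along these lines could work. The matched strings are assumptions about common provider wordings (OpenAI, for instance, reports a `context_length_exceeded` error code); the actual checks should be confirmed against the errors the client library surfaces:

```typescript
// Hypothetical predicate: does this error indicate a context-length
// (token) overflow? Matches common provider error wordings.
function isTokenOverflowError(error: unknown): boolean {
  if (!(error instanceof Error)) return false;
  return /context_length_exceeded|maximum context length|token limit/i.test(
    error.message
  );
}
```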

By integrating the truncation logic within the existing retry handlers and ensuring token overflow errors are gracefully managed, the system's resilience to large inputs will be significantly improved.