microsoft / azurechat

🤖 💼 Azure Chat Solution Accelerator powered by Azure OpenAI Service
MIT License

Improve handling of bad responses (content filtering) #203

Open BeigeBadger opened 1 year ago

BeigeBadger commented 1 year ago

Current state

User: Enters something that gets flagged by the Azure OpenAI Service content filtering
Bot: No response

The backend receives a 400 and doesn't know how to handle it.

Error in handler Handler, handleChainError: Error: Request failed with status code 400 and body {"error":{"message":"The response was filtered due to the prompt triggering Azure OpenAI’s content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766","type":null,"param":"prompt","code":"content_filter","status":400,"innererror":{"code":"ResponsibleAIPolicyViolation","content_filter_result":{"hate":{"filtered":true,"severity":"high"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":true,"severity":"medium"}}}}}
- error node_modules\langchain\dist\util\axios-fetch-adapter.js (351:18) @ createError
log.js:70
- error unhandledRejection: Error: Request failed with status code 400 and body {"error":{"message":"The response was filtered due to the prompt triggering Azure OpenAI’s content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766","type":null,"param":"prompt","code":"content_filter","status":400,"innererror":{"code":"ResponsibleAIPolicyViolation","content_filter_result":{"hate":{"filtered":true,"severity":"high"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":true,"severity":"medium"}}}}}
    at createError (webpack-internal:///(rsc)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:320:19)
    at settle (webpack-internal:///(rsc)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:28:16)
    at eval (webpack-internal:///(rsc)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:162:124)
    at new Promise (<anonymous>)
    at fetchAdapter (webpack-internal:///(rsc)/./node_modules/langchain/dist/util/axios-fetch-adapter.js:157:12)
    at process.processTicksAndRejections (C:\repos\Aware\azurechat\src\lib\internal\process\task_queues.js:95:5)
    at async RetryOperation.eval [as _fn] (webpack-internal:///(rsc)/./node_modules/p-retry/index.js:40:25) {name: 'Error', digest: undefined, stack: 'Error: Request failed with status code 400 an…/(rsc)/./node_modules/p-retry/index.js:40:25)', message: 'Request failed with status code 400 and body…e":{"filtered":true,"severity":"medium"}}}}}', Symbol(NextjsError): 'server'}
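The log above shows that the content-filter 400 is identifiable from the error message: LangChain appends the response body after "and body ", and that body carries `"code":"content_filter"`. A minimal sketch of detecting it (the helper name is illustrative, not part of the accelerator, and the body shape is assumed from the log above):

```typescript
// Detect the content-filter 400 by parsing the JSON error body that
// LangChain appends to the error message.
function isContentFilterError(message: string): boolean {
  const jsonStart = message.indexOf("{");
  if (jsonStart === -1) return false;
  try {
    const body = JSON.parse(message.slice(jsonStart));
    return body?.error?.code === "content_filter";
  } catch {
    return false; // message carried no parseable JSON body
  }
}
```

Matching on the stable `content_filter` code is safer than matching on the human-readable message, which is localizable and contains a curly apostrophe.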

Ideal state

User: Enters something that gets flagged by the Azure OpenAI Service content filtering
Bot: Responds with a generic message explaining that the prompt was blocked by the content filtering policy, instead of silently returning nothing

Attempts

In terms of developing a solution myself, I got as far as identifying the handlers object in chat-simple-api.ts as a likely place to wire into, since it provides methods like handleLLMError, handleToolError, and handleChainError. However, they all return void, and I want to return a StreamingTextResponse carrying a generic message to the user, so I'm stuck on where to go from here.

  const chain = new ConversationChain({
    llm: chat,
    memory,
    prompt: chatPrompt
  });

  handlers.handleLLMError = async (e: Error, runId: string) => {
    // Handle response here
    const errorMessage = e.message;

    console.error(errorMessage);

    // Note: the apostrophe in the logged message is a curly quote (’), so
    // match on a fragment without it (or on the "content_filter" code).
    if (errorMessage.includes("content management policy")) {
      // Do stuff in here
      // A ReadableStream takes an underlying source object, not a string.
      return new StreamingTextResponse(
        new ReadableStream({
          start(controller) {
            controller.enqueue(new TextEncoder().encode("Content filtering policy triggered"));
            controller.close();
          },
        })
      );
    }
    }
    else {
      // Other handling in here
    }
  };

  chain.call({ input: lastHumanMessage.content }, [handlers]);

  return new StreamingTextResponse(stream);
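One way around the void return type is to record the error inside the callback and let the outer route handler pick the response afterwards. A self-contained sketch of that pattern, where the callback shape mirrors LangChain's handleLLMError but everything else (the function and runner names) is a hypothetical stand-in:

```typescript
// Record the error from the void-returning callback, then decide the
// response once the run has settled.
type LLMErrorHandler = (e: Error, runId: string) => Promise<void>;

async function buildResponseAfterRun(
  runner: (onError: LLMErrorHandler) => Promise<void>
): Promise<string> {
  let captured: Error | undefined;
  const onError: LLMErrorHandler = async (e) => {
    captured = e; // must return void, so just remember the error here
  };
  await runner(onError);
  // In the real route this would be where a fallback StreamingTextResponse
  // is built instead of the model stream.
  return captured ? "Content filtering policy triggered" : "streamed response";
}
```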

Wrapping chain.call in a try-catch and explicitly throwing inside the handleLLMError function also doesn't seem to work as my catch block is never hit.

  const chain = new ConversationChain({
    llm: chat,
    memory,
    prompt: chatPrompt
  });

  handlers.handleLLMError = async (e: Error, runId: string) => {
    // Handle response here
    const errorMessage = e.message;

    console.error(errorMessage);

    // Also doesn't work
    // throw e;
    throw new Error(errorMessage);
  };

  try {
    chain.call({ input: lastHumanMessage.content }, [handlers]);
  } catch (e) {
    console.error(e);
  }

  return new StreamingTextResponse(stream);
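The catch block is likely never hit because chain.call is invoked without await, so its rejection escapes the synchronous try/catch and surfaces as the unhandledRejection seen in the log. A minimal, self-contained demonstration of that behavior (all names here are illustrative, not from the accelerator):

```typescript
// A rejection from an un-awaited promise escapes the try/catch;
// an awaited one is caught.
async function mightReject(): Promise<void> {
  throw new Error("Request failed with status code 400");
}

async function withoutAwait(): Promise<string> {
  try {
    void mightReject().catch(() => {}); // not awaited: catch below never fires
  } catch {
    return "caught";
  }
  return "not caught";
}

async function withAwait(): Promise<string> {
  try {
    await mightReject(); // awaited: rejection surfaces inside the try
  } catch {
    return "caught";
  }
  return "not caught";
}
```

The trade-off is that awaiting the full chain.call means the response can no longer stream token by token, so a real fix probably has to catch the rejection on the promise itself rather than await it.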
BeigeBadger commented 1 year ago

It looks like this is being tracked in an issue in the langchain-js repo.