langchain4j / langchain4j

Java version of LangChain
https://docs.langchain4j.dev
Apache License 2.0

[FEATURE] For "Responsible AI" with Azure AI models, throw an exception instead of returning an error message #1822

Open jdubois opened 1 month ago

jdubois commented 1 month ago

Currently, when we hit a "Responsible AI" error (e.g. violence, sexual content, etc.), we return a message containing the error:

https://github.com/langchain4j/langchain4j/blob/29299124489179aca7391209ec3b8ae0b9cce28e/langchain4j-azure-open-ai/src/main/java/dev/langchain4j/model/azure/AzureOpenAiChatModel.java#L318

There is feedback asking us to throw an exception instead of returning an error message in the field where the user would expect LLM-generated text.

This was noted in #1807 for GitHub Models, which work the same way: let's discuss this in this general ticket, and then apply the outcome to Azure OpenAI, GitHub Models, and possibly more providers.
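To illustrate the proposal, here is a minimal, self-contained sketch of the "throw instead of return" idea. The names `ContentFilterException`, `FinishReason`, and `extractText` are illustrative assumptions, not the actual langchain4j or Azure SDK API:

```java
// Hypothetical sketch: when the model's finish reason indicates a
// Responsible AI content filter, throw an exception rather than
// returning the error text where generated content is expected.
public class ContentFilterSketch {

    // Illustrative exception type carrying the filter details.
    static class ContentFilterException extends RuntimeException {
        ContentFilterException(String details) {
            super(details);
        }
    }

    // Mimics the finish reason a chat completion response might carry.
    enum FinishReason { STOP, CONTENT_FILTER }

    // Returns the generated text, or throws if the response was filtered.
    static String extractText(FinishReason reason, String content) {
        if (reason == FinishReason.CONTENT_FILTER) {
            throw new ContentFilterException(
                    "Response blocked by Responsible AI content filter: " + content);
        }
        return content;
    }

    public static void main(String[] args) {
        // Normal completion: text flows through unchanged.
        System.out.println(extractText(FinishReason.STOP, "Hello!"));

        // Filtered completion: the caller can catch and handle it explicitly.
        try {
            extractText(FinishReason.CONTENT_FILTER, "violence");
        } catch (ContentFilterException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With this shape, callers can distinguish "the model answered" from "the model was blocked" via normal exception handling instead of string inspection.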

langchain4j-github-bot[bot] commented 1 month ago

/cc @agoncal (azure)

langchain4j commented 1 month ago

Please note that this is related to general error handling, not only to the Responsible AI case.

langchain4j commented 1 month ago

Perhaps we could add a "refusal" field to AiMessage with details about the refusal, similar to what OpenAI did for Structured Outputs.
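A minimal sketch of that alternative: an AiMessage-like type that carries either generated text or refusal details, so no exception is needed and the caller inspects the message. The record shape and method names here are assumptions for illustration, not the existing langchain4j AiMessage API:

```java
// Hypothetical sketch of the proposed "refusal" field on an AI message,
// loosely modeled on OpenAI's Structured Outputs refusals.
public class RefusalFieldSketch {

    // Either text or refusal is set, never both.
    record AiMessage(String text, String refusal) {
        static AiMessage from(String text) {
            return new AiMessage(text, null);
        }
        static AiMessage refused(String refusalDetails) {
            return new AiMessage(null, refusalDetails);
        }
        boolean isRefusal() {
            return refusal != null;
        }
    }

    public static void main(String[] args) {
        AiMessage ok = AiMessage.from("Here is your answer.");
        AiMessage blocked = AiMessage.refused("Content filtered: violence");

        // Callers branch on isRefusal() instead of parsing error strings.
        System.out.println(ok.isRefusal());       // normal answer
        System.out.println(blocked.isRefusal());  // refusal with details
        System.out.println(blocked.refusal());
    }
}
```

Compared to throwing an exception, this keeps refusals on the normal return path, which may suit streaming and declarative AI Services better, at the cost of every caller having to check the field.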