MaartenGr / BERTopic

Leveraging BERT and c-TF-IDF to create easily interpretable topics.
https://maartengr.github.io/BERTopic/
MIT License

Handle Responsible AI scenarios for OpenAI #1979

Open steven-solomon opened 1 month ago

steven-solomon commented 1 month ago

High Level Proposal

I would like to extend representation.OpenAI to handle more scenarios that crop up when using OpenAI hosted on Azure. These scenarios affect usage of the AzureOpenAI class: Azure places its OpenAI instances behind Responsible AI (RAI) controls, which reduce harm by limiting access to sensitive content (violence, hate, self-harm, sexual) but require more edge cases and errors to be accounted for.

I believe that handling more RAI-based scenarios will improve the usability of this representation model for folks who use either AzureOpenAI or OpenAI.

Technical Details

RAI triggers can be encountered in two scenarios:

  1. Sending inappropriate input
  2. GPT generating sensitive output based on the prompt
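
These two scenarios surface differently in the openai Python client: a rejected prompt fails the request outright (Azure returns an error whose code is content_filter), while a filtered completion comes back as a normal response with finish_reason set to "content_filter". A minimal sketch of that distinction; `classify_rai_outcome` and its return values are hypothetical names used only for illustration, not part of BERTopic or the openai package:

```python
def classify_rai_outcome(error_code=None, finish_reason=None):
    """Hypothetical helper: map an Azure OpenAI result to the RAI scenario.

    error_code: the `code` from a BadRequestError, if the request was
        rejected outright (scenario 1: inappropriate input).
    finish_reason: `choices[0].finish_reason` from a successful response
        (scenario 2: sensitive output suppressed by the filter).
    """
    if error_code == "content_filter":
        return "input_filtered"   # prompt never reached the model
    if finish_reason == "content_filter":
        return "output_filtered"  # model replied, but output was suppressed
    return "ok"
```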


MaartenGr commented 1 month ago

Thanks for sharing this issue! Just to be sure I understand correctly, you want to implement handling for most finish_reason values so that the user can understand why the output was empty/truncated/missing, right?

Sounds great! Additional information during inference would be more than welcome considering users have struggled with missing output in the past.

Handle all four CompletionChoice.finish_reason values

Open question: How do you want to handle the length, function_call, and null finish_reasons?

I don't think at the moment we need to do anything with function_call since BERTopic does not make use of it and the models generally follow instructions quite well.

I believe length can be logged similarly to content_filter, since both affect the output. Here, we can point users to the truncation options available in BERTopic to prevent these issues. With length, we might need to append "incomplete output due to..." or something similar to the label if it happens to be truncated. If the output is empty, we can either leave it empty and log it, or create a label that says something like "incomplete output due to...".

It seems that null should only happen when you access the API while a response is still in progress, which should generally not happen unless the user was already running some process, right? Having said that, logging it like length and content_filter seems like a possible solution.

Log repr_doc_ids which caused sensitive responses

Agreed, and since BERTopic does not pass all documents to the API I would not expect excessive logging.
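
A sketch of what that logging could look like; the function and parameter names here are hypothetical, chosen only to illustrate the idea:

```python
import logging

logger = logging.getLogger("BERTopic")

def log_filtered_repr_docs(topic, repr_doc_ids, finish_reason):
    """Hypothetical sketch: surface which representative documents led to a
    filtered response so the user can inspect (or exclude) them."""
    if finish_reason != "content_filter":
        return []
    logger.warning(
        "Topic %s: representative documents %s triggered the content filter",
        topic, repr_doc_ids,
    )
    return list(repr_doc_ids)
```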

All in all, sounds good. Looking forward to this!

steven-solomon commented 1 month ago

Thanks for sharing this issue! Just to be sure I understand correctly, you want to implement handling for most finish_reason values so that the user can understand why the output was empty/truncated/missing, right?

Yes, that is correct.

@MaartenGr, thanks for your feedback

My plan for finish_reason modifications will be as follows:

I'll have a draft shortly to collect your feedback.

On the code design front, I am tempted to extract a function so I can add some unit tests around this handling logic. What are your thoughts on that?