**Open** · steven-solomon opened this issue 6 months ago
Thanks for sharing this issue! Just to be sure I understand correctly, you want to implement a way to handle most `finish_reason`s as a way for the user to understand why the output was empty/truncated/missing, right?

Sounds great! Additional information during inference would be more than welcome considering users have struggled with missing output in the past.
> **Handle all four `CompletionChoice.finish_reason`s** — Open question: How do you want to handle the `length`, `function_call`, and `null` finish_reasons?
I don't think we need to do anything with `function_call` at the moment, since BERTopic does not make use of it and the models generally follow instructions quite well.

I believe `length` can be logged similar to `content_filter`, since both do something with the output. Here, we can mention using the truncation options available in BERTopic to prevent these issues. With `length`, we might need to add "incomplete output due to..." or something similar to the label if it happens to be truncated. If it is empty, we can either leave it empty and log it or create a label that says something like "incomplete output due to...".

It seems that `null` should only happen when you try to access the API response while it is still running, which should generally not happen unless the user was already running some process, right? Having said that, logging it like `length` and `content_filter` seems like a possible solution.
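For concreteness, here is a minimal sketch of the logging approach described above. The helper name `_handle_finish_reason` and its arguments are hypothetical, not BERTopic's actual API; the truncation hint refers to the `doc_length`/`tokenizer` options of `bertopic.representation.OpenAI`.

```python
import logging

logger = logging.getLogger(__name__)


def _handle_finish_reason(label: str, finish_reason: str, repr_doc_ids: list) -> str:
    """Hypothetical helper: map a finish_reason to a (possibly annotated) label."""
    if finish_reason == "stop":
        return label  # normal completion, pass the label through unchanged
    if finish_reason == "length":
        logger.warning(
            "Label truncated (finish_reason='length') for documents %s; "
            "consider BERTopic's truncation options (e.g., doc_length/tokenizer).",
            repr_doc_ids,
        )
        return label if label else "Incomplete output due to token limit"
    if finish_reason == "content_filter":
        logger.warning(
            "Output withheld by the content filter for documents %s.", repr_doc_ids
        )
        return label if label else "Output omitted due to content filter"
    # `null` / `function_call` are unexpected in this flow; log and continue
    logger.warning(
        "Unexpected finish_reason %r for documents %s.", finish_reason, repr_doc_ids
    )
    return label
```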
> **Log `repr_doc_ids` which caused sensitive responses**
Agreed, and since BERTopic does not pass all documents to the API I would not expect excessive logging.
All in all, sounds good. Looking forward to this!
> Thanks for sharing this issue! Just to be sure I understand correctly, you want to implement a way to handle most `finish_reason`s as a way for the user to understand why the output was empty/truncated/missing, right?
Yes, that is correct.
@MaartenGr, thanks for your feedback!

My plan for the `finish_reason` modifications will be as follows:

- `stop` in the successful case
- `repr_doc_ids` and finish reason for `content_filter` and `length`
- `null` and `function_call` cases

I'll have a draft shortly to collect your feedback.
On the code design front, I am tempted to extract a function so I can add some unit tests around this handling logic. What are your thoughts on that?
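As a rough illustration of that idea, the extracted helper could be unit-tested along these lines (the `_handle_finish_reason` name and behavior are still the hypothetical ones from the sketch above):

```python
import unittest

# Assumes the hypothetical _handle_finish_reason from the earlier sketch is in scope.


class TestHandleFinishReason(unittest.TestCase):
    def test_stop_passes_label_through(self):
        self.assertEqual(_handle_finish_reason("My topic", "stop", []), "My topic")

    def test_length_with_empty_label_gets_placeholder(self):
        label = _handle_finish_reason("", "length", repr_doc_ids=[3, 7])
        self.assertIn("Incomplete output", label)

    def test_content_filter_with_empty_label_gets_placeholder(self):
        label = _handle_finish_reason("", "content_filter", repr_doc_ids=[3, 7])
        self.assertIn("content filter", label)


if __name__ == "__main__":
    unittest.main()
```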
## High Level Proposal
I would like to extend `representation.OpenAI` to handle more scenarios that crop up when using OpenAI hosted by Azure. These scenarios affect usage of the `AzureOpenAI` class. Azure places its OpenAI instance behind Responsible AI (RAI) controls, which reduce harm by limiting access to sensitive (violence, hate, self-harm, sexual) content but require more edge cases and errors to be accounted for.

I believe that handling more RAI-based scenarios for OpenAI will improve the usability of that representation model for both folks who use `AzureOpenAI` and `OpenAI`.
## Technical Details
RAI triggers can be encountered in two scenarios:

1. Sending inappropriate input: `repr_doc_ids` within `_extract_representative_docs` cause an exception to be raised
2. GPT generating sensitive output based on the prompt

To cover these, the plan is to:

- Handle all four `CompletionChoice.finish_reason`s:
  - the `length`, `function_call`, and `null` finish_reasons
  - the `stop` finish_reasons
- Log the `repr_doc_ids` which caused sensitive responses
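To make the two scenarios concrete, here is a hedged sketch using the `openai` v1 client. The endpoint, key, and deployment name are placeholders, and the exact error surface is an assumption: Azure typically rejects filtered *input* with a 400 `BadRequestError`, while filtered *output* shows up as `finish_reason == "content_filter"`. This is not the actual `representation.OpenAI` code path.

```python
import openai

# Placeholder credentials/deployment; real values depend on the Azure resource.
client = openai.AzureOpenAI(
    azure_endpoint="https://<resource>.openai.azure.com",
    api_key="<key>",
    api_version="2024-02-01",
)


def label_topic(prompt: str, repr_doc_ids: list) -> str:
    """Sketch of both RAI scenarios for generating a topic label."""
    try:
        response = client.chat.completions.create(
            model="<deployment-name>",
            messages=[{"role": "user", "content": prompt}],
        )
    except openai.BadRequestError as exc:
        # Scenario 1: the *input* tripped the RAI filter; log the offending docs.
        print(f"Input rejected for repr_doc_ids={repr_doc_ids}: {exc}")
        return "Input rejected by content filter"

    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # Scenario 2: the *output* was filtered during generation.
        print(f"Output filtered for repr_doc_ids={repr_doc_ids}")
        return "Output omitted due to content filter"
    return choice.message.content or ""
```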