openai / openai-python

The official Python library for the OpenAI API
https://pypi.org/project/openai/
Apache License 2.0

Why are logprobs and log_probs not permitted in client.chat.completions.create for AzureOpenAI? #1468

Open r-pathak opened 1 month ago

r-pathak commented 1 month ago

Confirm this is an issue with the Python library and not an underlying OpenAI API

Describe the bug

I'm using AzureOpenAI on API version '2024-02-01'.

When using client.chat.completions.create to perform RAG with an Azure OpenAI gpt-4-1106-preview deployment against an Azure AI Search index (via the extra_body -> data_sources parameter), I keep receiving one of these errors: TypeError: create() got an unexpected keyword argument 'log_probs' or Error code: 400 - {'error': {'requestid': '5b8df334-1238-4b66-8cb8-564ffbe02cff', 'code': 400, 'message': 'Validation error at #/logprobs: Extra inputs are not permitted'}}

I read here that I should be able to pass 'log_probs' in the completions method to see logprobs populated in the response: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions

Yet it seems despite switching API versions I simply can't achieve this. Is there support for this, or am I doing something wrong?
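For concreteness, the failing call looks roughly like the sketch below; the deployment, search endpoint, and index names are placeholders, not values from this report. Building the keyword arguments as a dict makes it clear which parameter the service rejects:

```python
# Sketch of the arguments passed to client.chat.completions.create when using
# Azure OpenAI "on your data" (data_sources via extra_body) together with logprobs.
# Placeholder values: deployment name, search endpoint, and index name.

def build_chat_kwargs(deployment, search_endpoint, index_name, messages):
    return {
        "model": deployment,  # Azure deployment name, e.g. a gpt-4-1106-preview deployment
        "messages": messages,
        "logprobs": True,      # rejected with a 400 on API version 2024-02-01
        "top_logprobs": 2,
        "extra_body": {
            "data_sources": [
                {
                    "type": "azure_search",
                    "parameters": {
                        "endpoint": search_endpoint,
                        "index_name": index_name,
                    },
                }
            ]
        },
    }

kwargs = build_chat_kwargs(
    "my-gpt-4-deployment",
    "https://my-search.search.windows.net",
    "my-index",
    [{"role": "user", "content": "What does the handbook say about leave?"}],
)
# Passing these kwargs on API version 2024-02-01 produced:
#   Error code: 400 ... 'Validation error at #/logprobs: Extra inputs are not permitted'
```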

To Reproduce

  1. Use client.chat.completions.create with gpt-4 hosted on Azure
  2. Attempt to use the logprobs or log_probs parameter

Code snippets

No response

OS

macOS

Python version

Python v3.9

Library version

openai 1.23.2

wangyuantao commented 1 month ago

RAG has a feature gap here. Why do you want log_probs for RAG? Model distillation? Getting an answer confidence score?

r-pathak commented 1 month ago

Hi,

Thanks for the response, I thought that might be the case.

You said it - we want to extract confidence scores from each of our answers.

Is there another best-practice way for now? Ideally one that doesn't involve another LLM request, as we want to minimise cost and latency.
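For what it's worth, once logprobs are available, a rough per-answer confidence can be derived directly from the returned token log probabilities. A minimal sketch, noting that the geometric-mean scoring choice here is one common heuristic, not an OpenAI recommendation:

```python
import math

def answer_confidence(token_logprobs):
    """Geometric-mean probability of the generated tokens.

    token_logprobs: list of natural-log probabilities, one per token,
    e.g. taken from choice.logprobs.content[i].logprob in the response.
    """
    if not token_logprobs:
        return 0.0
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_logprob)

# Example: three tokens with probabilities ~0.9, ~0.8, ~0.95
scores = [math.log(0.9), math.log(0.8), math.log(0.95)]
print(round(answer_confidence(scores), 3))
```

This adds no extra LLM request: the score is computed from the same response that carried the answer.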

r-pathak commented 1 month ago

I've seen on the link below that there is a workaround in typescript to include logprobs in the parameters - is there a similar workaround for the Python SDK?

https://github.com/Azure/azure-sdk-for-js/issues/29199

kristapratico commented 1 month ago

@r-pathak there is no workaround in the client library -- it's the Azure OpenAI service itself that does not support logprobs with On Your Data.

Not a confidence score, but I think the closest thing that's currently available is the strictness parameter, see this comment.

gabrielchua commented 5 days ago

logprobs and top_logprobs now seem to be supported as of the 2024-06-01 API version

Source: https://learn.microsoft.com/en-us/azure/ai-services/openai/whats-new#july-2024
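If that's the case, bumping the client's api_version should be enough. A hedged sketch (the endpoint/key environment variables and the deployment name are placeholders, not from this thread; the request is only sent when credentials are present):

```python
import os

def query_with_logprobs(prompt):
    # Imported lazily so the sketch stays importable without the package installed.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",  # version that reportedly adds logprobs support
    )
    return client.chat.completions.create(
        model="my-gpt-4-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
        logprobs=True,
        top_logprobs=2,
    )

if "AZURE_OPENAI_API_KEY" in os.environ:
    response = query_with_logprobs("Hello")
    # Per-token log probabilities, if the service returns them:
    for tok in response.choices[0].logprobs.content:
        print(tok.token, tok.logprob)
```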