This PR adds support for a custom HTTP client, default headers, and default query parameters when using OpenAITextGenerator.
Why is this change required?
We want to pass custom headers to our internal proxy layer, which is built on top of AzureOpenAI, for internal audit and metrics. This is currently not supported in llmx and, consequently, not available via lida. This change adds support for a custom http_client along with default_headers and default_query parameters.
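For reviewers, a minimal sketch of what the wiring amounts to: the three new parameters map directly onto kwargs that the openai v1 SDK's AzureOpenAI client already accepts. The build_client helper below is illustrative only, not the actual code in this PR:

import httpx
from openai import AzureOpenAI
from typing import Dict, Optional

def build_client(
    azure_endpoint: str,
    api_key: str,
    api_version: str,
    http_client: Optional[httpx.Client] = None,
    default_headers: Optional[Dict[str, str]] = None,
    default_query: Optional[Dict[str, str]] = None,
) -> AzureOpenAI:
    # The openai SDK attaches default_headers/default_query to every request
    # and uses http_client as the underlying transport when one is provided.
    return AzureOpenAI(
        azure_endpoint=azure_endpoint,
        api_key=api_key,
        api_version=api_version,
        http_client=http_client,
        default_headers=default_headers,
        default_query=default_query,
    )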
How is this tested?
We have done internal testing with our proxy layer. Sample code:
import os

import httpx
from llmx import llm, TextGenerationConfig

# Route requests through an httpx client that injects the audit header.
headers = {"X-Custom-Header": "Custom-Val"}
client = httpx.Client(headers=headers)

api_key = os.environ["AZURE_OPENAI_API_KEY"]  # assumed to be set in the environment
llm_inst = llm(provider="openai", api_type="azure",
               azure_endpoint="https://openaiproxy.prod.walmart.com",
               api_key=api_key, api_version="2024-02-01",
               model="gpt-35-turbo", http_client=client)
config = TextGenerationConfig(n=1, temperature=0.2, max_tokens=100)
msgs = [
    {"role": "system", "content": "You are a helpful assistant that can explain concepts clearly to a 6 year old child."},
    {"role": "user", "content": "What is gravity?"},
]
response = llm_inst.generate(messages=msgs, config=config)
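The new default_headers and default_query parameters offer an alternative to constructing an httpx.Client by hand. A hedged variant of the call above, where the query value "lida-proxy" is purely illustrative:

llm_inst = llm(provider="openai", api_type="azure",
               azure_endpoint="https://openaiproxy.prod.walmart.com",
               api_key=api_key, api_version="2024-02-01", model="gpt-35-turbo",
               default_headers={"X-Custom-Header": "Custom-Val"},
               default_query={"caller": "lida-proxy"})  # illustrative query param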