victordibia / llmx

An API for Chat Fine-Tuned Large Language Models (llm)
MIT License

Added support for http_client, extra_headers and extra_query in openai #23

Open daichi-m opened 3 months ago

daichi-m commented 3 months ago

This PR adds support for a custom HTTP client, default headers, and default query parameters when using OpenAITextGenerator.

Why is this change required

We want to pass custom headers to our internal proxy layer, which is built on top of AzureOpenAI, for internal audit and metrics. This is currently not supported in llmx and, consequently, not available via lida. This change adds support for a custom http_client along with additional default_headers and default_query parameters.
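As a rough sketch of the forwarding logic (the helper name below is hypothetical, not the actual code in this PR): the new parameters can be collected into the keyword arguments passed to the underlying openai client constructor, since openai.OpenAI and openai.AzureOpenAI (openai>=1.0) already accept http_client, default_headers, and default_query.

```python
from typing import Any, Dict, Optional


def build_client_kwargs(
    api_key: str,
    azure_endpoint: Optional[str] = None,
    api_version: Optional[str] = None,
    http_client: Optional[Any] = None,
    default_headers: Optional[Dict[str, str]] = None,
    default_query: Optional[Dict[str, str]] = None,
) -> Dict[str, Any]:
    """Collect only the arguments the caller actually supplied, so the
    openai client constructor (openai.OpenAI / openai.AzureOpenAI)
    never receives explicit None values. Helper name is hypothetical."""
    kwargs: Dict[str, Any] = {"api_key": api_key}
    if azure_endpoint is not None:
        kwargs["azure_endpoint"] = azure_endpoint
    if api_version is not None:
        kwargs["api_version"] = api_version
    if http_client is not None:
        kwargs["http_client"] = http_client
    if default_headers is not None:
        kwargs["default_headers"] = default_headers
    if default_query is not None:
        kwargs["default_query"] = default_query
    return kwargs


# A generator would then construct its client roughly as:
# client = openai.AzureOpenAI(**build_client_kwargs(...))
```

Only supplied parameters are forwarded, so callers who do not use the new options see no behavior change.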

How is this tested

We have tested this internally against our proxy layer. Sample code:

import httpx
from llmx import llm, TextGenerationConfig

# Custom HTTP client carrying the headers our proxy layer expects
headers = {"X-Custom-Header": "Custom-Val"}
client = httpx.Client(headers=headers)

llm_inst = llm(
    provider="openai",
    api_type="azure",
    azure_endpoint="https://openaiproxy.prod.walmart.com",
    api_key=api_key,
    api_version="2024-02-01",
    model="gpt-35-turbo",
    http_client=client,
)
config = TextGenerationConfig(n=1, temperature=0.2, max_tokens=100)

msgs = [
    {"role": "system", "content": "You are a helpful assistant that can explain concepts clearly to a 6 year old child."},
    {"role": "user", "content": "What is gravity?"},
]
response = llm_inst.generate(messages=msgs, config=config)
daichi-m commented 2 months ago

@victordibia Can this be reviewed and merged?

daichi-m commented 2 months ago

@victordibia Can we move forward with this?

prati04 commented 1 month ago

@victordibia Can we merge these changes?