haesleinhuepf opened this issue 2 hours ago
git-bob think about how you would do this.
This message was generated by git-bob (version: 0.2.7, model: github_models:gpt-4o), an experimental AI-based assistant. It can make mistakes and has limitations. Check its messages carefully.
To address the issue, we can modify the existing `prompt_azure` function in `_endpoints.py` and update `requirements.txt` to include the `azure-ai-inference` dependency.

In `src/git_bob/_endpoints.py`:
```python
def prompt_azure(message: str, model="gpt-4o"):
    """A prompt helper function that sends a message to Azure's OpenAI or other LLM
    and returns only the text response.
    """
    import os

    # New Azure-based client
    if "gpt" not in model:
        from azure.ai.inference import ChatCompletionsClient
        from azure.ai.inference.models import SystemMessage, UserMessage
        from azure.core.credentials import AzureKeyCredential

        endpoint = "https://models.inference.ai.azure.com"
        token = os.environ["GITHUB_TOKEN"]

        client = ChatCompletionsClient(
            endpoint=endpoint,
            credential=AzureKeyCredential(token),
        )

        if isinstance(message, str):
            message = [UserMessage(content=message)]

        response = client.complete(
            messages=[
                SystemMessage(content="You are a helpful assistant."),
                *message,
            ],
            temperature=1.0,
            top_p=1.0,
            max_tokens=1000,
            model=model
        )
        return response.choices[0].message.content
    else:
        from openai import OpenAI

        model = model.replace("github_models:", "")

        if isinstance(message, str):
            message = [{"role": "user", "content": message}]

        token = os.environ["GH_MODELS_API_KEY"]
        endpoint = "https://models.inference.ai.azure.com"

        client = OpenAI(
            base_url=endpoint,
            api_key=token,
        )

        response = client.chat.completions.create(
            model=model,
            messages=message,
            temperature=1.0,
            top_p=1.0,
            max_tokens=1000
        )
        return response.choices[0].message.content
```
In `requirements.txt`:
```
anthropic
openai
google-generativeai
pygithub
toolz
azure-ai-inference
```
This should provide support for more Azure-based models as specified.
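For illustration, calling the extended function with a non-gpt model would then look roughly like this (the model name is only a placeholder; any identifier without "gpt" in it would take the new `azure-ai-inference` path):

```python
import os
from git_bob._endpoints import prompt_azure

# The non-gpt branch above reads GITHUB_TOKEN, so it needs to be set beforehand.
# "Phi-3.5-mini-instruct" is just a placeholder model name for this sketch.
os.environ.setdefault("GITHUB_TOKEN", "<your-github-token>")

print(prompt_azure("What is a pull request?", model="Phi-3.5-mini-instruct"))
```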
Let's shorten this code and keep `token = os.environ["GH_MODELS_API_KEY"]` and also `endpoint = ...` outside the if-else block, as it should be the same for both cases. Also, the return statement is the same in both cases and we can remove the code duplication here.
git-bob think about this
This message was generated by git-bob (version: 0.2.7, model: github_models:gpt-4o), an experimental AI-based assistant. It can make mistakes and has limitations. Check its messages carefully.
To address the issue and improve upon the previous suggestions, we can refactor the `prompt_azure` function to minimize code duplication and handle more Azure-based models. Here's the proposed change:

In `src/git_bob/_endpoints.py`:
```python
def prompt_azure(message: str, model="gpt-4o"):
    """A prompt helper function that sends a message to Azure's OpenAI or other LLM
    and returns only the text response.
    """
    import os

    token = os.environ["GH_MODELS_API_KEY"]
    endpoint = "https://models.inference.ai.azure.com"

    if "gpt" not in model:
        from azure.ai.inference import ChatCompletionsClient
        from azure.ai.inference.models import SystemMessage, UserMessage
        from azure.core.credentials import AzureKeyCredential

        client = ChatCompletionsClient(
            endpoint=endpoint,
            credential=AzureKeyCredential(token),
        )

        if isinstance(message, str):
            message = [UserMessage(content=message)]

        response = client.complete(
            messages=[
                SystemMessage(content="You are a helpful assistant."),
                *message,
            ],
            temperature=1.0,
            top_p=1.0,
            max_tokens=1000,
            model=model
        )
    else:
        from openai import OpenAI

        model = model.replace("github_models:", "")

        if isinstance(message, str):
            message = [{"role": "user", "content": message}]

        client = OpenAI(
            base_url=endpoint,
            api_key=token,
        )

        response = client.chat.completions.create(
            model=model,
            messages=message,
            temperature=1.0,
            top_p=1.0,
            max_tokens=1000
        )

    return response.choices[0].message.content
```
In `requirements.txt`:
```
anthropic
openai
google-generativeai
pygithub
toolz
azure-ai-inference
```
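To sanity-check the new branch without calling the real API, a rough, hypothetical test could patch the client; the test name and model identifier below are made up and not part of the existing test suite:

```python
import os
from unittest.mock import MagicMock, patch

def test_prompt_azure_uses_azure_ai_inference_for_non_gpt_models():
    os.environ["GH_MODELS_API_KEY"] = "dummy-token"

    # Fake response object mimicking response.choices[0].message.content
    fake_response = MagicMock()
    fake_response.choices[0].message.content = "mocked answer"

    # Patch the client class so no network request is made.
    with patch("azure.ai.inference.ChatCompletionsClient") as mock_client_cls:
        mock_client_cls.return_value.complete.return_value = fake_response

        from git_bob._endpoints import prompt_azure
        result = prompt_azure("Hello", model="Phi-3.5-mini-instruct")

    assert result == "mocked answer"
    mock_client_cls.assert_called_once()
```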
Also, the line `model = model.replace("github_models:", "")` should be before the if...

git-bob implement it like this!
In `_endpoints.py`, there is a function that works perfectly well for the gpt-4o model. However, for other models, we need to modify the code. In case the model name does not contain "gpt", we should do approximately the following:
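Presumably something close to the standard `azure-ai-inference` sample for GitHub Models, along these lines (model name and prompt are placeholders):

```python
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub Models endpoint, authenticated with a GitHub token.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?"),
    ],
    model="Phi-3.5-mini-instruct",  # placeholder; any non-gpt model from the catalogue
    temperature=1.0,
    top_p=1.0,
    max_tokens=1000,
)
print(response.choices[0].message.content)
```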
Furthermore, we need to add `azure-ai-inference` to the dependencies of the project.