BytesByJay opened this issue 1 year ago
@DhananjayanOnline Yes, I think it'd be pretty easy to replace, as the LlamaIndex framework does have an implementation of the generic LLM interface for Azure's OpenAI service; see the LlamaIndex docs on how to set this up.
I think the main place where you'd need to make changes in the codebase is backend/app/chat/engine.py. Specifically, here & here.
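For anyone attempting this, a rough sketch of what the swap might look like (the deployment names, endpoint, key, and API version below are all placeholders, and the exact import paths and constructor arguments depend on your llama_index / openai versions):

    from llama_index.llms import AzureOpenAI
    from llama_index.embeddings import AzureOpenAIEmbedding

    # Sketch only: every "<...>" value is a placeholder for your own Azure
    # resource, deployments, and key.
    llm = AzureOpenAI(
        model="gpt-35-turbo",
        engine="<your-chat-deployment-name>",  # the Azure deployment name, not the model name
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-azure-api-key>",
        api_version="2023-07-01-preview",
    )

    embedding_model = AzureOpenAIEmbedding(
        model="text-embedding-ada-002",
        deployment_name="<your-embedding-deployment-name>",
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-azure-api-key>",
        api_version="2023-07-01-preview",
    )

These two objects would then stand in for the OpenAI llm and embedding model that engine.py currently constructs.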
@sourabhdesai I've made the changes in the code as per your previous suggestion, but I'm encountering a response that says, 'Sorry, I either couldn't comprehend your question or I don't have an answer for it.' It appears that the engine is returning an empty response.
I'm experiencing the same issue and the same behavior when switching to AzureOpenAI. Looking at the code, I can see that the verification of function-call support is here, but I'm not sure why this is happening.
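If it helps with debugging: in llama_index, each LLM implementation reports whether it supports function calling through its metadata, so you can check directly what your AzureOpenAI instance claims (a one-line diagnostic, assuming an already-constructed llm like the sketch above):

    # The chat engine's function-calling path depends on this flag; if it is
    # False for your deployment, a fallback/empty answer is plausible.
    print(llm.metadata.is_function_calling_model)

For a gpt-35-turbo or gpt-4 deployment this should print True.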
@sourabhdesai I am currently facing an error when using the AzureOpenAI library. The error message I am receiving is as follows:
Traceback (most recent call last):
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai.py", line 166, in get_embeddings
    data = openai.Embedding.create(input=list_of_text, model=engine, **kwargs).data
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 151, in create
    ) = cls.__prepare_create_request(
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jay/.cache/pypoetry/virtualenvs/llama-app-backend-D3oLmLlb-py3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 85, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>
I've made the necessary changes in the code as per your previous suggestion. If you have any ideas on how to fix this issue, or potential workarounds, feel free to mention them here.
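For what it's worth, that traceback comes from the legacy (pre-1.0) openai SDK: when api_type is set to "azure", Embedding.create must receive the deployment name via engine (or deployment_id) rather than model. A minimal sketch of a call that avoids the error (resource name, key, and deployment name are placeholders):

    import openai

    # Legacy openai<1.0 Azure configuration; all "<...>" values are placeholders.
    openai.api_type = "azure"
    openai.api_base = "https://<your-resource>.openai.azure.com/"
    openai.api_version = "2023-05-15"
    openai.api_key = "<your-azure-api-key>"

    # With api_type="azure", the deployment must be passed as `engine` (or
    # `deployment_id`); passing only `model` raises the InvalidRequestError above.
    response = openai.Embedding.create(
        input=["hello world"],
        engine="<your-embedding-deployment-name>",
    )
    print(len(response["data"][0]["embedding"]))  # 1536 for text-embedding-ada-002

In LlamaIndex terms, that usually means constructing the embedding model with the Azure deployment name (e.g. the deployment_name argument in the earlier sketch) rather than with model alone.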
I am also facing the same issue with empty responses when using the AzureOpenAI class. I've replaced both the llm and embedding_model classes, and I get the same behavior that @DhananjayanOnline describes.
I've confirmed that my parameters are correct: valid embeddings are being generated, and I can get valid responses by calling chat_llm.complete(). I'm wondering if this behavior is specific to AzureOpenAI + async + streaming=True?
Has anyone had success with AzureOpenAI so far?
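One way to narrow this down is to exercise the async streaming path in isolation, outside the chat engine (a small diagnostic sketch; llm is an AzureOpenAI instance constructed as in the earlier sketch, with placeholder credentials):

    import asyncio

    from llama_index.llms import ChatMessage

    async def check_streaming(llm) -> None:
        # Calls the same astream_chat path the chat engine relies on.
        stream = await llm.astream_chat(
            [ChatMessage(role="user", content="Say hello in one word.")]
        )
        async for chunk in stream:
            print(chunk.delta, end="", flush=True)
        print()

    # asyncio.run(check_streaming(llm))

If tokens print here but the app still returns the fallback message, that would point at the function-calling/agent layer rather than at streaming itself.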