microsoft / autogen

A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
https://microsoft.github.io/autogen/
Creative Commons Attribution 4.0 International

[Bug]: Autogen Studio App does not use the given baseurl #2903

Open JaloWalt opened 2 weeks ago

JaloWalt commented 2 weeks ago

Describe the bug

For example with fireworks.ai model: FireFunction V1

Ref: https://fireworks.ai/models/fireworks/firefunction-v1

The base URL is given as: https://api.fireworks.ai/inference/v1/chat/completions

That would then result in the following settings in the AutoGen Studio model specification:

Model name: accounts/fireworks/models/firefunction-v1

Api Key: your-fireworks-ai-secret-api-key

Base url: https://api.fireworks.ai/inference/v1/chat/completions

Api Type: empty

Api version: empty

Description: fireworks firefunction-v1

When testing this setup with the "test-model" button, the result is: => Error occurred while testing model: Error code: 404 - {'detail': 'Not Found'}

The log shows:

File "\site-packages\openai_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'detail': 'Not Found'}

However when the [Base url] is changed to: https://api.fireworks.ai/inference/v1

Then the result (with the same test) is: Model tested successfully

Therefore the conclusion is: AutoGen Studio does not respect the base URL setting as published by API inference providers.

This is a serious BUG.
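A likely explanation (an assumption on my part, not confirmed from the AutoGen Studio source): OpenAI-compatible clients such as the openai Python package append the endpoint path to the configured base URL themselves, so a base URL that already ends in /chat/completions produces a doubled path and hence a 404. A minimal sketch:

```python
# Minimal sketch of how an OpenAI-compatible client builds the request URL.
# Assumption: the client appends the endpoint path ("chat/completions")
# to the configured base URL, as the openai Python package does.
def build_request_url(base_url: str, endpoint: str = "chat/completions") -> str:
    """Join the configured base URL with the endpoint path."""
    return base_url.rstrip("/") + "/" + endpoint

# Base URL exactly as published by fireworks.ai: the endpoint path
# is doubled, which explains the 404 Not Found.
print(build_request_url("https://api.fireworks.ai/inference/v1/chat/completions"))
# -> https://api.fireworks.ai/inference/v1/chat/completions/chat/completions

# Base URL trimmed to the API root: the correct endpoint is produced.
print(build_request_url("https://api.fireworks.ai/inference/v1"))
# -> https://api.fireworks.ai/inference/v1/chat/completions
```

Under this assumption, the user-side workaround is to supply only the API root as the base URL; whether Studio should strip a trailing endpoint path automatically is the open question of this issue.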

Steps to reproduce

With the fireworks.ai model FireFunction V1 (ref: https://fireworks.ai/models/fireworks/firefunction-v1), whose base URL is published as https://api.fireworks.ai/inference/v1/chat/completions:

1. Enter the following settings in the AutoGen Studio model specification:

Model name: accounts/fireworks/models/firefunction-v1

Api Key: your-fireworks-ai-secret-api-key

Base url: https://api.fireworks.ai/inference/v1/chat/completions

Api Type: empty

Api version: empty

Description: fireworks firefunction-v1

2. Test the setup with the "test-model" button. Result: => Error occurred while testing model: Error code: 404 - {'detail': 'Not Found'} (the log shows the openai.NotFoundError traceback quoted above)

3. Change the [Base url] to https://api.fireworks.ai/inference/v1 and run the same test. Result: Model tested successfully

Therefore the conclusion is: AutoGen Studio does not respect the base URL setting as published by API inference providers.

This is a serious BUG.

Model Used

accounts/fireworks/models/firefunction-v1

Expected Behavior

The following settings in the AutoGen Studio model specification should have worked:

Model name: accounts/fireworks/models/firefunction-v1

Api Key: your-fireworks-ai-secret-api-key

Base url: https://api.fireworks.ai/inference/v1/chat/completions

Api Type: empty

Api version: empty

Description: fireworks firefunction-v1

When testing this setup with the "test-model" button, the result should have been: => Model tested successfully

Screenshots and logs

File "\site-packages\openai_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'detail': 'Not Found'}

Additional Information

Autogen Studio version: v0.0.56

pyautogen version: pyautogen-0.2.27-py3-none-any.whl (as reported in the build log)

Python version: Python 3.12.3

JaloWalt commented 2 weeks ago

When testing this issue with the newest AutoGen Studio version, 0.0.56rc12,

the behavior was as follows:

Creating a new model: each newly created model was saved three times, but nothing new became visible in the Models view.

After that, any new models created did become visible in the Models view.

When creating new models (after the first three, see above), a new selection was presented: see attached picture, pre-choosing-model-type-2024-06-11_120029.jpg

It offers a pre-selection of model type between: OpenAI, Azure OpenAI and Gemini.

However, none of those worked (tested with the test button) with the actual fireworks.ai base URL: https://api.fireworks.ai/inference/v1/chat/completions

Therefore, the reported bug still remains.

charltonh commented 5 days ago

I had a similar bug open: https://github.com/microsoft/autogen/issues/2889

Could be more of a documentation/usability issue, as both of these environment variables need to be set:

export OPENAI_API_BASE="http://192.168.1.64:5000/v1"
export OPENAI_BASE_URL="http://192.168.1.64:5000/v1"
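For reference, a short Python sketch of the same workaround (assumptions: newer openai client code paths read OPENAI_BASE_URL, while some older ones read OPENAI_API_BASE, which is why both are exported above; the address is an example, not a fixed value):

```python
import os

# Set both variables so that old and new OpenAI client code paths
# pick up the same server address. The address below is charltonh's
# example of a local OpenAI-compatible server, not a required value.
os.environ["OPENAI_API_BASE"] = "http://192.168.1.64:5000/v1"
os.environ["OPENAI_BASE_URL"] = "http://192.168.1.64:5000/v1"

# Prefer the newer variable, fall back to the older one.
base_url = os.environ.get("OPENAI_BASE_URL") or os.environ.get("OPENAI_API_BASE")
print(base_url)  # -> http://192.168.1.64:5000/v1
```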
JaloWalt commented 5 days ago

If these need to be set as environment variables:

OPENAI_API_BASE="http://192.168.1.64:5000/v1" OPENAI_BASE_URL="http://192.168.1.64:5000/v1"

then (A) what is 192.168.1.64, and where does it come from?

Asking because it seems to be some internal address.

and (B) what is 5000?

Surely not some random port number?

In summary: I am trying to understand how (A) and (B) map to real, system-dependent values.