Here's how I tested this: I added this to my extra-openai-models.yaml file:

```yaml
- model_id: o1-via-proxy
  model_name: o1-preview
  api_base: "http://localhost:8040/v1"
  api_key_name: openai
  can_stream: false
```
Then I ran a proxy on port 8040 like this:

```bash
uv run --with asgi-proxy-lib==0.2a0 \
  python -m asgi_proxy \
  https://api.openai.com -p 8040 -v
```
And tested it like this:

```bash
llm -m o1-via-proxy 'just say hi'
```

Output:

```
Hi there! How can I assist you today?
```
While my proxy server showed:

```
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
INFO:root:Request: POST https://api.openai.com/v1/chat/completions
INFO:root:Response: 200 OK
INFO: 127.0.0.1:52683 - "POST /v1/chat/completions HTTP/1.1" 200 OK
```
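For completeness, the same check can be run from Python rather than the CLI, via llm's `get_model()` / `prompt()` API (a quick sketch, assuming the `o1-via-proxy` alias above is registered and the OpenAI key has already been set with `llm keys set openai`):

```python
import llm

# Uses the o1-via-proxy alias defined in extra-openai-models.yaml,
# so the request is routed through the local proxy on port 8040.
model = llm.get_model("o1-via-proxy")
response = model.prompt("just say hi")
print(response.text())
```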
I had to fix this issue first though:

o1 support was added in response to #570. However, this hard-wires streamability to specific o1 model names. I am accessing o1-preview via a (litellm) proxy, so I get:

```
'message': 'litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {\'error\': {\'message\': "Unsupported value: \'stream\' does not support true with this model. Only the default (false) value is supported.", \'type\': \'invalid_request_error\'
```
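To make the failure mode concrete: any chat completion request that sets `stream=True` for o1-preview through the proxy gets the 400 above, while the default non-streaming form works. A sketch using the openai Python client; the base URL and placeholder key are illustrative stand-ins for whatever proxy endpoint is in use, not part of llm itself:

```python
from openai import OpenAI

# Point the client at the proxy rather than api.openai.com directly.
client = OpenAI(base_url="http://localhost:8040/v1", api_key="sk-...")

messages = [{"role": "user", "content": "just say hi"}]

# Works: o1-preview accepts the default stream=False.
client.chat.completions.create(model="o1-preview", messages=messages)

# Fails with the 400 above: streaming is rejected for this model.
client.chat.completions.create(model="o1-preview", messages=messages, stream=True)
```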
I believe I need to be able to do this; however, `can_stream` is currently ignored.
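What I'm after is something like the following, where an explicit `can_stream` value from extra-openai-models.yaml overrides any name-based default. This is a minimal sketch with made-up names (`ModelConfig`, `resolve_can_stream`), not the actual llm internals:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelConfig:
    """Illustrative stand-in for one entry in extra-openai-models.yaml."""
    model_id: str
    model_name: str
    api_base: Optional[str] = None
    api_key_name: Optional[str] = None
    can_stream: Optional[bool] = None  # None means "not specified in the YAML"


def resolve_can_stream(config: ModelConfig) -> bool:
    # An explicit setting in the YAML should win ...
    if config.can_stream is not None:
        return config.can_stream
    # ... and only otherwise fall back to a name-based rule
    # (a stand-in here for the current hard-wired o1 behaviour).
    return not config.model_name.startswith("o1")


if __name__ == "__main__":
    proxied_o1 = ModelConfig(
        model_id="o1-via-proxy",
        model_name="o1-preview",
        api_base="http://localhost:8040/v1",
        api_key_name="openai",
        can_stream=False,
    )
    print(resolve_can_stream(proxied_o1))  # False, because the YAML says so
```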