yym68686 / uni-api

uni-api unifies the management of LLM APIs: it calls multiple backend services through a single API interface, converts their responses to the OpenAI format, and supports load balancing. Currently supported backend services include OpenAI, Anthropic, DeepBricks, OpenRouter, Gemini, Vertex, and more.

Event loop is closed #42

Status: Closed (closed by shuimu5418 2 weeks ago)

shuimu5418 commented 2 weeks ago

Environment: Vercel. Partial configuration:

  - provider: oai1
    base_url: https://api.openai.com/v1/chat/completions
    api: sk-11111
    model:
      - gpt-4o-latest: gpt-4o

  - provider: different_oai2
    base_url: https://api.closeai.com/v1/chat/completions
    api: sk-22222
    model:
      - gpt-4o

api_keys:
  - api: sk-12345
    model:
      - oai1/*
      - different_oai2/*
    role: admin
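As an aside, the `oai1/*` entries under `api_keys` are provider/model wildcard patterns. A simplified sketch of how such patterns could be expanded against the provider list above (this is an illustration, not uni-api's actual code; which side of an alias map like `gpt-4o-latest: gpt-4o` is the exposed name is an assumption here):

```python
import fnmatch

def expand_routes(patterns, providers):
    """Expand "provider/model" wildcard patterns into (provider, model) routes.

    Model entries are either plain names or single-key alias maps such as
    {"gpt-4o-latest": "gpt-4o"}; the key is treated as the exposed name.
    """
    routes = []
    for pattern in patterns:
        prov_pat, _, model_pat = pattern.partition("/")
        for prov in providers:
            if not fnmatch.fnmatch(prov["provider"], prov_pat):
                continue
            for entry in prov["model"]:
                name = entry if isinstance(entry, str) else next(iter(entry))
                if fnmatch.fnmatch(name, model_pat):
                    routes.append((prov["provider"], name))
    return routes

# The provider list from the configuration above, as parsed YAML.
providers = [
    {"provider": "oai1", "model": [{"gpt-4o-latest": "gpt-4o"}]},
    {"provider": "different_oai2", "model": ["gpt-4o"]},
]
print(expand_routes(["oai1/*", "different_oai2/*"], providers))
# → [('oai1', 'gpt-4o-latest'), ('different_oai2', 'gpt-4o')]
```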

Steps to reproduce:

  1. One of the backends errors out: it is unavailable or flaky, and a request happens to hit it.
  2. The next request returns 500, and the Vercel log records "event loop is closed".
  3. Subsequent requests alternate: out of every two requests, one succeeds and one returns 500.
yym68686 commented 2 weeks ago

Please read the Vercel deployment section of the README carefully.

shuimu5418 commented 2 weeks ago

Dude, closing the issue within a minute is really something.

(Screenshot attachments: IMG_20241114_095510.jpg, IMG_20241114_095248.jpg, IMG_20241114_095216.jpg, IMG_20241114_095111.jpg, IMG_20241114_095055.jpg)

That long run of alternating 200/500 responses later on used the same model from the same provider, and there was no rate-limit issue.

yym68686 commented 2 weeks ago

Hi. You need to redeploy; otherwise the 60-second setting will not take effect.

shuimu5418 commented 2 weeks ago

Hi, thanks for the reply.

I redeployed long ago (IMG_20241114_100241.jpg). I just redeployed again (IMG_20241114_100553.jpg). The result: IMG_20241114_100645.jpg

yym68686 commented 2 weeks ago

Please paste the complete 500 error from the logs.

shuimu5418 commented 2 weeks ago

I just turned on debug; sorry for the wait.


POST /v1/chat/completions 500
[ERROR] 2024-11-14T02:22:42.441Z    b3c18e45-a0c8-4faa-954b-db72b4868a0e    Error 500 with provider closeai API key: sk-22222: Event loop is closed
Traceback (most recent call last):
File "/var/task/main.py", line 1024, in request_model
response = await process_request(request, provider, endpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/main.py", line 795, in process_request
raise e
File "/var/task/main.py", line 778, in process_request
wrapped_generator, first_response_time = await error_handling_wrapper(generator, channel_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/utils.py", line 421, in error_handling_wrapper
first_item = await generator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/response.py", line 324, in fetch_response
response = await client.post(url, headers=headers, json=payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1905, in post
return await self.request(
^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1585, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1674, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1702, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1739, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_client.py", line 1776, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpx/_transports/default.py", line 377, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/task/httpcore/_async/connection_pool.py", line 216, in handle_async_request
raise exc from None
File "/var/task/httpcore/_async/connection_pool.py", line 189, in handle_async_request
await self._close_connections(closing)
File "/var/task/httpcore/_async/connection_pool.py", line 305, in _close_connections
await connection.aclose()
File "/var/task/httpcore/_async/connection.py", line 171, in aclose
await self._connection.aclose()
File "/var/task/httpcore/_async/http2.py", line 424, in aclose
await self._network_stream.aclose()
File "/var/task/httpcore/_backends/anyio.py", line 55, in aclose
await self._stream.aclose()
File "/var/task/anyio/streams/tls.py", line 201, in aclose
await self.transport_stream.aclose()
File "/var/task/anyio/_backends/_asyncio.py", line 1287, in aclose
self._transport.close()
File "/var/lang/lib/python3.12/asyncio/selector_events.py", line 1210, in close
super().close()
File "/var/lang/lib/python3.12/asyncio/selector_events.py", line 875, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/var/lang/lib/python3.12/asyncio/base_events.py", line 795, in call_soon
self._check_closed()
File "/var/lang/lib/python3.12/asyncio/base_events.py", line 541, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
yym68686 commented 2 weeks ago

Not this. I mean the complete Vercel 500 error page. I want to confirm that the 60-second setting took effect.

shuimu5418 commented 2 weeks ago

(Screenshot attachment: IMG_20241114_103639.jpg)

yym68686 commented 2 weeks ago

OK. I have not run into this before. I need to investigate it myself.

yym68686 commented 2 weeks ago

Hi, the issue has been fixed. Please redeploy. Thanks for helping to find the problem. ❤️


After detailed investigation, the cause is that Vercel runs each request in a new event loop, while httpx's connection pool tried to reuse the same connection across different event loops, which asyncio does not allow.
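One common way to avoid this class of failure in a serverless runtime is to keep a separate client object per event loop, so a connection pool created under one (now closed) loop is never reused by the next invocation's loop. The sketch below illustrates that pattern; it is an assumption-based illustration, not uni-api's actual fix, and `DummyClient` is a stand-in for `httpx.AsyncClient` to keep it self-contained:

```python
import asyncio
import weakref

# One cached client per event loop. When a loop is garbage-collected,
# its entry (and the stale connection pool) disappears with it.
_clients = weakref.WeakKeyDictionary()

def get_client(factory):
    """Return the client for the current event loop, creating it on first use."""
    loop = asyncio.get_running_loop()
    if loop not in _clients:
        _clients[loop] = factory()
    return _clients[loop]

class DummyClient:
    """Stand-in for httpx.AsyncClient, to keep the sketch self-contained."""

async def demo():
    # Two lookups inside the same loop return the same cached client.
    return get_client(DummyClient), get_client(DummyClient)

a, a2 = asyncio.run(demo())  # first "invocation": fresh loop, fresh client
b, _ = asyncio.run(demo())   # second "invocation": new loop, new client
assert a is a2
assert a is not b
```

With `httpx.AsyncClient` in place of `DummyClient`, each serverless invocation gets a pool bound to its own loop, so no connection ever needs to be closed on an already-closed loop.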