open-compass / VLMEvalKit

Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks
https://huggingface.co/spaces/opencompass/open_vlm_leaderboard
Apache License 2.0

openai.InternalServerError: Bad Gateway #419

Closed: serser closed this issue 2 months ago

serser commented 2 months ago

I've set up a judge with

lmdeploy serve api_server internlm/internlm2-chat-1_8b --server-port 23333 --model-name internlm2-chat-1_8b

Now when I run the evaluation with

python run.py --data MMMU_DEV_VAL --model InternVL2-2B --verbose

It fails with openai.InternalServerError: Bad Gateway, like the following:

Traceback (most recent call last):
  File "run.py", line 18, in <module>
    model_name = client.models.list().data[0].id
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/resources/models.py", line 80, in list
    return self._get_api_list(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1310, in get_api_list
    return self._request_api_list(model, page, opts)
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1161, in _request_api_list
    return self.request(page, options, stream=False)
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 937, in request
    return self._request(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1027, in _request
    return self._retry_request(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1076, in _retry_request
    return self._request(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1027, in _request
    return self._retry_request(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1076, in _retry_request
    return self._request(
  File "/workdir/my_conda_env/lib/python3.8/site-packages/openai/_base_client.py", line 1042, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Bad Gateway

I've tried with curl and the judge responds fine, so what could be causing the Bad Gateway here?
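
For reference, here is a Python equivalent of that curl check, mirroring the models.list() call from the traceback. The base URL is my assumption for a server on port 23333, and api_key only matters if the server was started with --api-keys:

from openai import OpenAI

# Point the client at the local lmdeploy endpoint instead of api.openai.com.
# base_url and api_key here are assumptions for this particular setup.
client = OpenAI(
    base_url="http://0.0.0.0:23333/v1",
    api_key="sk-123456",
)

# The same call that raises Bad Gateway above; with a reachable server it
# should print "internlm2-chat-1_8b".
print(client.models.list().data[0].id)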

serser commented 2 months ago

By the way, if I use a local LLM judge, do I need to set OPENAI_API_KEY to match, like this?

lmdeploy serve api_server internlm/internlm2-chat-1_8b --server-port 23333 --model-name internlm2-chat-1_8b --api-keys sk-123456
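
If so, I guess the client side would need something like this before run.py, assuming VLMEvalKit reads the standard OPENAI_API_KEY / OPENAI_API_BASE variables (the variable names and endpoint are my guesses):

export OPENAI_API_KEY=sk-123456                    # must equal the --api-keys value
export OPENAI_API_BASE=http://0.0.0.0:23333/v1     # assumed endpoint of the lmdeploy server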
serser commented 2 months ago

It was probably due to a firewall on the port. I switched to another port, explicitly specified the server name as 0.0.0.0, and set api-keys as suggested. It now calls the local judge server for each output.
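
For completeness, the working launch looked roughly like this (the new port number is a placeholder; --server-name 0.0.0.0 makes lmdeploy bind on all interfaces):

lmdeploy serve api_server internlm/internlm2-chat-1_8b --server-name 0.0.0.0 --server-port 23334 --model-name internlm2-chat-1_8b --api-keys sk-123456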