fgenie / rims_minimal


How can I get a response? Is it the port? #38

Closed fgenie closed 7 months ago

fgenie commented 7 months ago

The server comes up fine and the request seems to reach it, but in the situation below the call does not return a proper response.

cat run_server.sh

CUDA_VISIBLE_DEVICES=6,7 python -m vllm.entrypoints.openai.api_server \
    --model "/home/user/checkpoints/deepseek-math-7b-instruct" \
    --port 8000 \
    --tensor-parallel-size 2

cat request.sh

curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "/home/user/checkpoints/Mistral-7B-Instruct-v0.1",
"messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"}
]
}'

bash request.sh

server startup log (server comes up fine)

...
INFO 02-11 07:33:21 api_server.py:113] ' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ 'Assistant:' }}{% endif %}
INFO:     Started server process [810914]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

server log

INFO:     127.0.0.1:53404 - "POST /v1/chat/completions HTTP/1.1" 404 Not Found

response

{"object":"error","message":"The model `/home/user/checkpoints/Mistral-7B-Instruct-v0.1` does not exist.","type":"invalid_request_error","param":null,"code":null}

cat client.py

from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="/home/user/checkpoints/Mistral-7B-Instruct-v0.1",
    messages=[
        {"role": "user", "content": "Tell me a joke."},
    ]
)
print("Chat response:", chat_response)

python client.py

server log

INFO:     127.0.0.1:36250 - "POST /v1/chat/completions HTTP/1.1" 404 Not Found

client log

Traceback (most recent call last):
  File "/home/user/rims_server/client.py", line 11, in <module>
    chat_response = client.chat.completions.create(
  File "/home/user/miniconda3/envs/taehyeong/lib/python3.9/site-packages/openai/_utils/_utils.py", line 271, in wrapper
    return func(*args, **kwargs)
  File "/home/user/miniconda3/envs/taehyeong/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 659, in create
    return self._post(
  File "/home/user/miniconda3/envs/taehyeong/lib/python3.9/site-packages/openai/_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/user/miniconda3/envs/taehyeong/lib/python3.9/site-packages/openai/_base_client.py", line 889, in request
    return self._request(
  File "/home/user/miniconda3/envs/taehyeong/lib/python3.9/site-packages/openai/_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'object': 'error', 'message': 'The model `/home/user/checkpoints/Mistral-7B-Instruct-v0.1` does not exist.', 'type': 'invalid_request_error', 'param': None, 'code': None}
fgenie commented 7 months ago

The server was serving DeepSeek while the requests asked for Mistral. After matching the two it works properly. Solved!

Chat response: ChatCompletion(id='cmpl-3b982c941f334aefb28b70a4607c7893', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=" Ah, humor! A delightful aspect of human interaction. Here's a little cosmic jest for you:\n\nWhy don't physicists tell jokes about black holes?\n\nBecause they're afraid their punchlines might get sucked in!", role='assistant', function_call=None, tool_calls=None))], created=29395716, model='/home/user/checkpoints/Mistral-7B-Instruct-v0.1', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=54, prompt_tokens=14, total_tokens=68))
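In other words, the `model` string in a request has to match exactly the checkpoint the server was launched with. A minimal sketch of a matched pair, reusing the paths and flags already shown above (either side can be changed, as long as the two strings agree):

# serve the same Mistral checkpoint the requests refer to
CUDA_VISIBLE_DEVICES=6,7 python -m vllm.entrypoints.openai.api_server \
    --model "/home/user/checkpoints/Mistral-7B-Instruct-v0.1" \
    --port 8000 \
    --tensor-parallel-size 2

# request the identical identifier
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
    "model": "/home/user/checkpoints/Mistral-7B-Instruct-v0.1",
    "messages": [{"role": "user", "content": "Tell me a joke."}]
}'

If typing the full checkpoint path is inconvenient, recent vLLM versions also accept a --served-model-name flag on the server side to register a shorter alias, which requests would then use in place of the path.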