sgl-project / sglang

SGLang is a fast serving framework for large language models and vision language models.
https://sgl-project.github.io/
Apache License 2.0

[Bug] Unable to see logprobs for prompt/input #1365

Closed: dmakhervaks closed this issue 1 month ago

dmakhervaks commented 2 months ago

Describe the bug

I know there is a known bug and an MR already open for this; however, I cannot figure out how to actually get back the logprobs for the entire input+output (just like the OpenAI completions API does).

https://github.com/sgl-project/sglang/blob/69b3bb9ae1c504925455e8b258eefa0fcc15bd81/python/sglang/srt/openai_api/adapter.py#L496

^ From my understanding this was your solution; however, I am not sure how to specify in my request that I want the logprobs for all tokens, not just the newly generated ones.

Reproduction

curl http://10.20.45.24:9942/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer" \
  -d '{
    "model": "share5/projects/xlab/ZLM/models/Reflection-Llama-3.1-70B",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0,
    "logprobs": 1
  }'

Environment

Python: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.105
CUDA Driver Version: 550.90.07
PyTorch: 2.4.0+cu121
flashinfer: 0.1.6+cu121torch2.4
triton: 3.0.0
transformers: 4.44.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.5
fastapi: 0.112.2
hf_transfer: 0.1.8
huggingface_hub: 0.24.6
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.0.0
pydantic: 2.8.2
uvicorn: 0.30.6
uvloop: 0.20.0
zmq: 26.2.0
vllm: 0.5.5
multipart: 0.0.9
openai: 1.43.0
anthropic: 0.34.1

NVIDIA Topology: all 8 GPUs are fully connected to one another via NV18 (bonded sets of 18 NVLinks). GPUs 0-3 sit on NUMA node 0 (CPU affinity 0,2,4,6,8,10) and GPUs 4-7 on NUMA node 1 (CPU affinity 1,3,5,7,9,11); the 18 NICs attach over PCIe (PIX/NODE/SYS links per the legend below). Full topology matrix omitted.

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0 NIC1: mlx5_1 NIC2: mlx5_2 NIC3: mlx5_3 NIC4: mlx5_4 NIC5: mlx5_5 NIC6: mlx5_6 NIC7: mlx5_7 NIC8: mlx5_8 NIC9: mlx5_9 NIC10: mlx5_10 NIC11: mlx5_11 NIC12: mlx5_12 NIC13: mlx5_13 NIC14: mlx5_14 NIC15: mlx5_15 NIC16: mlx5_16 NIC17: mlx5_17

ulimit soft: 65535

merrymercy commented 2 months ago
  1. Try setting "echo": true in the OpenAI-compatible API.
  2. Try the native /generate API, which lets you set logprob_start_len: https://github.com/sgl-project/sglang/blob/dff2860a690757966e408b598a8f0b47a29a4713/docs/en/sampling_params.md?plain=1#L7-L32 (see the sketch after this list).
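
For option 2, here is a minimal sketch of such a request using Python requests. The parameter names (return_logprob, logprob_start_len, top_logprobs_num) come from the sampling-params doc linked above, but treat the exact fields as assumptions and verify them against your SGLang version:

import requests

# Sketch: ask SGLang's native /generate endpoint for logprobs starting
# at the beginning of the prompt (logprob_start_len=0). The server
# address is the one used elsewhere in this thread.
resp = requests.post(
    "http://10.20.45.24:9942/generate",
    json={
        "text": "Say this is a test",
        "sampling_params": {"max_new_tokens": 7, "temperature": 0},
        "return_logprob": True,
        "logprob_start_len": 0,  # 0 = also return logprobs for prompt tokens
        "top_logprobs_num": 1,
    },
)
print(resp.json())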
dmakhervaks commented 2 months ago

@merrymercy

When I try to add echo, I get the same response, without logprobs for the input:

Request:

curl http://10.20.45.24:9942/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer" \
  -d '{
    "model": "share5/projects/xlab/ZLM/models/Reflection-Llama-3.1-70B",
    "prompt": "Say this is a test",
    "max_tokens": 1,
    "temperature": 0,
    "logprobs": 1,
    "logprob_start_len": 0,
    "echo": true
  }'

Response:

{'choices': [{'finish_reason': 'length',
              'index': 0,
              'logprobs': {'text_offset': [-1],
                           'token_logprobs': [-2.009894609451294],
                           'tokens': [' of'],
                           'top_logprobs': [{' of': -2.009894609451294}]},
              'text': 'Say this is a test of'}],
 'created': 1725992782,
 'id': '14156d388e2243d2a29099688f65d55f',
 'model': 'share5/projects/xlab/ZLM/models/Reflection-Llama-3.1-70B',
 'object': 'text_completion',
 'usage': {'completion_tokens': 1, 'prompt_tokens': 6, 'total_tokens': 7}}

dmakhervaks commented 2 months ago

@merrymercy I feel like this line may be the problem? https://github.com/sgl-project/sglang/blob/144bc70fcceede77fc2c2fbd286676b57f9a0c94/python/sglang/srt/openai_api/adapter.py#L503C9-L503C27

merrymercy commented 2 months ago

We follow the OpenAI API spec in implementing the /v1/completions endpoint. Even OpenAI does not support returning the logprobs of prompt tokens.

See this failing example:

(dev) lianmin@Lianmins-MacBook-Pro ~ % cat aha.py
from openai import OpenAI
client = OpenAI()

ret = client.completions.create(
  model="gpt-3.5-turbo-instruct",
  prompt="Say this is a test",
  max_tokens=7,
  temperature=0,
  echo=True,
  logprobs=5,
)

print(ret)
(dev) lianmin@Lianmins-MacBook-Pro ~ % python3 aha.py
Traceback (most recent call last):
  File "/Users/lianmin/aha.py", line 4, in <module>
    ret = client.completions.create(
  File "/Users/lianmin/miniconda3/envs/dev/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/Users/lianmin/miniconda3/envs/dev/lib/python3.10/site-packages/openai/resources/completions.py", line 528, in create
    return self._post(
  File "/Users/lianmin/miniconda3/envs/dev/lib/python3.10/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/lianmin/miniconda3/envs/dev/lib/python3.10/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
  File "/Users/lianmin/miniconda3/envs/dev/lib/python3.10/site-packages/openai/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Setting 'echo' and 'logprobs' at the same time is not supported for this model.", 'type': 'invalid_request_error', 'param': 'echo', 'code': 'invalid_parameter_combination'}}

To get the prompt logprobs, please use our native /generate endpoint.
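
As a hedged illustration, here is how reading the prompt logprobs back might look. The meta_info key names (e.g. input_token_logprobs) are assumptions that have changed across SGLang releases, so check your server's actual response:

import requests

# Sketch under assumptions: request logprobs for prompt + output tokens
# from the native /generate endpoint and print the per-token entries.
out = requests.post(
    "http://10.20.45.24:9942/generate",
    json={
        "text": "Say this is a test",
        "sampling_params": {"max_new_tokens": 1, "temperature": 0},
        "return_logprob": True,
        "logprob_start_len": 0,
    },
).json()

meta = out["meta_info"]
# Assumed key name; some releases used "prefill_token_logprobs" instead.
# Entries are roughly (logprob, token_id); the first prompt token has no logprob.
for entry in meta["input_token_logprobs"]:
    print(entry)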