Closed: chuangzhidan closed this issue 2 months ago
You need to use --tp-size [#_OF_DEVICES] to enable tensor parallelism. Otherwise SGLang will only use one GPU by default.
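For example, for the two-GPU setup in the reproduction below, the launch command would become something like this (model path and other flags copied from the original report; adjust to your setup):

python3 -m sglang.launch_server --model-path /workspace/model/Qwen2.5-72-int4 --host 0.0.0.0 --port 30000 --quantization gptq_marlin --mem-fraction-static 0.9 --tp-size 2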
Thank you so much, it works like a charm. Not sure if I can do anything to make it faster:
[00:19:29] server_args=ServerArgs(model_path='/workspace/model/Qwen2.5-72-int4', tokenizer_path='/workspace/model/Qwen2.5-72-int4', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', dtype='auto', kv_cache_dtype='auto', trust_remote_code=False, context_length=None, quantization='gptq_marlin', served_model_name='/workspace/model/Qwen2.5-72-int4', chat_template=None, is_embedding=False, host='0.0.0.0', port=30000, additional_ports=[30001, 30002, 30003, 30004], mem_fraction_static=0.9, max_running_requests=None, max_num_reqs=None, max_total_tokens=None, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='lpm', schedule_conservativeness=1.0, tp_size=2, stream_interval=1, random_seed=595017997, log_level='info', log_level_http=None, log_requests=False, show_time_cost=False, api_key=None, file_storage_pth='SGLang_storage', dp_size=1, load_balance_method='round_robin', disable_flashinfer=False, disable_flashinfer_sampling=False, disable_radix_cache=False, disable_regex_jump_forward=False, disable_cuda_graph=True, disable_cuda_graph_padding=False, disable_disk_cache=False, disable_custom_all_reduce=False, enable_mixed_chunk=False, enable_torch_compile=False, enable_p2p_check=False, enable_mla=False, triton_attention_reduce_in_fp32=False, nccl_init_addr=None, nnodes=1, node_rank=None)
[00:23:56 TP0] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=14.86 GB
[00:23:56 TP1] Load weight end. type=Qwen2ForCausalLM, dtype=torch.float16, avail mem=18.11 GB
[00:23:58 TP1] Memory pool end. avail mem=9.63 GB
[00:23:58 TP0] Memory pool end. avail mem=6.32 GB
[00:24:01 TP0] max_total_num_tokens=74912, max_prefill_tokens=16384, max_running_requests=2047, context_len=32768
[00:24:01 TP1] max_total_num_tokens=74912, max_prefill_tokens=16384, max_running_requests=2047, context_len=32768
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:30000 (Press CTRL+C to quit)
INFO: 127.0.0.1:40562 - "GET /get_model_info HTTP/1.1" 200 OK
[00:24:03 TP0] Prefill batch. #new-seq: 1, #new-token: 6, #cached-token: 0, cache hit rate: 0.00%, #running-req: 0, #queue-req: 0
INFO: 127.0.0.1:40566 - "POST /generate HTTP/1.1" 200 OK
[00:24:12] The server is fired up and ready to roll!
INFO: 172.17.0.1:35430 - "GET /health HTTP/1.1" 200 OK
[00:24:30 TP0] Prefill batch. #new-seq: 1, #new-token: 42, #cached-token: 0, cache hit rate: 0.00%, #running-req: 0, #queue-req: 0
[00:24:33 TP0] Decode batch. #running-req: 1, #token: 75, token usage: 0.00, gen throughput (token/s): 1.23, #queue-req: 0
[00:24:36 TP0] Decode batch. #running-req: 1, #token: 115, token usage: 0.00, gen throughput (token/s): 13.48, #queue-req: 0
INFO: 192.168.28.101:51561 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[00:24:59 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 41, cache hit rate: 45.56%, #running-req: 0, #queue-req: 0
[00:25:00 TP0] Decode batch. #running-req: 1, #token: 56, token usage: 0.00, gen throughput (token/s): 1.65, #queue-req: 0
[00:25:06 TP0] Decode batch. #running-req: 1, #token: 96, token usage: 0.00, gen throughput (token/s): 7.54, #queue-req: 0
INFO: 172.17.0.1:55890 - "GET /health HTTP/1.1" 200 OK
[00:25:11 TP0] Decode batch. #running-req: 1, #token: 136, token usage: 0.00, gen throughput (token/s): 7.06, #queue-req: 0
INFO: 192.168.28.101:51595 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 172.17.0.1:53822 - "GET /health HTTP/1.1" 200 OK
INFO: 172.17.0.1:34902 - "GET /health HTTP/1.1" 200 OK
INFO: 172.17.0.1:37092 - "GET /health HTTP/1.1" 200 OK
[00:27:20 TP0] Prefill batch. #new-seq: 3, #new-token: 30, #cached-token: 42, cache hit rate: 51.23%, #running-req: 0, #queue-req: 0
[00:27:21 TP0] Prefill batch. #new-seq: 16, #new-token: 16, #cached-token: 368, cache hit rate: 82.60%, #running-req: 3, #queue-req: 81
[00:27:21 TP0] Prefill batch. #new-seq: 4, #new-token: 4, #cached-token: 92, cache hit rate: 84.58%, #running-req: 19, #queue-req: 77
[00:27:21 TP0] Prefill batch. #new-seq: 2, #new-token: 2, #cached-token: 46, cache hit rate: 85.36%, #running-req: 23, #queue-req: 75
[00:27:23 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 23, cache hit rate: 85.71%, #running-req: 24, #queue-req: 74
INFO: 10.0.18.15:35886 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[00:27:23 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 23, cache hit rate: 86.04%, #running-req: 25, #queue-req: 73
INFO: 10.0.18.15:35718 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35738 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35786 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35840 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[00:27:24 TP0] Prefill batch. #new-seq: 16, #new-token: 16, #cached-token: 368, cache hit rate: 89.39%, #running-req: 2, #queue-req: 57
INFO: 10.0.18.15:35660 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35758 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35824 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35828 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35862 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35872 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35900 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35912 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35696 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35706 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35670 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35686 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35730 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35754 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35774 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35802 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35808 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35830 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35856 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35896 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[00:27:24 TP0] Prefill batch. #new-seq: 6, #new-token: 6, #cached-token: 138, cache hit rate: 90.13%, #running-req: 18, #queue-req: 51
[00:27:24 TP0] Prefill batch. #new-seq: 2, #new-token: 2, #cached-token: 46, cache hit rate: 90.33%, #running-req: 24, #queue-req: 49
[00:27:25 TP0] Decode batch. #running-req: 26, #token: 174, token usage: 0.00, gen throughput (token/s): 5.31, #queue-req: 49
[00:27:25 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 23, cache hit rate: 90.43%, #running-req: 26, #queue-req: 48
[00:27:26 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 23, cache hit rate: 90.53%, #running-req: 25, #queue-req: 47
INFO: 10.0.18.15:35924 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO: 10.0.18.15:35934 - "POST /v1/chat/completions HTTP/1.1" 200 OK
[00:27:26 TP0] Prefill batch. #new-seq: 1, #new-token: 1, #cached-token: 23, cache hit rate: 90.62%, #running-req: 26, #queue-req: 46
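The log above shows clients hitting the OpenAI-compatible /v1/chat/completions endpoint. A minimal request against this server would look something like the following (the model name must match served_model_name from the server_args above; the prompt is illustrative):

curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "/workspace/model/Qwen2.5-72-int4", "messages": [{"role": "user", "content": "Hello"}]}'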
See hyperparameter_tuning.md for guidance on tuning hyperparameters for better performance.
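One concrete knob visible in the server_args above (a sketch, not a benchmarked recipe): the reproduction passes --disable-cuda-graph, and dropping that flag to re-enable CUDA graphs typically improves decode throughput. The launch would then look something like:

python3 -m sglang.launch_server --model-path /workspace/model/Qwen2.5-72-int4 \
  --host 0.0.0.0 --port 30000 --quantization gptq_marlin \
  --mem-fraction-static 0.9 --tp-size 2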
Thank you!
Closing the issue as it has been solved.
Describe the bug
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_device.py", line 79, in __torch_function__ return func(*args, **kwargs) torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.32 GiB. GPU 0 has a total capacity of 79.25 GiB of which 2.27 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 36.46 GiB is allocated by PyTorch, and 20.98 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Reproduction
docker run --gpus all -it -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -v /data/xgp:/workspace \
  -v /data/llm:/workspace/model \
  --env "HF_TOKEN=<redacted>" \
  --env CUDA_VISIBLE_DEVICES=0,1 \
  --ipc=host lmsysorg/sglang:latest \
  python3 -m sglang.launch_server --model-path /workspace/model/Qwen2.5-72-int4 --host 0.0.0.0 --port 30000 --quantization gptq_marlin --mem-fraction-static 0.9 --disable-cuda-graph
Environment
Python: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0]
CUDA available: True
GPU 0,1: NVIDIA A800 80GB PCIe
GPU 0,1 Compute Capability: 8.0
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 11.5, V11.5.119
CUDA Driver Version: 560.35.03
PyTorch: 2.4.0+cu121
sglang: 0.3.0
flashinfer: Module Not Found
triton: 3.0.0
transformers: 4.44.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.5
fastapi: 0.113.0
hf_transfer: Module Not Found
huggingface_hub: 0.24.6
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.0.0
pydantic: 2.9.0
uvicorn: 0.30.6
uvloop: 0.20.0
zmq: 26.2.0
vllm: 0.6.0
multipart: Module Not Found
openai: 1.43.1
anthropic: Module Not Found
litellm: Module Not Found
NVIDIA Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    0-23,48-71      0               N/A
GPU1    NODE     X      0-23,48-71      0               N/A
Legend:
X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks
ulimit soft: 1024