George-ao opened this issue 2 weeks ago
cc @robertgshaw2-neuralmagic
What GPU type are you on?
You don't need to specify the --quantization argument. We automatically detect the quantization type based on the config and use the Marlin kernels if possible. Specifically, the Marlin kernels require at least Ampere.
I will improve the error message here
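For illustration, a minimal offline sketch of that auto-detection path (assuming a recent vLLM where GPTQ checkpoints are upgraded to the Marlin kernels automatically on Ampere or newer GPUs; the model name is simply the one from this issue):

# Sketch: let vLLM pick the quantization kernels from the model config.
# Assumes a recent vLLM release where gptq_marlin is auto-selected on
# Ampere+ GPUs when the checkpoint supports it.
from vllm import LLM, SamplingParams

# No quantization argument: vLLM reads quantization_config from the HF config
# and chooses gptq_marlin when the GPU and checkpoint allow it.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", seed=0)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)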
Thanks for the reply. My GPU type is NVIDIA A100-SXM4-40GB. I found that it works when I don't specify the argument. By the way, a small question: is it possible to use GPTQ without the Marlin kernel, given that vLLM detects the quantization automatically?
Passing just --quantization gptq will use the slow kernels.
I’ll look into why this error is being thrown tomorrow.
thanks for reporting!
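For reference, a hedged sketch of what forcing the non-Marlin path looks like offline (assuming the LLM class accepts the same values as the --quantization flag):

# Sketch: explicitly request the plain GPTQ kernels instead of Marlin.
# Assumes quantization="gptq" mirrors passing --quantization gptq to the server.
from vllm import LLM

# This forces the slower GPTQ kernels even on GPUs where gptq_marlin
# would otherwise be auto-selected.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ", quantization="gptq", seed=0)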
Thanks!
Does gptq_marlin support chunked prefill?
I tried to run benchmark_serving to measure throughput. It works well when I do not pass --enable-chunked-prefill, but it throws errors when I enable chunked prefill.
Here are the commands I used:
On the server side,
python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --seed 0 --enable-chunked-prefill
On the client side,
python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0
Here is the printout on the server side:
INFO: ::1:44634 - "POST /v1/completions HTTP/1.1" 200 OK
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 261, in wrap
await func()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
message = await receive()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 553, in receive
await self.message_event.wait()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/locks.py", line 226, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f146c70d580
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/routing.py", line 75, in app
await response(scope, receive, send)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/starlette/responses.py", line 265, in __call__
await wrap(partial(self.listen_for_disconnect, receive))
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
The Marlin kernel for the linear layers is completely orthogonal to chunked prefill.
Chunked prefill only impacts the attention calculation during model execution; otherwise, the changes are just on the server side.
Are there any other logs to share?
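A small offline repro sketch may help isolate whether the failure is in the engine or only in the API server path (assuming enable_chunked_prefill is forwarded to the engine args by the offline LLM API, mirroring the --enable-chunked-prefill server flag):

# Sketch: run the same GPTQ model offline with chunked prefill enabled,
# to check whether the engine itself errors out or only the server does.
# enable_chunked_prefill is assumed to be forwarded to the engine args.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Llama-2-7B-Chat-GPTQ",
    enable_chunked_prefill=True,
    seed=0,
)
outputs = llm.generate(["Summarize chunked prefill in one sentence."],
                       SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)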
What kind of information can I provide?
Here is the printout on the client side. The progress bar freezes at 98/100, so I use Ctrl-C to quit. The printout on the server side is as above, ending with "exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)".
$ python benchmarks/benchmark_serving.py --model TheBloke/Llama-2-7B-Chat-GPTQ --dataset-path /u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json --request-rate 10 --num-prompts 100 --seed 0
Namespace(backend='vllm', base_url=None, host='localhost', port=8000, endpoint='/v1/completions', dataset=None, dataset_name='sharegpt', dataset_path='/u/yao1/ShareGPT_V3_unfiltered_cleaned_split.json', model='TheBloke/Llama-2-7B-Chat-GPTQ', tokenizer=None, best_of=1, use_beam_search=False, num_prompts=100, sharegpt_output_len=None, sonnet_input_len=550, sonnet_output_len=150, sonnet_prefix_len=200, request_rate=10.0, seed=0, trust_remote_code=False, disable_tqdm=False, save_result=False, metadata=None, result_dir=None, result_filename=None)
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Starting initial single prompt test run...
Initial test run completed.
Starting main benchmark run...
Traffic request rate: 10.0
98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 98/100 [00:19<00:00, 12.85it/s]
^CTraceback (most recent call last):
File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 679, in <module>
main(args)
File "/u/yao1/vllm/benchmarks/benchmark_serving.py", line 478, in main
benchmark_result = asyncio.run(
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
self.run_forever()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
self._run_once()
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/asyncio/base_events.py", line 1869, in _run_once
event_list = self._selector.select(timeout)
File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/selectors.py", line 469, in select
fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt
98%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 98/100 [01:41<00:02, 1.04s/it]
How would you like to use vllm
I want to run inference with TheBloke/Llama-2-7B-Chat-GPTQ, but I don't know how to use it with vLLM. I tried the API server:
python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --quantization gptq_marlin --seed 0
But I get an error:

$ python -m vllm.entrypoints.openai.api_server --disable-log-requests --model TheBloke/Llama-2-7B-Chat-GPTQ --quantization gptq_marlin --seed 0
INFO 06-15 00:04:48 api_server.py:177] vLLM API server version 0.5.0.post1
INFO 06-15 00:04:48 api_server.py:178] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='TheBloke/Llama-2-7B-Chat-GPTQ', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization='gptq_marlin', rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, image_processor=None, image_processor_revision=None, disable_image_processor=False, scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, engine_use_ray=False, disable_log_requests=True, max_log_len=None)
/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/u/yao1/anaconda3/envs/vllm_env/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/u/yao1/vllm/vllm/entrypoints/openai/api_server.py", line 196, in <module>
    engine = AsyncLLMEngine.from_engine_args(
  File "/u/yao1/vllm/vllm/engine/async_llm_engine.py", line 371, in from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/u/yao1/vllm/vllm/engine/arg_utils.py", line 630, in create_engine_config
    model_config = ModelConfig(
  File "/u/yao1/vllm/vllm/config.py", line 151, in __init__
    self._verify_quantization()
  File "/u/yao1/vllm/vllm/config.py", line 199, in _verify_quantization
    raise ValueError(
ValueError: Quantization method specified in the model config (gptq) does not match the quantization method specified in the `quantization` argument (gptq_marlin).
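Once the server starts (e.g. without the --quantization flag, per the discussion above), a minimal sketch of querying the OpenAI-compatible completions endpoint; the prompt and max_tokens values are purely illustrative:

# Sketch: query a running vLLM OpenAI-compatible server.
# Assumes the server was started on the default host/port (localhost:8000).
import requests

payload = {
    "model": "TheBloke/Llama-2-7B-Chat-GPTQ",
    "prompt": "Hello, my name is",  # illustrative prompt
    "max_tokens": 32,
}
resp = requests.post("http://localhost:8000/v1/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])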