kvcache-ai / ktransformers

A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations
Apache License 2.0

When the input exceeds 4096 tokens, an error occurs. #73

Closed: fengyang95 closed this issue 2 weeks ago

fengyang95 commented 2 weeks ago

File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/fastapi/routing.py", line 210, in run_endpoint_function return await dependant.call(values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/server/api/openai/endpoints/chat.py", line 32, in chat_completion async for token in interface.inference(input_message,id): File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/server/backend/interfaces/transformers.py", line 323, in inference for t in self.prefill(input_ids,self.check_is_new(thread_id)): File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 36, in generator_context response = gen.send(None) ^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/server/backend/interfaces/transformers.py", line 272, in prefill logits = self.model( ^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/models/modeling_deepseek.py", line 1731, in forward outputs = self.model( ^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/operators/models.py", line 651, in forward causal_mask = self._update_causal_mask( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/models/modeling_deepseek.py", line 1624, in _update_causal_mask padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]


RuntimeError: The size of tensor a (4096) must match the size of tensor b (8122) at non-singleton dimension 3
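
For context on what the two numbers mean: the causal mask is allocated against the 4096-token server cache, while the attention mask spans the whole 8122-token prompt, so the last dimension cannot broadcast. A minimal, standalone sketch of that mismatch (simplified shapes, not the actual ktransformers code path):

# Minimal reproduction of the broadcast failure seen in the traceback
# (illustrative only; shapes simplified, not the real ktransformers code).
import torch

cache_len = 4096            # hardcoded server-side cache length
prompt_len = 8122           # number of tokens actually sent in the request

causal_mask = torch.zeros(1, 1, prompt_len, cache_len)   # last dim capped by the cache
attention_mask = torch.ones(1, prompt_len)               # one entry per prompt token

mask_length = attention_mask.shape[-1]                   # 8122
# Slicing up to mask_length cannot widen the 4096-wide last dimension,
# so the elementwise add fails exactly as in the traceback:
padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
# RuntimeError: The size of tensor a (4096) must match the size of tensor b (8122)
# at non-singleton dimension 3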
Azure-Tang commented 2 weeks ago

Hi, I haven’t encountered this issue in our most recent code update. Could you please try using the latest version and let me know if the problem persists?

fengyang95 commented 2 weeks ago

Hi, I haven’t encountered this issue in our most recent code update. Could you please try using the latest version and let me know if the problem persists?

I compiled it using the most recent code, and my configs are:

- match:
    class: ktransformers.models.modeling_deepseek.DeepseekV2YarnRotaryEmbedding
  replace:
    class: ktransformers.operators.RoPE.YarnRotaryEmbedding
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
- match:
    name: "^model\\.layers\\.(?!.*self_attn\\.kv_b_proj).*$"  # regular expression
    class: torch.nn.Linear  # only match modules matching name and class simultaneously
  replace:
    class: ktransformers.operators.linear.KTransformersLinear  # optimized kernel for quantized data types
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
      generate_op: "KLinearMarlin"
      prefill_op: "KLinearTorch"
- match:
    name: "^model\\.layers\\..*\\.mlp$"
    class: ktransformers.models.modeling_deepseek.DeepseekV2MoE
  replace:
    class: ktransformers.operators.experts.KDeepseekV2MoE     # mlp module with custom forward function
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
- match:
    name: "^model\\.layers\\..*\\.mlp\\.experts$"
  replace:
    class: ktransformers.operators.experts.KTransformersExperts     # custom MoE kernel with expert parallelism
    kwargs:
      prefill_device: "cuda"
      prefill_op: "KExpertsTorch"
      generate_device: "cpu"
      generate_op: "KExpertsCPU"
      out_device: "cuda"
  recursive: False # don't recursively inject submodules of this module
- match:
    name: "^model\\.layers\\..*\\.self_attn$"
  replace:
    class: ktransformers.operators.attention.KDeepseekV2Attention # optimized MLA implementation
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
- match:
    name: "^model$"
  replace:
    class: "ktransformers.operators.models.KDeepseekV2Model"
    kwargs:
      per_layer_prefill_intput_threshold: 0 # 0 disables layer-wise prefill
- match:
    name: "^model.embed_tokens"
  replace:
    class: "default"
    kwargs:
      generate_device: "cpu"
      prefill_device: "cuda"
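
As a side note for anyone reading these rules: the name fields are ordinary Python regular expressions matched against module paths. A quick standalone check of the first Linear pattern (the module names below are illustrative, not pulled from the real model) shows which modules it would and would not hand to KTransformersLinear:

import re

# Pattern from the KTransformersLinear rule above: match every layer submodule
# except the self_attn.kv_b_proj projections.
pattern = re.compile(r"^model\.layers\.(?!.*self_attn\.kv_b_proj).*$")

# Illustrative module names (not an exhaustive list from the real model).
for name in [
    "model.layers.0.mlp.gate_proj",
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.kv_b_proj",
]:
    print(name, "->", bool(pattern.match(name)))
# model.layers.0.mlp.gate_proj -> True
# model.layers.0.self_attn.q_proj -> True
# model.layers.0.self_attn.kv_b_proj -> False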
UnicornChan commented 2 weeks ago

I'm very sorry. For the convenience of server testing, we set the cache_lens parameter to 4096, which causes an error whenever the cached sequence grows beyond that length. The value is currently hardcoded as "cache_lens" in ktransformers/server/backend/args.py; we will make these items configurable in the next release. If you need to support more tokens now, modify cache_lens at line 93 of /home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/server/backend/args.py.
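
Until that release lands, the workaround is just raising the hardcoded value. A rough sketch of the edit (the exact line and surrounding definition in args.py may look different in your installed version):

# ktransformers/server/backend/args.py, around line 93 (sketch only; the exact
# form of the definition may differ in your install).
# Old hardcoded cap -- prompts longer than this hit the mask-size error above:
# cache_lens = 4096
# New cap -- must cover prompt tokens plus generated tokens; a larger cache
# uses proportionally more memory:
cache_lens = 16384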

fengyang95 commented 2 weeks ago

I'm very sorry. For the convenience of server testing, we set the cache_lens parameter to 4096, which causes an error whenever the cached sequence grows beyond that length. The value is currently hardcoded as "cache_lens" in ktransformers/server/backend/args.py; we will make these items configurable in the next release. If you need to support more tokens now, modify cache_lens at line 93 of /home/tiger/.pyenv/versions/3.11.2/lib/python3.11/site-packages/ktransformers/server/backend/args.py.

LGTM