triton-inference-server / tensorrtllm_backend

The Triton TensorRT-LLM Backend

"error":"Unable to parse 'data': Shape does not match true shape of 'data' field" #502

Closed ljm565 closed 1 week ago

ljm565 commented 1 week ago

System Info

The TensorRT-LLM model is loaded successfully, as shown in the screenshot below.

Screenshot from 2024-06-18 14-45-39

However, when I send a request with the curl command below, the error shown in the screenshot occurs.

# curl command
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [0, 1, 2], "input_lengths": [10], "request_output_len": [10]}'

Screenshot from 2024-06-18 14-47-00

However, if I set input_ids to [0] (length 1), no error occurs.

# curl command
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [0], "input_lengths": [10], "request_output_len": [10]}'

Screenshot from 2024-06-18 14-51-32

My model uses the tensorrtllm backend, and my config.pbtxt is shown below.

# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

name: "tensorrt_llm"
backend: "tensorrtllm"
max_batch_size: 4

model_transaction_policy {
  decoupled: true
}

dynamic_batching {
    preferred_batch_size: [ 1 ]
    max_queue_delay_microseconds: 1000
}

input [
  {
    name: "input_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]
    allow_ragged_batch: false
  },
  {
    name: "input_lengths"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
  },
  {
    name: "request_output_len"
    data_type: TYPE_INT32
    dims: [ 1 ]
  },
  {
    name: "draft_input_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "draft_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "draft_acceptance_threshold"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "end_id"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "pad_id"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "stop_words_list"
    data_type: TYPE_INT32
    dims: [ 2, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "bad_words_list"
    data_type: TYPE_INT32
    dims: [ 2, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "embedding_bias"
    data_type: TYPE_FP32
    dims: [ -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "beam_width"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "temperature"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_k"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_p"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_p_min"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_p_decay"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_p_reset_ids"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "len_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "early_stopping"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "repetition_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "min_length"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "beam_search_diversity_rate"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "presence_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "frequency_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "random_seed"
    data_type: TYPE_UINT64
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_log_probs"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_context_logits"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_generation_logits"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "stop"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    optional: true
  },
  {
    name: "streaming"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    optional: true
  },
  {
    name: "prompt_embedding_table"
    data_type: TYPE_FP16
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "prompt_vocab_size"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  # the unique task ID for the given LoRA.
  # To perform inference with a specific LoRA for the first time `lora_task_id` `lora_weights` and `lora_config` must all be given.
  # The LoRA will be cached, so that subsequent requests for the same task only require `lora_task_id`.
  # If the cache is full the oldest LoRA will be evicted to make space for new ones.  An error is returned if `lora_task_id` is not cached.
  {
    name: "lora_task_id"
    data_type: TYPE_UINT64
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  # weights for a lora adapter shape [ num_lora_modules_layers, D x Hi + Ho x D ]
  # where the last dimension holds the in / out adapter weights for the associated module (e.g. attn_qkv) and model layer
  # each of the in / out tensors are first flattened and then concatenated together in the format above.
  # D=adapter_size (R value), Hi=hidden_size_in, Ho=hidden_size_out.
  {
    name: "lora_weights"
    data_type: TYPE_FP16
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  # module identifier (same size as first dimension of lora_weights)
  # See LoraModule::ModuleType for model id mapping
  #
  # "attn_qkv": 0     # combined qkv adapter
  # "attn_q": 1       # q adapter
  # "attn_k": 2       # k adapter
  # "attn_v": 3       # v adapter
  # "attn_dense": 4   # adapter for the dense layer in attention
  # "mlp_h_to_4h": 5  # for llama2 adapter for gated mlp layer after attention / RMSNorm: up projection
  # "mlp_4h_to_h": 6  # for llama2 adapter for gated mlp layer after attention / RMSNorm: down projection
  # "mlp_gate": 7     # for llama2 adapter for gated mlp layer after attention / RMSNorm: gate
  #
  # last dim holds [ module_id, layer_idx, adapter_size (D aka R value) ]
  {
    name: "lora_config"
    data_type: TYPE_INT32
    dims: [ -1, 3 ]
    optional: true
    allow_ragged_batch: true
  }
]
output [
  {
    name: "output_ids"
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  },
  {
    name: "sequence_length"
    data_type: TYPE_INT32
    dims: [ -1 ]
  },
  {
    name: "cum_log_probs"
    data_type: TYPE_FP32
    dims: [ -1 ]
  },
  {
    name: "output_log_probs"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
  },
  {
    name: "context_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
  },
  {
    name: "generation_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1, -1 ]
  }
]
#instance_group [
#  {
#    count: 1
#    kind : KIND_CPU
#  }
#]
parameters: {
  key: "max_beam_width"
  value: {
    string_value: "1"
  }
}
parameters: {
  key: "FORCE_CPU_ONLY_INPUT_TENSORS"
  value: {
    string_value: "no"
  }
}
parameters: {
  key: "gpt_model_type"
  value: {
    string_value: "V1"
  }
}
parameters: {
  key: "gpt_model_path"
  value: {
    string_value: "/tensorrtllm_backend/triton_model_repo/tensorrt_llm/1"
  }
}
# parameters: {
#   key: "max_tokens_in_paged_kv_cache"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "max_attention_window_size"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "sink_token_length"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "batch_scheduler_policy"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "kv_cache_free_gpu_mem_fraction"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "kv_cache_host_memory_bytes"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "kv_cache_onboard_blocks"
#   value: {
#     string_value: ""
#   }
# }
# enable_trt_overlap is deprecated and doesn't have any effect on the runtime
# parameters: {
#   key: "enable_trt_overlap"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "exclude_input_in_output"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "cancellation_check_period_ms"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "stats_check_period_ms"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "iter_stats_max_iterations"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "request_stats_max_iterations"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "enable_kv_cache_reuse"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "normalize_log_probs"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "enable_chunked_context"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "gpu_device_ids"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "lora_cache_optimal_adapter_size"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "lora_cache_max_adapter_size"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "lora_cache_gpu_memory_fraction"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "lora_cache_host_memory_bytes"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "decoding_mode"
#   value: {
#     string_value: ""
#   }
# }
# parameters: {
#   key: "executor_worker_path"
#   value: {
#     string_value: "/opt/tritonserver/backends/tensorrtllm/trtllmExecutorWorker"
#   }
# }
# parameters: {
#   key: "medusa_choices"
#     value: {
#       string_value: ""
#   }
# }
# parameters: {
#   key: "gpu_weights_percent"
#     value: {
#       string_value: ""
#   }
# }
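
A note on shapes: since max_batch_size is greater than 0 here, the dims in this config describe a single request without the batch dimension, so input_ids ([ -1 ]) is really a [batch_size, sequence_length] tensor on the wire. That is presumably why the flat JSON array above fails to parse. A minimal sketch of a generate request with the batch dimension made explicit (token IDs and lengths are illustrative, and the length-1 inputs are left flat as in the original commands):

# hypothetical curl command sending one sequence of three tokens, i.e. input_ids shape [1, 3];
# input_lengths is set to the actual token count of the sequence
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [[0, 1, 2]], "input_lengths": [3], "request_output_len": [10]}'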

Who can help?

No response

Reproduction

# curl command that does not produce an error
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [0], "input_lengths": [10], "request_output_len": [10]}'

# curl command that produces the error
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [0,1,2], "input_lengths": [10], "request_output_len": [10]}'

Expected behavior

When providing an input of length n as a list, it should be processed without errors and generate the expected output.

Actual behavior

"error":"Unable to parse 'data': Shape does not match true shape of 'data' field"

Additional notes

TensorRT-LLM == 0.10.0
TensorRTLLM-backend == 0.10.0
Model: Phi3

ljm565 commented 1 week ago

This is solved. The input has to be nested one level deeper, as follows: [100, 21, 32, 5] -> [[100], [21], [32], [5]]
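
For reference, applying that reshaping to the failing curl command from the report gives something like this (a sketch; input_lengths and request_output_len are kept exactly as in the original request):

# curl command with input_ids nested one level deeper, as described above
curl -X POST localhost:8000/v2/models/tensorrt_llm/generate -d '{"input_ids": [[100], [21], [32], [5]], "input_lengths": [10], "request_output_len": [10]}'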