triton-inference-server / tensorrtllm_backend

The Triton TensorRT-LLM Backend

Question: model not found #393

Open geraldstanje opened 2 months ago

geraldstanje commented 2 months ago

Hi,

I'm trying to use TensorRT-LLM with the Triton server, but it cannot find my model. Any idea why? It looks like my file is invalid: /tensorrtllm_backend/triton_model_repo/tensorrt_llm_bls/config.pbtxt

Here is the config.pbtxt file:

cat /tensorrtllm_backend/triton_model_repo/tensorrt_llm/config.pbtxt
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
name: "tensorrt_llm"
backend: "tensorrtllm"
max_batch_size: ${triton_max_batch_size}
model_transaction_policy {
  decoupled: true
}
dynamic_batching {
    preferred_batch_size: [ ${triton_max_batch_size} ]
    max_queue_delay_microseconds: ${max_queue_delay_microseconds}
}
input [
  {
    name: "input_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]
    allow_ragged_batch: true
  },
  {
    name: "input_lengths"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
  },
  {
    name: "request_output_len"
    data_type: TYPE_INT32
    dims: [ 1 ]
  },
  {
    name: "draft_input_ids"
    data_type: TYPE_INT32
    dims: [ -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "draft_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "end_id"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "pad_id"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "stop_words_list"
    data_type: TYPE_INT32
    dims: [ 2, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "bad_words_list"
    data_type: TYPE_INT32
    dims: [ 2, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "embedding_bias"
    data_type: TYPE_FP32
    dims: [ -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "beam_width"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "temperature"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_k"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "runtime_top_p"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "len_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "repetition_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "min_length"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "presence_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "frequency_penalty"
    data_type: TYPE_FP32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "random_seed"
    data_type: TYPE_UINT64
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_log_probs"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_context_logits"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "return_generation_logits"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  {
    name: "stop"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    optional: true
  },
  {
    name: "streaming"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    optional: true
  },
  {
    name: "prompt_embedding_table"
    data_type: TYPE_FP16
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  {
    name: "prompt_vocab_size"
    data_type: TYPE_INT32
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  # The unique task ID for the given LoRA.
  # To perform inference with a specific LoRA for the first time, `lora_task_id`, `lora_weights` and `lora_config` must all be given.
  # The LoRA will be cached, so that subsequent requests for the same task only require `lora_task_id`.
  # If the cache is full, the oldest LoRA will be evicted to make space for new ones. An error is returned if `lora_task_id` is not cached.
  {
    name: "lora_task_id"
    data_type: TYPE_UINT64
    dims: [ 1 ]
    reshape: { shape: [ ] }
    optional: true
  },
  # Weights for a LoRA adapter, shape [ num_lora_modules_layers, D x Hi + Ho x D ],
  # where the last dimension holds the in / out adapter weights for the associated module (e.g. attn_qkv) and model layer.
  # Each of the in / out tensors is first flattened and then concatenated together in the format above.
  # D = adapter_size (R value), Hi = hidden_size_in, Ho = hidden_size_out.
  {
    name: "lora_weights"
    data_type: TYPE_FP16
    dims: [ -1, -1 ]
    optional: true
    allow_ragged_batch: true
  },
  # Module identifier (same size as the first dimension of lora_weights).
  # See LoraModule::ModuleType for the module id mapping
  #
  # "attn_qkv": 0     # combined qkv adapter
  # "attn_q": 1       # q adapter
  # "attn_k": 2       # k adapter
  # "attn_v": 3       # v adapter
  # "attn_dense": 4   # adapter for the dense layer in attention
  # "mlp_h_to_4h": 5  # for llama2 adapter for gated mlp layer after attention / RMSNorm: up projection
  # "mlp_4h_to_h": 6  # for llama2 adapter for gated mlp layer after attention / RMSNorm: down projection
  # "mlp_gate": 7     # for llama2 adapter for gated mlp layer after attention / RMSNorm: gate
  #
  # last dim holds [ module_id, layer_idx, adapter_size (D aka R value) ]
  {
    name: "lora_config"
    data_type: TYPE_INT32
    dims: [ -1, 3 ]
    optional: true
    allow_ragged_batch: true
  }
]
output [
  {
    name: "output_ids"
    data_type: TYPE_INT32
    dims: [ -1, -1 ]
  },
  {
    name: "sequence_length"
    data_type: TYPE_INT32
    dims: [ -1 ]
  },
  {
    name: "cum_log_probs"
    data_type: TYPE_FP32
    dims: [ -1 ]
  },
  {
    name: "output_log_probs"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
  },
  {
    name: "context_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]
  },
  {
    name: "generation_logits"
    data_type: TYPE_FP32
    dims: [ -1, -1, -1 ]
  }
]
instance_group [
  {
    count: 1
    kind : KIND_CPU
  }
]
parameters: {
  key: "max_beam_width"
  value: {
    string_value: "${max_beam_width}"
  }
}
parameters: {
  key: "FORCE_CPU_ONLY_INPUT_TENSORS"
  value: {
    string_value: "no"
  }
}
parameters: {
  key: "gpt_model_type"
  value: {
    string_value: "${batching_strategy}"
  }
}
parameters: {
  key: "gpt_model_path"
  value: {
    string_value: "/triton_model_repo/tensorrt_llm/1"
  }
}
parameters: {
  key: "max_tokens_in_paged_kv_cache"
  value: {
    string_value: ""
  }
}
parameters: {
  key: "max_attention_window_size"
  value: {
    string_value: "${max_attention_window_size}"
  }
}
parameters: {
  key: "batch_scheduler_policy"
  value: {
    string_value: "guaranteed_completion"
  }
}
parameters: {
  key: "kv_cache_free_gpu_mem_fraction"
  value: {
    string_value: "0.2"
  }
}
parameters: {
  key: "enable_trt_overlap"
  value: {
    string_value: "${enable_trt_overlap}"
  }
}
parameters: {
  key: "exclude_input_in_output"
  value: {
    string_value: "${exclude_input_in_output}"
  }
}
parameters: {
  key: "enable_kv_cache_reuse"
  value: {
    string_value: "${enable_kv_cache_reuse}"
  }
}
parameters: {
  key: "normalize_log_probs"
  value: {
    string_value: "${normalize_log_probs}"
  }
}
parameters: {
  key: "enable_chunked_context"
  value: {
    string_value: "${enable_chunked_context}"
  }
}
parameters: {
  key: "gpu_device_ids"
  value: {
    string_value: "${gpu_device_ids}"
  }
}
parameters: {
  key: "lora_cache_optimal_adapter_size"
  value: {
    string_value: "${lora_cache_optimal_adapter_size}"
  }
}
parameters: {
  key: "lora_cache_max_adapter_size"
  value: {
    string_value: "${lora_cache_max_adapter_size}"
  }
}
parameters: {
  key: "lora_cache_gpu_memory_fraction"
  value: {
    string_value: "${lora_cache_gpu_memory_fraction}"
  }
}
parameters: {
  key: "lora_cache_host_memory_bytes"
  value: {
    string_value: "${lora_cache_host_memory_bytes}"
  }
}
parameters: {
  key: "decoding_mode"
  value: {
    string_value: "${decoding_mode}"
  }
}
parameters: {
  key: "worker_path"
  value: {
    string_value: "/opt/tritonserver/backends/tensorrtllm/triton_tensorrtllm_worker"
  }
}
parameters: {
  key: "medusa_choices"
  value: {
    string_value: "${medusa_choices}"
  }
}

In the tensorrt-llm repo, I run:

# Build TRT-LLM engine:
python3 convert_checkpoint.py --model_dir /model_input/Llama-2-7b-hf/ \
                            --output_dir /model_output/llama/7B/trt_ckpt/fp16/4-gpu/ \
                            --dtype float16 \
                            --tp_size 4 

trtllm-build --checkpoint_dir /model_output/llama/7B/trt_ckpt/fp16/4-gpu/ \
            --output_dir /model_output/7B/trt_engines/fp16/4-gpu/ \
            --gemm_plugin float16

Then, in tensorrtllm_backend, I run:

# check if trt files are there
ls ../model_output/llama/7B/trt_engines/fp16/4-gpu
config.json  rank0.engine  rank1.engine  rank2.engine  rank3.engine

# Create the model repository that will be used by the Triton server
cd tensorrtllm_backend
mkdir triton_model_repo

# Copy the example models to the model repository
cp -r all_models/inflight_batcher_llm/* triton_model_repo/

# Copy the TRT engine to triton_model_repo/tensorrt_llm/1/
cp ../model_output/llama/7B/trt_engines/fp16/4-gpu/* triton_model_repo/tensorrt_llm/1

# modify config for the model
python3 tools/fill_template.py --in_place \
      triton_model_repo/tensorrt_llm/config.pbtxt \
      decoupled_mode:true,engine_dir:/triton_model_repo/tensorrt_llm/1,\
max_tokens_in_paged_kv_cache:,batch_scheduler_policy:guaranteed_completion,kv_cache_free_gpu_mem_fraction:0.2,\
max_num_sequences:4

# modify config for the preprocessing component
python3 tools/fill_template.py --in_place \
    triton_model_repo/preprocessing/config.pbtxt \
    tokenizer_type:llama,tokenizer_dir:meta-llama/Llama-2-7b-hf

# modify config for the postprocessing component
python3 tools/fill_template.py --in_place \
    triton_model_repo/postprocessing/config.pbtxt \
    tokenizer_type:llama,tokenizer_dir:meta-llama/Llama-2-7b-hf

# Prepare The Triton Server
# Option 1. Launch Triton server within Triton NGC container
sudo docker run -it --rm --gpus all --network host \
--shm-size=2g --ulimit memlock=-1 --ulimit stack=67108864 \
-v /home/ubuntu/tensorrtllm_backend:/tensorrtllm_backend \
nvcr.io/nvidia/tritonserver:23.10-trtllm-python-py3 bash
cd /tensorrtllm_backend

# Next, in the Docker container, log in to the Hugging Face Hub:
huggingface-cli login --token xxx

# Install python dependencies
pip install sentencepiece protobuf

# Launch Server
python3 scripts/launch_triton_server.py --world_size=4 --model_repo=/tensorrtllm_backend/triton_model_repo
root@xxx:/tensorrtllm_backend#
I0407 20:46:45.537296 113 pinned_memory_manager.cc:241] Pinned memory pool is created at '0x7f05b6000000' with size 268435456
I0407 20:46:45.539035 114 pinned_memory_manager.cc:241] Pinned memory pool is created at '0x7a89d0000000' with size 268435456
I0407 20:46:45.539185 112 pinned_memory_manager.cc:241] Pinned memory pool is created at '0x7c18f0000000' with size 268435456
I0407 20:46:45.539364 111 pinned_memory_manager.cc:241] Pinned memory pool is created at '0x733800000000' with size 268435456
I0407 20:46:45.608010 113 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0407 20:46:45.608030 113 cuda_memory_manager.cc:107] CUDA memory pool is created on device 1 with size 67108864
I0407 20:46:45.608035 113 cuda_memory_manager.cc:107] CUDA memory pool is created on device 2 with size 67108864
I0407 20:46:45.608040 113 cuda_memory_manager.cc:107] CUDA memory pool is created on device 3 with size 67108864
I0407 20:46:45.611534 114 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0407 20:46:45.611553 114 cuda_memory_manager.cc:107] CUDA memory pool is created on device 1 with size 67108864
I0407 20:46:45.611559 114 cuda_memory_manager.cc:107] CUDA memory pool is created on device 2 with size 67108864
I0407 20:46:45.611564 114 cuda_memory_manager.cc:107] CUDA memory pool is created on device 3 with size 67108864
I0407 20:46:45.613995 112 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0407 20:46:45.614014 112 cuda_memory_manager.cc:107] CUDA memory pool is created on device 1 with size 67108864
I0407 20:46:45.614020 112 cuda_memory_manager.cc:107] CUDA memory pool is created on device 2 with size 67108864
I0407 20:46:45.614024 112 cuda_memory_manager.cc:107] CUDA memory pool is created on device 3 with size 67108864
I0407 20:46:45.620874 111 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0407 20:46:45.620893 111 cuda_memory_manager.cc:107] CUDA memory pool is created on device 1 with size 67108864
I0407 20:46:45.620899 111 cuda_memory_manager.cc:107] CUDA memory pool is created on device 2 with size 67108864
I0407 20:46:45.620904 111 cuda_memory_manager.cc:107] CUDA memory pool is created on device 3 with size 67108864
W0407 20:46:46.970163 114 server.cc:238] failed to enable peer access for some device pairs
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.971359 114 model_repository_manager.cc:1309] Poll failed for model directory 'tensorrt_llm': failed to read text proto from /tensorrtllm_backend/triton_model_repo/tensorrt_llm/config.pbtxt
I0407 20:46:46.971399 114 server.cc:592] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0407 20:46:46.971420 114 server.cc:619] 
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+
I0407 20:46:46.971431 114 server.cc:662] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
W0407 20:46:46.984858 111 server.cc:238] failed to enable peer access for some device pairs
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.986017 111 model_repository_manager.cc:1309] Poll failed for model directory 'ensemble': failed to read text proto from /tensorrtllm_backend/triton_model_repo/ensemble/config.pbtxt
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.986187 111 model_repository_manager.cc:1309] Poll failed for model directory 'postprocessing': failed to read text proto from /tensorrtllm_backend/triton_model_repo/postprocessing/config.pbtxt
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.986315 111 model_repository_manager.cc:1309] Poll failed for model directory 'preprocessing': failed to read text proto from /tensorrtllm_backend/triton_model_repo/preprocessing/config.pbtxt
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.986527 111 model_repository_manager.cc:1309] Poll failed for model directory 'tensorrt_llm': failed to read text proto from /tensorrtllm_backend/triton_model_repo/tensorrt_llm/config.pbtxt
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:46.986752 111 model_repository_manager.cc:1309] Poll failed for model directory 'tensorrt_llm_bls': failed to read text proto from /tensorrtllm_backend/triton_model_repo/tensorrt_llm_bls/config.pbtxt
I0407 20:46:46.986787 111 server.cc:592] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0407 20:46:46.986800 111 server.cc:619] 
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+
I0407 20:46:46.986812 111 server.cc:662] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
W0407 20:46:47.000293 112 server.cc:238] failed to enable peer access for some device pairs
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:47.001457 112 model_repository_manager.cc:1309] Poll failed for model directory 'tensorrt_llm': failed to read text proto from /tensorrtllm_backend/triton_model_repo/tensorrt_llm/config.pbtxt
I0407 20:46:47.001497 112 server.cc:592] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0407 20:46:47.001519 112 server.cc:619] 
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+
I0407 20:46:47.001530 112 server.cc:662] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
W0407 20:46:47.015524 113 server.cc:238] failed to enable peer access for some device pairs
[libprotobuf ERROR /tmp/tritonbuild/tritonserver/build/_deps/repo-third-party-build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:335] Error parsing text-format inference.ModelConfig: 29:17: Expected integer, got: $
E0407 20:46:47.016651 113 model_repository_manager.cc:1309] Poll failed for model directory 'tensorrt_llm': failed to read text proto from /tensorrtllm_backend/triton_model_repo/tensorrt_llm/config.pbtxt
I0407 20:46:47.016690 113 server.cc:592] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0407 20:46:47.016711 113 server.cc:619] 
+---------+------+--------+
| Backend | Path | Config |
+---------+------+--------+
+---------+------+--------+
I0407 20:46:47.016723 113 server.cc:662] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
I0407 20:46:47.118766 114 metrics.cc:817] Collecting metrics for GPU 0: NVIDIA A10G
I0407 20:46:47.118806 114 metrics.cc:817] Collecting metrics for GPU 1: NVIDIA A10G
I0407 20:46:47.118814 114 metrics.cc:817] Collecting metrics for GPU 2: NVIDIA A10G
I0407 20:46:47.118822 114 metrics.cc:817] Collecting metrics for GPU 3: NVIDIA A10G
I0407 20:46:47.119116 114 metrics.cc:710] Collecting CPU metrics
I0407 20:46:47.119338 114 tritonserver.cc:2458] 
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                  |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                 |
| server_version                   | 2.39.0                                                                                                 |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_con |
|                                  | figuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logg |
|                                  | ing                                                                                                    |
| model_repository_path[0]         | /tensorrtllm_backend/triton_model_repo                                                                 |
| model_control_mode               | MODE_EXPLICIT                                                                                          |
| startup_models_0                 | tensorrt_llm                                                                                           |
| strict_model_config              | 1                                                                                                      |
| rate_limit                       | OFF                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                    |
| strict_readiness                 | 1                                                                                                      |
| exit_timeout                     | 30                                                                                                     |
| cache_enabled                    | 0                                                                                                      |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
I0407 20:46:47.119362 114 server.cc:293] Waiting for in-flight requests to complete.
I0407 20:46:47.119368 114 server.cc:309] Timeout 30: Found 0 model versions that have in-flight inferences
I0407 20:46:47.119373 114 server.cc:324] All models are stopped, unloading models
I0407 20:46:47.119380 114 server.cc:331] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
I0407 20:46:47.171360 111 metrics.cc:817] Collecting metrics for GPU 0: NVIDIA A10G
I0407 20:46:47.171399 111 metrics.cc:817] Collecting metrics for GPU 1: NVIDIA A10G
I0407 20:46:47.171407 111 metrics.cc:817] Collecting metrics for GPU 2: NVIDIA A10G
I0407 20:46:47.171415 111 metrics.cc:817] Collecting metrics for GPU 3: NVIDIA A10G
I0407 20:46:47.171710 111 metrics.cc:710] Collecting CPU metrics
I0407 20:46:47.171898 111 tritonserver.cc:2458] 
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                  |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                 |
| server_version                   | 2.39.0                                                                                                 |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_con |
|                                  | figuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logg |
|                                  | ing                                                                                                    |
| model_repository_path[0]         | /tensorrtllm_backend/triton_model_repo                                                                 |
| model_control_mode               | MODE_NONE                                                                                              |
| strict_model_config              | 1                                                                                                      |
| rate_limit                       | OFF                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                    |
| strict_readiness                 | 1                                                                                                      |
| exit_timeout                     | 30                                                                                                     |
| cache_enabled                    | 0                                                                                                      |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
I0407 20:46:47.171919 111 server.cc:293] Waiting for in-flight requests to complete.
I0407 20:46:47.171926 111 server.cc:309] Timeout 30: Found 0 model versions that have in-flight inferences
I0407 20:46:47.171932 111 server.cc:324] All models are stopped, unloading models
I0407 20:46:47.171938 111 server.cc:331] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
I0407 20:46:47.178957 112 metrics.cc:817] Collecting metrics for GPU 0: NVIDIA A10G
I0407 20:46:47.178997 112 metrics.cc:817] Collecting metrics for GPU 1: NVIDIA A10G
I0407 20:46:47.179006 112 metrics.cc:817] Collecting metrics for GPU 2: NVIDIA A10G
I0407 20:46:47.179014 112 metrics.cc:817] Collecting metrics for GPU 3: NVIDIA A10G
I0407 20:46:47.179341 112 metrics.cc:710] Collecting CPU metrics
I0407 20:46:47.179542 112 tritonserver.cc:2458] 
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                  |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                 |
| server_version                   | 2.39.0                                                                                                 |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_con |
|                                  | figuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logg |
|                                  | ing                                                                                                    |
| model_repository_path[0]         | /tensorrtllm_backend/triton_model_repo                                                                 |
| model_control_mode               | MODE_EXPLICIT                                                                                          |
| startup_models_0                 | tensorrt_llm                                                                                           |
| strict_model_config              | 1                                                                                                      |
| rate_limit                       | OFF                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                    |
| strict_readiness                 | 1                                                                                                      |
| exit_timeout                     | 30                                                                                                     |
| cache_enabled                    | 0                                                                                                      |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
I0407 20:46:47.179560 112 server.cc:293] Waiting for in-flight requests to complete.
I0407 20:46:47.179566 112 server.cc:309] Timeout 30: Found 0 model versions that have in-flight inferences
I0407 20:46:47.179571 112 server.cc:324] All models are stopped, unloading models
I0407 20:46:47.179576 112 server.cc:331] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
error: creating server: Internal - failed to load all models
I0407 20:46:47.243237 113 metrics.cc:817] Collecting metrics for GPU 0: NVIDIA A10G
I0407 20:46:47.243272 113 metrics.cc:817] Collecting metrics for GPU 1: NVIDIA A10G
I0407 20:46:47.243280 113 metrics.cc:817] Collecting metrics for GPU 2: NVIDIA A10G
I0407 20:46:47.243288 113 metrics.cc:817] Collecting metrics for GPU 3: NVIDIA A10G
I0407 20:46:47.243567 113 metrics.cc:710] Collecting CPU metrics
I0407 20:46:47.243746 113 tritonserver.cc:2458] 
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                  |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                 |
| server_version                   | 2.39.0                                                                                                 |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_con |
|                                  | figuration system_shared_memory cuda_shared_memory binary_tensor_data parameters statistics trace logg |
|                                  | ing                                                                                                    |
| model_repository_path[0]         | /tensorrtllm_backend/triton_model_repo                                                                 |
| model_control_mode               | MODE_EXPLICIT                                                                                          |
| startup_models_0                 | tensorrt_llm                                                                                           |
| strict_model_config              | 1                                                                                                      |
| rate_limit                       | OFF                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                              |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                               |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                               |
| min_supported_compute_capability | 6.0                                                                                                    |
| strict_readiness                 | 1                                                                                                      |
| exit_timeout                     | 30                                                                                                     |
| cache_enabled                    | 0                                                                                                      |
+----------------------------------+--------------------------------------------------------------------------------------------------------+
I0407 20:46:47.243764 113 server.cc:293] Waiting for in-flight requests to complete.
I0407 20:46:47.243770 113 server.cc:309] Timeout 30: Found 0 model versions that have in-flight inferences
I0407 20:46:47.243776 113 server.cc:324] All models are stopped, unloading models
I0407 20:46:47.243781 113 server.cc:331] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
error: creating server: Internal - failed to load all models
error: creating server: Internal - failed to load all models
error: creating server: Internal - failed to load all models
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
  Process name: [[13251,1],3]
  Exit code:    1
--------------------------------------------------------------------------
byshiue commented 2 months ago

You need to set up some runtime parameters such as triton_max_batch_size, max_beam_width, ... (the parameters that appear as ${xxx}). Here is the documentation: https://github.com/triton-inference-server/tensorrtllm_backend/blob/main/docs/gemma.md#end-to-end-workflow-to-run-sp-model.
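
For reference, a minimal sketch of what filling those placeholders could look like for the tensorrt_llm config pasted above, reusing the tools/fill_template.py invocation style already shown in this issue. The parameter names are taken from the ${...} keys in the pasted file; the concrete values (batch size 8, beam width 1, inflight_fused_batching, empty strings for the optional keys) are illustrative assumptions rather than recommendations, so adjust them to your engine and hardware. The point is that every ${...} key must be replaced, otherwise Triton's text-proto parser hits the literal "$" and reports the "Expected integer, got: $" error seen in the log.

# Hypothetical example values -- adjust to your deployment.
# Keys left empty (e.g. max_attention_window_size:) become empty string_value
# entries, which should let the backend fall back to its defaults.
python3 tools/fill_template.py --in_place \
      triton_model_repo/tensorrt_llm/config.pbtxt \
      triton_max_batch_size:8,max_queue_delay_microseconds:0,max_beam_width:1,\
batching_strategy:inflight_fused_batching,max_attention_window_size:,\
enable_trt_overlap:false,exclude_input_in_output:false,enable_kv_cache_reuse:false,\
normalize_log_probs:true,enable_chunked_context:false,gpu_device_ids:,\
lora_cache_optimal_adapter_size:,lora_cache_max_adapter_size:,\
lora_cache_gpu_memory_fraction:,lora_cache_host_memory_bytes:,\
decoding_mode:,medusa_choices:

# Sanity check: no unfilled template variables should remain anywhere in the repo
grep -rnF '${' triton_model_repo/

Note that the log shows the same parse error for the ensemble, preprocessing, postprocessing and tensorrt_llm_bls directories as well, so their config.pbtxt files presumably still contain unfilled ${...} keys too (the commands in the question only set tokenizer_type and tokenizer_dir for pre/postprocessing) and need the same treatment with their own parameter names.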