@Aniket-20 Could you please provide more information as mentioned in the template below.
Description: A clear and concise description of what the bug is.

Triton Information: What version of Triton are you using? Are you using the Triton container or did you build it yourself?

To Reproduce: Steps to reproduce the behavior. Describe the models (framework, inputs, outputs); ideally include the model configuration file (if using an ensemble, include the model configuration file for that as well).

Expected behavior: A clear and concise description of what you expected to happen.
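For reference, here is a minimal config.pbtxt sketch for a single Python-backend model such as the 'nllb' model in the logs below. The tensor names, data types, and shapes are assumptions for illustration only, since the actual configuration file was not posted with the issue:

```
name: "nllb"
backend: "python"
max_batch_size: 4

# Hypothetical tensors; the real names and dtypes come from the model's model.py.
input [
  {
    name: "INPUT_TEXT"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "OUTPUT_TEXT"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

instance_group [
  { kind: KIND_GPU }
]
```

With auto-complete-config enabled (as in the backend settings logged below), some of this can be inferred at load time, but attaching the actual file makes triage much faster.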
Closing due to lack of activity. Please re-open the issue if you would like to follow up on it.
tritonserver --model-repository=/mnt/Modelrepo

I0326 17:47:34.102845 262 pinned_memory_manager.cc:275] Pinned memory pool is created at '0x78cf8e000000' with size 268435456
I0326 17:47:34.103268 262 cuda_memory_manager.cc:107] CUDA memory pool is created on device 0 with size 67108864
I0326 17:47:34.115183 262 model_lifecycle.cc:461] loading: nllb:1
I0326 17:47:37.855465 262 python_be.cc:2362] TRITONBACKEND_ModelInstanceInitialize: nllb_0_0 (GPU device 0)
I0326 17:47:47.538250 262 model_lifecycle.cc:827] successfully loaded 'nllb'
I0326 17:47:47.541176 262 server.cc:606]
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0326 17:47:47.541401 262 server.cc:633]
+---------+-------------------------------------------------------+--------------------------------------------------------------------+
| Backend | Path                                                  | Config                                                             |
+---------+-------------------------------------------------------+--------------------------------------------------------------------+
| python  | /opt/tritonserver/backends/python/libtriton_python.so | {"cmdline":{"auto-complete-config":"true",                         |
|         |                                                       | "backend-directory":"/opt/tritonserver/backends",                  |
|         |                                                       | "min-compute-capability":"6.000000","default-max-batch-size":"4"}} |
+---------+-------------------------------------------------------+--------------------------------------------------------------------+
I0326 17:47:47.541822 262 server.cc:676]
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
| nllb  | 1       | READY  |
+-------+---------+--------+
I0326 17:47:47.627023 262 metrics.cc:877] Collecting metrics for GPU 0: NVIDIA GeForce GTX 1650
I0326 17:47:47.629461 262 metrics.cc:770] Collecting CPU metrics
I0326 17:47:47.630979 262 tritonserver.cc:2498]
+----------------------------------+-------------------------------------------------------------------------------+
| Option                           | Value                                                                         |
+----------------------------------+-------------------------------------------------------------------------------+
| server_id                        | triton                                                                        |
| server_version                   | 2.42.0                                                                        |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) |
|                                  | schedule_policy model_configuration system_shared_memory cuda_shared_memory  |
|                                  | binary_tensor_data parameters statistics trace logging                       |
| model_repository_path[0]         | /mnt/Modelrepo                                                                |
| model_control_mode               | MODE_NONE                                                                     |
| strict_model_config              | 0                                                                             |
| rate_limit                       | OFF                                                                           |
| pinned_memory_pool_byte_size     | 268435456                                                                     |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                      |
| min_supported_compute_capability | 6.0                                                                           |
| strict_readiness                 | 1                                                                             |
| exit_timeout                     | 30                                                                            |
| cache_enabled                    | 0                                                                             |
+----------------------------------+-------------------------------------------------------------------------------+
I0326 17:47:47.648302 262 grpc_server.cc:2519] Started GRPCInferenceService at 0.0.0.0:8001
I0326 17:47:47.649289 262 http_server.cc:4623] Started HTTPService at 0.0.0.0:8000
I0326 17:47:47.692081 262 http_server.cc:315] Started Metrics Service at 0.0.0.0:8002
W0326 17:47:48.633396 262 metrics.cc:631] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0326 17:47:49.638903 262 metrics.cc:631] Unable to get power limit for GPU 0. Status:Success, value:0.000000
W0326 17:47:50.640772 262 metrics.cc:631] Unable to get power limit for GPU 0. Status:Success, value:0.000000
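With the model READY and the HTTP endpoint listening on port 8000 (per the log above), the deployment can be smoke-tested end-to-end with a short tritonclient script. This is a minimal sketch; the INPUT_TEXT/OUTPUT_TEXT tensor names repeat the assumptions from the config sketch above and are not confirmed by the issue:

```python
# Readiness check plus one inference call against the server in the log above.
# Assumes: pip install "tritonclient[http]" numpy; tensor names are hypothetical.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Mirrors the READY status shown in the model table above.
print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("nllb"))

# Hypothetical string input for the translation model (BYTES is Triton's string type).
text = np.array([["Hello, world!"]], dtype=object)
inp = httpclient.InferInput("INPUT_TEXT", list(text.shape), "BYTES")
inp.set_data_from_numpy(text)

result = client.infer(model_name="nllb", inputs=[inp])
print(result.as_numpy("OUTPUT_TEXT"))
```

The Prometheus metrics, including the GPU power gauges the warnings above refer to, can be scraped from http://localhost:8002/metrics.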
Output for "nvidia-smi -q -d POWER":- ==============NVSMI LOG==============
Timestamp : Tue Mar 26 23:13:36 2024 Driver Version : 550.54.14 CUDA Version : 12.4
Attached GPUs : 1 GPU 00000000:01:00.0 GPU Power Readings Power Draw : 4.19 W Current Power Limit : 50.00 W Requested Power Limit : N/A Default Power Limit : 50.00 W Min Power Limit : 1.00 W Max Power Limit : 60.00 W Power Samples Duration : Not Found Number of Samples : Not Found Max : Not Found Min : Not Found Avg : Not Found GPU Memory Power Readings Power Draw : N/A Module Power Readings Power Draw : N/A Current Power Limit : N/A Requested Power Limit : N/A Default Power Limit : N/A Min Power Limit : N/A Max Power Limit : N/A
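Since nvidia-smi reports a valid 50.00 W power limit while the Triton log above repeatedly warns "Unable to get power limit for GPU 0", it is worth cross-checking what NVML itself returns for the device. A minimal sketch, assuming the nvidia-ml-py bindings are installed; this mirrors the nvidia-smi query and is not necessarily the code path Triton's metrics collector uses:

```python
# Query GPU 0's power limit and draw via NVML (the same library nvidia-smi uses).
# Assumes: pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Both calls return milliwatts; nvidia-smi above shows a 50.00 W limit and 4.19 W draw.
limit_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
draw_mw = pynvml.nvmlDeviceGetPowerUsage(handle)
print(f"power limit: {limit_mw / 1000.0:.2f} W")
print(f"power draw:  {draw_mw / 1000.0:.2f} W")

pynvml.nvmlShutdown()
```

If this prints the same 50 W limit that nvidia-smi shows, the value is available from the driver, and the warning points at how Triton's metrics collector reads it rather than at the GPU itself.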