triton-inference-server / model_navigator

Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.
https://triton-inference-server.github.io/model_navigator/
Apache License 2.0

Optimize API throwing "sh.CommandNotFound: tritonserver" #17

Closed macrosend-fukumoto closed 1 year ago

macrosend-fukumoto commented 1 year ago

Description

Hi, I am trying to use the "optimize" API but I am getting the following error.

root@:/home/ubuntu/model_navigator# model-navigator optimize bert.nav
2023-01-27 07:05:03 - INFO - model_navigator.log: optimize args:
2023-01-27 07:05:03 - INFO - model_navigator.log:       model_name = my_model
2023-01-27 07:05:03 - INFO - model_navigator.log:       model_path = /home/ubuntu/model_navigator/navigator_workspace/.input_data/input_model/torchscript-trace/model.pt
2023-01-27 07:05:03 - INFO - model_navigator.log:       model_format = torchscript
2023-01-27 07:05:03 - INFO - model_navigator.log:       model_version = 1
2023-01-27 07:05:03 - INFO - model_navigator.log:       target_formats = ['tf-trt', 'tf-savedmodel', 'onnx', 'trt', 'torchscript', 'torch-trt']
2023-01-27 07:05:03 - INFO - model_navigator.log:       onnx_opsets = [14]
2023-01-27 07:05:03 - INFO - model_navigator.log:       tensorrt_precisions = ['fp32', 'fp16']
2023-01-27 07:05:03 - INFO - model_navigator.log:       tensorrt_precisions_mode = hierarchy
2023-01-27 07:05:03 - INFO - model_navigator.log:       tensorrt_explicit_precision = False
2023-01-27 07:05:03 - INFO - model_navigator.log:       tensorrt_sparse_weights = False
2023-01-27 07:05:03 - INFO - model_navigator.log:       tensorrt_max_workspace_size = 4294967296
2023-01-27 07:05:03 - INFO - model_navigator.log:       atol = {'output__0': 0.23096442222595215}
2023-01-27 07:05:03 - INFO - model_navigator.log:       rtol = {'output__0': 0.09238576889038086}
2023-01-27 07:05:03 - INFO - model_navigator.log:       inputs = {'input__0': {'name': 'input__0', 'shape': [-1, 8], 'dtype': 'int64', 'optional': False}, 'input__1': {'name': 'input__1', 'shape': [-1, 8], 'dtype': 'int64', 'optional': False}}
2023-01-27 07:05:03 - INFO - model_navigator.log:       outputs = {'output__0': {'name': 'output__0', 'shape': [-1, 2], 'dtype': 'float32', 'optional': False}}
2023-01-27 07:05:03 - INFO - model_navigator.log:       min_shapes = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       opt_shapes = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       max_shapes = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       value_ranges = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       dtypes = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       engine_count_per_device = {}
2023-01-27 07:05:03 - INFO - model_navigator.log:       triton_backend_parameters = {}
2023-01-27 07:05:03 - INFO - model_navigator.log:       triton_launch_mode = local
2023-01-27 07:05:03 - INFO - model_navigator.log:       triton_server_path = tritonserver
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_max_batch_size = 128
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_max_concurrency = 1024
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_max_instance_count = 5
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_concurrency = []
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_batch_sizes = []
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_instance_counts = {}
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_max_batch_sizes = []
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_preferred_batch_sizes = []
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_backend_parameters = {}
2023-01-27 07:05:03 - INFO - model_navigator.log:       config_search_early_exit_enable = False
2023-01-27 07:05:03 - INFO - model_navigator.log:       top_n_configs = 3
2023-01-27 07:05:03 - INFO - model_navigator.log:       objectives = {'perf_throughput': 10}
2023-01-27 07:05:03 - INFO - model_navigator.log:       max_latency_ms = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       min_throughput = 0
2023-01-27 07:05:03 - INFO - model_navigator.log:       max_gpu_usage_mb = None
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_analyzer_timeout = 600
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_analyzer_path = perf_analyzer
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_measurement_mode = count_windows
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_measurement_request_count = 50
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_measurement_interval = 5000
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_measurement_shared_memory = none
2023-01-27 07:05:03 - INFO - model_navigator.log:       perf_measurement_output_shared_memory_size = 102400
2023-01-27 07:05:03 - INFO - model_navigator.log:       workspace_path = navigator_workspace
2023-01-27 07:05:03 - INFO - model_navigator.log:       override_workspace = False
2023-01-27 07:05:03 - INFO - model_navigator.log:       override_conversion_container = False
2023-01-27 07:05:03 - INFO - model_navigator.log:       framework_docker_image = nvcr.io/nvidia/pytorch:22.10-py3
2023-01-27 07:05:03 - INFO - model_navigator.log:       triton_docker_image = nvcr.io/nvidia/tritonserver:22.10-py3
2023-01-27 07:05:03 - INFO - model_navigator.log:       gpus = ('all',)
2023-01-27 07:05:03 - INFO - model_navigator.log:       verbose = False
2023-01-27 07:05:03 - INFO - model_navigator.utils.docker: Run docker container with image model_navigator_converter:22.10-py3; using workdir: /home/ubuntu/model_navigator
2023-01-27 07:05:06 - INFO - model_navigator.converter.transformers: Running command copy on /home/ubuntu/model_navigator/navigator_workspace/.input_data/input_model/torchscript-trace/model.pt
2023-01-27 07:05:06 - INFO - model_navigator.converter.transformers: Running command annotation on /home/ubuntu/model_navigator/navigator_workspace/converted/model.pt
2023-01-27 07:05:06 - INFO - model_navigator.converter.transformers: Saving annotations to /home/ubuntu/model_navigator/navigator_workspace/converted/model.pt.yaml
2023-01-27 07:05:06 - INFO - pyt.transformers: ts2onnx command started.
2023-01-27 07:05:17 - INFO - pyt.transformers: ts2onnx command succeed.
2023-01-27 07:05:18 - INFO - polygraphy.transformers: Polygraphy onnx2trt started.
2023-01-27 07:05:18 - WARNING - polygraphy.transformers: This conversion should be done on target GPU platform
2023-01-27 07:06:57 - INFO - polygraphy.transformers: onnx2trt command succeed.
2023-01-27 07:06:57 - INFO - polygraphy.transformers: Polygraphy onnx2trt succeeded.
2023-01-27 07:06:57 - INFO - polygraphy.transformers: Polygraphy onnx2trt started.
2023-01-27 07:06:57 - WARNING - polygraphy.transformers: This conversion should be done on target GPU platform
2023-01-27 07:25:40 - INFO - polygraphy.transformers: onnx2trt command succeed.
[I] Loading inference results from /home/ubuntu/model_navigator/navigator_workspace/converted/model-ts2onnx_op14-polygraphyonnx2trt_fp16_mh.plan.comparator_outputs.json
[I] Loading inference results from /home/ubuntu/model_navigator/navigator_workspace/converted/model-ts2onnx_op14-polygraphyonnx2trt_fp16_mh.plan.comparator_outputs.json
[I] Loading inference results from /home/ubuntu/model_navigator/navigator_workspace/converted/model-ts2onnx_op14-polygraphyonnx2trt_fp16_mh.plan.comparator_outputs.json
2023-01-27 07:25:40 - WARNING - polygraphy.transformers: Polygraphy onnx2trt conversion failed. Details can be found in logfile: /home/ubuntu/model_navigator/navigator_workspace/converted/model-ts2onnx_op14-polygraphyonnx2trt_fp16_mh.plan.log
2023-01-27 07:25:40 - INFO - model_navigator.converter.torch_tensorrt: model_navigator.converter.torch_tensorrt command started.
2023-01-27 07:25:40 - WARNING - model_navigator.converter.torch_tensorrt: This conversion should be done on target GPU platform
2023-01-27 07:26:10 - INFO - model_navigator.converter.torch_tensorrt: model_navigator.converter.torch_tensorrt command succeeded.
2023-01-27 07:26:10 - INFO - model_navigator.converter.torch_tensorrt: model_navigator.converter.torch_tensorrt command started.
2023-01-27 07:26:10 - WARNING - model_navigator.converter.torch_tensorrt: This conversion should be done on target GPU platform
2023-01-27 07:27:19 - INFO - model_navigator.converter.torch_tensorrt: model_navigator.converter.torch_tensorrt command succeeded.
2023-01-27 07:27:27 - INFO - optimize: Running Triton Model Configurator for converted models
2023-01-27 07:27:27 - INFO - optimize:  - my_model.ts2onnx_op14
2023-01-27 07:27:27 - INFO - optimize:  - my_model.ts2onnx_op14-polygraphyonnx2trt_fp32_mh
2023-01-27 07:27:27 - INFO - optimize:  - my_model
2023-01-27 07:27:27 - INFO - optimize:  - my_model.torch_tensorrt_module_precisionTensorRTPrecision.FP32
2023-01-27 07:27:27 - INFO - optimize:  - my_model.torch_tensorrt_module_precisionTensorRTPrecision.FP16
2023-01-27 07:27:27 - INFO - optimize: Running triton model configuration variants generation for my_model.ts2onnx_op14
2023-01-27 07:27:27 - INFO - optimize: Generated model variant my_model.ts2onnx_op14 for Triton evaluation.
Traceback (most recent call last):
  File "/opt/conda/bin/model-navigator", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/model_navigator/cli/main.py", line 53, in main
    cli(max_content_width=160)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.8/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/model_navigator/cli/optimize.py", line 235, in optimize_cmd
    config_results = _configure_models_on_triton(
  File "/opt/conda/lib/python3.8/site-packages/model_navigator/cli/optimize.py", line 445, in _configure_models_on_triton
    triton_server.start()
  File "/opt/conda/lib/python3.8/site-packages/model_navigator/triton/server/server_local.py", line 71, in start
    tritonserver_cmd = sh.Command(tritonserver_cmd)
  File "/opt/conda/lib/python3.8/site-packages/sh.py", line 1310, in __init__
    raise CommandNotFound(path)
sh.CommandNotFound: tritonserver

Steps To Reproduce

  1. Prepare a Dockerfile for Model Navigator:

    FROM nvcr.io/nvidia/pytorch:22.10-py3
    ENV DEBIAN_FRONTEND=noninteractive
    
    # WAR for PEP660
    RUN pip install --no-cache-dir --upgrade pip==21.2.4 setuptools==57.4.0
    RUN pip install janome fugashi ipadic
    RUN pip install --extra-index-url https://pypi.ngc.nvidia.com git+https://github.com/triton-inference-server/model_navigator.git@v0.3.7#egg=model-navigator[pyt,huggingface,cli] --upgrade
    
    ENTRYPOINT []
  2. Build the image:
    docker build -f Dockerfile -t model-navigator .
  3. Run the container:
    docker run -it --rm \
    --ipc=host \
    --gpus 1 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /home/ubuntu/triton/triton-inference-server/docs/examples/model_repository:/home/ubuntu/triton/triton-inference-server/docs/examples/model_repository \
    -v /home/ubuntu/model_navigator:/home/ubuntu/model_navigator \
    -w /home/ubuntu/model_navigator \
    --net host \
    --name model-navigator \
    model-navigator /bin/bash

    I didn't understand which directory I'm supposed to specify for "model-catalog", so I tried the following variants, and all of them produced the same error:

    • Skipping the volume mount entirely
    • -v /home/ubuntu/models:/home/ubuntu/models, where /home/ubuntu/models is an empty directory
    • -v /home/ubuntu/triton/triton-inference-server/docs/examples/model_repository:/home/ubuntu/triton/triton-inference-server/docs/examples/model_repository, the path to the example model_repository
  4. Use Model Navigator's nav.torch.export API to create a .nav file from a PyTorch BERT model (see the sketch after this list).
  5. Run optimize with the .nav file created previously:
    model-navigator optimize bert.nav

    Then I get the error I've mentioned above.
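
For context, step 4 might look like the following minimal sketch. This assumes the v0.3.x Python API; the model and dataloader setup are placeholders, and the exact nav.torch.export signature and package save call may differ between releases:

    import torch
    import model_navigator as nav

    # Placeholder model and samples matching the I/O spec from the log above:
    # input__0 / input__1 of shape [-1, 8] (int64) -> output__0 of shape [-1, 2].
    model = torch.jit.load("model.pt")  # traced BERT classifier (placeholder path)
    dataloader = [
        {
            "input__0": torch.randint(0, 30522, (1, 8), dtype=torch.int64),
            "input__1": torch.ones(1, 8, dtype=torch.int64),
        }
    ]

    # Export and package the model (argument names are an assumption for v0.3.x).
    pkg_desc = nav.torch.export(
        model=model,
        model_name="my_model",
        dataloader=dataloader,
    )
    pkg_desc.save("bert.nav")  # the file later passed to `model-navigator optimize`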

zhaozhiming37 commented 1 year ago

I have the same issue, any updates?

jkosek commented 1 year ago

The model-navigator optimize step needs to be executed in a Triton container. You would need to build a container based on the nvcr.io/nvidia/tritonserver:22.10-py3 image.

The error indicates that the tritonserver binary is missing from the container.
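
For reference, a minimal variant of the Dockerfile from the reproduction steps with the base image swapped accordingly (untested; whether the pyt/huggingface extras install cleanly on top of the Triton image is an assumption):

    FROM nvcr.io/nvidia/tritonserver:22.10-py3
    ENV DEBIAN_FRONTEND=noninteractive

    # Same pins as the original Dockerfile (WAR for PEP660)
    RUN pip install --no-cache-dir --upgrade pip==21.2.4 setuptools==57.4.0
    RUN pip install janome fugashi ipadic
    RUN pip install --extra-index-url https://pypi.ngc.nvidia.com git+https://github.com/triton-inference-server/model_navigator.git@v0.3.7#egg=model-navigator[pyt,huggingface,cli] --upgrade

    ENTRYPOINT []

The Triton base image ships the tritonserver binary, so triton_launch_mode = local can resolve triton_server_path = tritonserver.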

zhaozhiming37 commented 1 year ago

@jkosek I see, thanks!

jkosek commented 1 year ago

@zhaozhiming37 you may want to switch to the new flow introduced in version 0.4.0.
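
Roughly, the new flow replaces the CLI pipeline with a single Python call. A minimal sketch, assuming the 0.4.x-style entry point (names and signature may differ; check the docs for the installed version):

    import torch
    import model_navigator as nav

    model = torch.nn.Linear(8, 2)     # placeholder for the real model
    dataloader = [torch.randn(1, 8)]  # placeholder sample batches

    # One call covers export, conversion, correctness checks, and profiling
    # (assumed 0.4.x API; earlier releases used the separate CLI steps above).
    package = nav.torch.optimize(model=model, dataloader=dataloader)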

macrosend-fukumoto commented 1 year ago

@jkosek Excuse me for the late reply, and thank you for your suggestion. I will try out Model Navigator's new flow.