triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

How to send binary data (audio file) in perf_analyzer? #6701

Open kzelias opened 9 months ago

kzelias commented 9 months ago

Description (same as issue https://github.com/triton-inference-server/server/issues/3206)

I have a Triton model that accepts a binary string, and I want to send a WAV file to it. If I send the file through the client, everything works; if I send it through perf_analyzer, it does not.

Triton Information

Triton: nvcr.io/nvidia/tritonserver:23.01-py3
Triton SDK for perf_analyzer: nvcr.io/nvidia/tritonserver:23.07-py3-sdk

To Reproduce

config.pbtxt

name: "conformer_full_model"
backend: "python"

input [
  {
    name: "IN"
    data_type: TYPE_STRING 
    dims: [1]
  }
]

output [
  {
    name: "OUT"
    data_type: TYPE_STRING
    dims: [1]
  }
]

instance_group [
  { 
    count: 1
    kind: KIND_GPU 
  }
]
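
For context, a rough sketch of what the Python backend's model.py behind this config might look like (an assumption for illustration; the real backend code isn't shown in this issue, and only the single-element BYTES handling matters here):

import numpy as np
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # IN is a single-element TYPE_STRING tensor carrying the raw WAV bytes.
            in_0 = pb_utils.get_input_tensor_by_name(request, "IN")
            wav_bytes = in_0.as_numpy()[0]
            # ... run the conformer model on wav_bytes (placeholder) ...
            transcript = b"transcript placeholder"
            out = pb_utils.Tensor("OUT", np.array([transcript], dtype=np.object_))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses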

If I try to send a WAV file directly:

perf_analyzer -m conformer_full_model --input-data data/ -u audio-triton.ap-triton.svc:8000
error: Failed to init manager inputs: provided data for input IN has 29 elements, expect 1

If I instead try to send the binary string of a WAV file, generated as follows:

with open("data/in.wav", "rb") as content_file:
    content = content_file.read()
with open('IN', 'w') as f:
    f.write(str(content))
# RIFFx\x15\x00\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00@\x1f...
perf_analyzer -m conformer_full_model --input-data data/ -u audio-triton.ap-triton.svc:8000

The string is forwarded, but on the server in_0.as_numpy()[0] looks like b'RIFFx\\x15\\x00\\x00WAVEfmt \\x10\\x00\\x00\\x00\\x01\\x00\\x01\\x00@\\x1f...' when it should look like b'RIFFx\x15\x00\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00@\x1f...'.
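
For illustration, the doubled backslashes come from calling str() on a bytes object, which yields the escaped repr as text rather than the raw bytes:

content = b"RIFFx\x15\x00\x00WAVEfmt "
as_text = str(content)  # "b'RIFFx\\x15\\x00\\x00WAVEfmt '": the repr, with literal backslashes
# Writing as_text to the IN file ships escaped text, which is why the server
# later sees doubled backslashes in in_0.as_numpy()[0] instead of the raw bytes.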

This client.py works:

import numpy as np
import tritonclient.grpc as grpcclient

triton_client = grpcclient.InferenceServerClient(url="audio-triton.ap-triton.svc:8001")
model_name = 'conformer_full_model'
inputs = []
outputs = []

# Read the raw WAV bytes and wrap them in a single-element BYTES tensor.
with open("data/in.wav", 'rb') as content_file:
    content = content_file.read()
input0_data = np.asarray(content)
inputs.append(grpcclient.InferInput('IN', [1], "BYTES"))
inputs[0].set_data_from_numpy(input0_data.reshape([1]))
outputs.append(grpcclient.InferRequestedOutput('OUT'))
results = triton_client.infer(
        model_name=model_name,
        inputs=inputs,
        outputs=outputs)
result = results.as_numpy('OUT')

oandreeva-nv commented 9 months ago

@matthewkotila, by any chance would you happen to know the solution for this issue?

dyastremsky commented 7 months ago

CC: @matthewkotila

lucidyan commented 6 months ago

I've experienced the same issue: I'm unable to profile my model with the native tools. @dyastremsky, any ideas where this could be answered?

dyastremsky commented 6 months ago

The team working on Tools that would know more (like @matthewkotila) is quite occupied at the moment, so there will be a delay in the response.

I am not familiar with the specific requirements of PA input files, especially in an audio context, but I did see this unofficial solution available that may be helpful in the meantime. Instructions for running these are here. This solution may also provide some direction, though note that it's for older versions of Triton.

lucidyan commented 6 months ago

Thanks for the information!

Looks like the library examples use JSON to send the WAV PCM data instead of the more efficient raw binary WAV format. Not ideal, since it requires changing the Triton model signature, but it could work as a temporary fix if there aren't better options right now.
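
For reference, a rough sketch of that JSON approach (assumptions for illustration: a 16-bit mono WAV, and a hypothetical model input named PCM that takes the sample array directly; the model in this issue takes raw bytes, so its signature would have to change):

import json
import wave

import numpy as np

# Pull the 16-bit PCM samples out of the WAV container.
with wave.open("data/in.wav", "rb") as wf:
    pcm = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

# perf_analyzer input-data JSON: each entry in "data" is one request's inputs.
with open("input_data.json", "w") as f:
    json.dump({"data": [{"PCM": pcm.tolist()}]}, f)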

dyastremsky commented 6 months ago

Thanks for responding. Some more information for this use case here as well: https://github.com/triton-inference-server/server/issues/3206

MatthieuToulemont commented 3 months ago

I have the same issue with images: I usually send them as encoded bytes to Triton, and I would like to be able to use the perf analyzer to benchmark my pipelines.

kzelias commented 2 months ago

There is a solution for a single file. Take the .wav file and rename it to the name of your input (IN for the config above), then put it in an otherwise empty folder data/. Find out the shape, or start with any value. Then try perf_analyzer -m {MODEL_NAME} -b 1 --input-data data/ --shape IN:{SHAPE} -u {podname.namespace.svc}:8000

After that you may get a shape error such as error: Failed to init manager inputs: provided data for input IN has 5255 elements, expect 29. In that case, just adjust the shape to match.

But I still don't understand how to get this to work on multiple files.

matthewkotila commented 1 month ago

@kzelias: ... But I still don't understand how to get this to work on multiple files.

Could you elaborate? If your model has multiple inputs that you want to supply binary data for, you should be able to include one file per input in the data/ directory, and Perf Analyzer will use each respective input binary file as the data for those inputs when sending inference requests to the model.

kzelias commented 1 month ago

@matthewkotila, It's not about multiple inputs. It's about multiple requests. With the --input-data parameter, I can only send 1 file per input from the data/ folder. But I want to send many different files iteratively.

Like here. https://docs.nvidia.com/deeplearning/triton-inference-server/archives/triton-inference-server-2280/user-guide/docs/user_guide/perf_analyzer.html#real-input-data

matthewkotila commented 1 month ago

Unfortunately, we don't support supplying binary files for more than one request, but you should be able to convert the binary data into a b64 representation and include it in an input-data JSON supplied to PA. That will allow you to supply more than one request's worth of input data.
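
For example, a script along these lines could build that JSON (a sketch assuming the config above, with the WAV files to cycle through collected under data/):

import base64
import glob
import json

# One entry per WAV file; Perf Analyzer iterates over the "data" array,
# using one entry per inference request and decoding the b64 client-side.
entries = []
for path in sorted(glob.glob("data/*.wav")):
    with open(path, "rb") as f:
        wav_b64 = base64.b64encode(f.read()).decode("ascii")
    entries.append({"IN": {"b64": wav_b64}})

with open("input_data.json", "w") as f:
    json.dump({"data": entries}, f)

Then point PA at the file instead of the directory: perf_analyzer -m conformer_full_model --input-data input_data.json -u audio-triton.ap-triton.svc:8000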

I agree, what you've requested would be good to have. I've noted the feature request, but I don't have a timeline for when we would be able to work on it or deliver it.

kzelias commented 1 month ago

@matthewkotila If I use b64 + JSON, I will need to change the logic of the Triton service, right? It would need to decode the b64.

MatthieuToulemont commented 1 month ago

If I use b64 + JSON, I will need to change the logic of the Triton service, right? It would need to decode the b64.

I am doing this with encoded images for benchmarking, but in production I send the bytes directly. The cost of decoding b64 is not that big, so the benchmark should not be too far off.

matthewkotila commented 1 month ago

@kzelias: @matthewkotila If I use b64 + JSON, I will need to change the logic of the Triton service, right? It would need to decode the b64.

The decoding of the b64 data happens inside Perf Analyzer (the client) before sending to the server, so you wouldn't have to change anything about how your Triton service is set up. It is, however, client-side computation time that could theoretically impact PA's ability to maintain concurrency or a desired request rate (though that's unlikely, as mentioned above), and it would be avoided entirely by the feature you requested.