Open kzelias opened 9 months ago
@matthewkotila, by any chance would you happen to know the solution for this issue?
CC: @matthewkotila
I experienced the same issue: I'm unable to profile my model with the native tools. @dyastremsky Any ideas where this could be answered?
The team working on Tools who would know more (like @matthewkotila) is quite occupied at the moment, so there will be a delay in response.
I am not familiar with the specific requirements of PA input files, especially in an audio context, but I did see an unofficial solution that may be helpful in the meantime. Instructions for running it are here. This solution may also provide some direction, though note that it targets older versions of Triton.
Thanks for the information!
Looks like the library examples use JSON to send WAV PCM data instead of the more efficient raw binary WAV format. Not ideal, since it requires changing the Triton model signatures, but it could work as a temporary fix if there are no better options right now.
Thanks for responding. Some more information for this use case here as well: https://github.com/triton-inference-server/server/issues/3206
I have the same issue for images, I usually send the images as encoded bytes to Triton and I would like to be able to use the perf analyzer to benchmark my pipelines.
There is a workaround for a single file. Take the .wav file and rename it to the name of your input, for example IN for the config above, then put it in an otherwise empty folder named data. Find out the shape, or start with any guess. Then try:

perf_analyzer -m {MODEL_NAME} -b 1 --input-data data/ --shape IN:{SHAPE} -u {podname.namespace.svc}:8000

After that you may get a shape error:

error: Failed to init manager inputs: provided data for input IN has 5255 elements, expect 29

You'll just have to change the shape to match. But I still don't understand how to get this to work with multiple files.
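Note that the element count in the shape error above (5255) looks like the file's size in bytes. A small sketch of how you could derive the value to pass to --shape, assuming a 1-D byte-per-element input like the IN input discussed here:

```python
import os

def input_shape_for_file(path):
    """Return the element count for a 1-D raw-bytes input.

    Assumption: perf_analyzer treats each byte of the binary file as
    one element, so the shape is simply the file size in bytes.
    """
    return os.path.getsize(path)

# Hypothetical usage: pass the result as --shape IN:<count>
# count = input_shape_for_file("data/IN")
```

This matches the error message's behavior: a 5255-byte file is reported as 5255 elements.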
@kzelias: ... But I still don't understand how to get this to work on multiple files.
Could you elaborate? If your model has multiple inputs that you want to supply binary data for, you should be able to include one file per input in the data/ directory, and Perf Analyzer will use each respective binary file as the data for that input when sending inference requests to the model.
@matthewkotila, It's not about multiple inputs. It's about multiple requests.
With the --input-data parameter, I can only send one file per input from the data/ folder.
But I want to send many different files iteratively.
Unfortunately, we don't support supplying binary files for more than one request, but you should be able to convert the binary data to its base64 (b64) representation and include that in an input-data JSON supplied to PA. That will allow you to supply more than one request's worth of input data.
I agree, what you've requested would be good to have. I've noted the feature request, but I don't have a timeline for when we would be able to work on it and deliver it.
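A sketch of the suggested conversion: pack each binary file into one request's worth of data in a perf_analyzer input-data JSON, using the "b64" content form. The input name IN comes from the config discussed earlier in this thread; the directory and file names are hypothetical, and you should check the perf_analyzer input-data documentation against your model's config.

```python
import base64
import json
from pathlib import Path

def build_input_json(wav_dir, input_name="IN", out_path="input_data.json"):
    """Pack every .wav file in wav_dir into a perf_analyzer input-data
    JSON, one request's worth of data per file."""
    data = []
    for wav in sorted(Path(wav_dir).glob("*.wav")):
        encoded = base64.b64encode(wav.read_bytes()).decode("ascii")
        # {"b64": ...} marks the content as base64-encoded binary data
        data.append({input_name: {"b64": encoded}})
    Path(out_path).write_text(json.dumps({"data": data}, indent=2))
    return len(data)

# Hypothetical usage:
#   build_input_json("wavs/")
#   perf_analyzer -m {MODEL_NAME} --input-data input_data.json ...
```

Each entry in "data" is a separate request, which is what makes this work for multiple files where the data/ directory approach only supports one.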
@matthewkotila If I use b64 + json, I will need to change the logic of the triton service, right? Would need to decode b64.
@kzelias: If I use b64 + json, I will need to change the logic of the triton service, right? Would need to decode b64.
I am doing this with encoded images for benchmarking, but in production I send the bytes directly. The cost of decoding b64 is not that big, so the benchmark should not be too far off.
@kzelias: @matthewkotila If I use b64 + json, I will need to change the logic of the triton service, right? Would need to decode b64.
The decoding of the b64 data happens inside Perf Analyzer (the client) before sending to the server, so you wouldn't have to change anything about how your Triton service is set up. But yes, it is client-side computation time that could theoretically impact PA's ability to maintain concurrency or a desired request rate (though that's unlikely, as the person above mentioned), and it could be eliminated by the feature you requested.
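A quick check of the point above: the base64 round-trip is lossless, so the server receives exactly the same bytes whether they came from a binary file or from a b64 field in the input JSON. The WAV header bytes here are taken from this issue's example.

```python
import base64

# Sample WAV header bytes from the issue report
original = b"RIFFx\x15\x00\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00@\x1f"

encoded = base64.b64encode(original)   # what goes into the input-data JSON
decoded = base64.b64decode(encoded)    # what the client sends on the wire

# The round-trip reproduces the payload byte for byte
assert decoded == original
```

No server-side change is needed, because the server never sees the base64 form.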
Description (same issue https://github.com/triton-inference-server/server/issues/3206)
I have a triton model that accepts a binary string. I want to send a wav file, if I do it through the client - everything works, if through the perf analyzer - it does not work.
Triton Information
Triton server: nvcr.io/nvidia/tritonserver:23.01-py3
Triton SDK (for perf_analyzer): nvcr.io/nvidia/tritonserver:23.07-py3-sdk
To Reproduce
config.pbtxt
If I try to send the wav file directly, it doesn't work. If I try to send a binary string of the wav file (generated as follows), the string is forwarded, but after in_0.as_numpy()[0] it looks like

b'RIFFx\\x15\\x00\\x00WAVEfmt \\x10\\x00\\x00\\x00\\x01\\x00\\x01\\x00@\\x1f...'

But it should look like this:

b'RIFFx\x15\x00\x00WAVEfmt \x10\x00\x00\x00\x01\x00\x01\x00@\x1f...'
Sending via client.py works correctly.
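The doubled backslashes in the received tensor suggest the text representation of the bytes object was sent instead of the raw bytes, so each non-printable byte arrived as a four-character escape sequence. A minimal, hypothetical illustration of that difference (not the reporter's actual client code):

```python
# Sample WAV header bytes from the issue report
header = b"RIFFx\x15\x00\x00WAVEfmt "

# Correct: send the actual bytes; \x15 is a single byte here
raw = header

# Incorrect: sending the repr of the bytes turns \x15 into the four
# literal characters backslash, x, 1, 5 -- which is what shows up as
# b'RIFFx\\x15...' on the server side
mangled = repr(header).encode("ascii")

assert b"\\x15" in mangled   # literal backslash-escape sequence present
assert b"\\x15" not in raw   # the raw payload contains no backslashes
```

If the client builds the input tensor from the raw bytes (e.g. a numpy object array holding the bytes object) rather than from a stringified form, the server-side value should match the expected output.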