Open AshwinAmbal opened 1 month ago
On quick study of the stack trace, it could also be related to a low context timeout (10 milliseconds) causing this, seen above as Context::IsCancelled. Will run some experiments to confirm the behavior.
I've confirmed that the issue was request cancellation. In production, we have varying timeouts per inference request. One particular set of requests had timeouts in the range of 1-4 ms for end-to-end inference. This caused the segmentation fault, and increasing the timeout resolved the issue.
I was also reading up on this, and it seems the request cancellation feature is still under development and is currently only supported for gRPC Python, as seen here and here. We, by contrast, use gRPC for Go.
@tanmayv25 this may be of interest to you as I see you have already worked on this part of the code here.
cc: @Tabrizian @dyastremsky @rmccorm4 as well
Let me know if you need any more details here.
For now, we are working around this by wrapping the inference call in a goroutine with a timeout and not setting a timeout on the inference request itself.
Thanks
Thanks @AshwinAmbal for digging into this and sharing results of your experimentation.
So, if the timeout value is very small, then you run into this segfault? You are not sending request cancellation explicitly from the client, right? Would you mind sharing your model execution time and the rest of the latency breakdown?
Can you update the title of this issue to reflect the current issue?
Hi @tanmayv25,
> So, if the timeout value is very small then you were running into this segfault? You are not sending request cancellation explicitly from the client right?
Yes. Low context timeouts sent from the client over gRPC cause the segfault. We are sending request cancellation by setting the context timeout as done here, except that at times our context timeouts can be as low as 1-4 ms, which causes the segfault.
Hence, to work around this issue, we have created goroutines that send the inference request to Triton with a high context timeout, while the goroutine call itself is bounded by the timeout we expect for the request. If that timeout (1-4 ms) is reached, the main routine returns without waiting for the goroutine to finish, while the goroutine itself completes only after the inference response is received from Triton.
For example, a sketch of the main routine (type and variable names simplified) is as follows:

```go
func getPrediction(client GRPCInferenceServiceClient, req *ModelInferRequest) (*ModelInferResponse, error) {
	// Buffered channel so the goroutine can deliver its result and exit
	// even after the caller has already timed out and returned.
	resChan := make(chan *ModelInferResponse, 1)
	go func() {
		// <===== high context timeout, or none, on the inference call itself
		res, _ := client.ModelInfer(context.Background(), req)
		resChan <- res
	}()
	t := time.NewTimer(timeout) // <===== goroutine timeout of 1-4 ms
	defer t.Stop()
	select {
	case r := <-resChan:
		// process result of model inference and return
		return r, nil
	case <-t.C:
		return nil, errors.New("triton inference time out")
	}
}
```
Please also note that we are using Triton with CPU-only inference at this point.
> Would you mind sharing your model execution time and rest of the latency breakdown?
The average inference request duration for the model is 1.04 ms, as reported by Triton (`nv_inference_request_duration_us / nv_inference_count`).
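The way that average is derived from the two counters can be sketched as follows; the counter values in `main` are illustrative, not our production numbers:

```go
package main

import "fmt"

// avgRequestMs derives the average request latency in milliseconds from
// Triton's cumulative Prometheus counters: total request time in
// microseconds divided by request count, converted to ms.
func avgRequestMs(durationUs, count float64) float64 {
	if count == 0 {
		return 0 // no requests recorded yet
	}
	return durationUs / count / 1000.0
}

func main() {
	// Illustrative counter values: 5.2e6 us total across 5000 requests.
	fmt.Printf("%.2f ms\n", avgRequestMs(5.2e6, 5000)) // prints: 1.04 ms
}
```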
The E2E Inference Request Duration reported by the client for this particular model [including network RTT] is as follows:
Avg: 1.81 ms
p50: 1.74 ms
p95: 2.77 ms
p99: 3.13 ms
> Can you update the title of this issue to reflect the current issue?
I believe the issue is the request cancellation timeout being too low. I will update the title accordingly.
Let me know if you need any more details.
Thanks
Hi @AshwinAmbal, can you reproduce this issue with the 24.06 release? I think this change from @oandreeva-nv may possibly help the issue you're observing: https://github.com/triton-inference-server/server/pull/7325.
@rmccorm4 thanks for the response. We'll have a look and publish our findings here.
Hi @rmccorm4, I reproduced the issue with the 24.06 release and the new tfdf library 1.9.1. Triton crashed with Signal (11) and Signal (6) when the timeout was 1 ms; it functioned well with larger timeouts. These are the logs:
{"log":"Signal (6) received.","stream":"stderr","time":"2024-07-12T21:01:04.293670304Z"}
{"log":"Signal (11) received.","stream":"stderr","time":"2024-07-12T20:56:41.876752986Z"}
@AshwinAmbal, @Estevefact, if possible, could you please share the issue reproduction model (or a model generation script), config.pbtxt, and client? This will help us quickly reproduce and investigate the issue. Thank you.
@pskiran1 I've attached the client code and config.pbtxt in the issue description already. I have also shared the latency figures with @tanmayv25 above.
About the model: we believe the issue isn't model dependent and can be reproduced with any ML model hosted in Triton when a low context cancellation timeout is used. Unfortunately, due to privacy reasons, we are not able to share the trained model artifact at this time, and it may be a lengthy process to get approval on our end.
The only difference from a normal client is that we sometimes set the context cancellation timeout as low as 1 ms, and we notice the segfault crash when this is done.
Let me know if you need any more details to help reproduce this on your end. I've also attached the debug trace from GDB in the issue description for your perusal.
Hi @AshwinAmbal,
I have created a sample issue reproduction model and client (Python gRPC) using the information you provided. When I executed the client with a very low client_timeout, it worked fine as expected, and I was unable to reproduce the segfault. Please let us know if we are missing something here.
@pskiran1 I believe there is a difference between the Python gRPC client and the Go gRPC client. We use the Go client with gRPC. Can you try reproducing the issue with the Go code we provided?
It might also be worth running the Triton server on a remote host rather than localhost, as there may not be much network latency when running locally.
@AshwinAmbal, since the segfault is happening on the server side, we should ideally be able to reproduce the issue using the Python client as well. However, I also attempted to reproduce the error using a Go client and was unable to. I am currently investigating how we can reproduce this issue and analyzing the backtrace logs that were shared. Please feel free to let us know if we are missing something. If you can provide a minimal issue reproduction, that would be greatly appreciated. Also, I am using a remote server, with a 4000 ms timeout on the client side.
FLAGS: {global_dnn 10.117.3.165:8001}
2024/07/22 16:00:03 Error processing InferRequest: rpc error: code = DeadlineExceeded desc = context deadline exceeded
exit status 1
Note: In the Go client, I have used only float32 inputs.
CC: @tanmayv25
@pskiran1 I'll look into the code you shared and see if I can find something. But at first glance, your timeout is not small enough (4000 ms). Can you try setting a lower timeout, between 1 ms and 4 ms, and hitting Triton with multiple similar requests at the same time?
@AshwinAmbal, sorry for the typo; I was trying with a 4 ms timeout and also with multiple requests.
@pskiran1 can you set the timeout lower, to 1 ms, and test one last time before I dig into this?
Yes, @AshwinAmbal, I verified with 1 ms. However, at 1 ms the request times out before reaching the server due to network latency. On the same host, it reaches the server, but the segfault is not reproducible.
@pskiran1 Can you send requests in a loop with increasing timeout values, from 1 ms to 4 or 5 ms in 0.1 ms increments? Just to see if the segfault occurs for a special case. I would also advise looking for any known gRPC issues that might describe this behavior.
**Description**
We use gRPC to query Triton for Model Ready, Model Metadata, and Model Inference requests. When running the Triton server for a sustained period of time, we get unexpected segfaults [Signal (11) received]. The trace of the segfault is attached to this issue, but when it occurs cannot be predicted; it happens across our servers at irregular intervals.
**Triton Information**
What version of Triton are you using? Version 24.05. I also built a CPU-only version with debug symbols and reproduced the same issue.
Are you using the Triton container or did you build it yourself? I can reproduce the issue in the Triton container from NGC and in my custom build as well.
**To Reproduce**
Steps to reproduce the behavior:
- Node: c6i.16xlarge instance from AWS
- Error occurred: GDB trace captured by building the debug container as described here
Describe the models (framework, inputs, outputs); ideally include the model configuration file (if using an ensemble, include the model configuration file for that as well).
Model configuration can be seen here. The model is trained with TensorFlow 2.13 and is a saved_model.pb artifact.
**Expected behavior**
No segfaults or server crashes.
As we can see, the issue starts mainly with the gRPC InferHandlerState and goes deeper into the Triton code, which I am trying to study myself. I thought I would raise this issue here as it seems major, and I would like to get more eyes on it from the Triton community. Please let me know if you need any more information from my end as well.
Thanks