Closed Shivashankarar closed 6 months ago
Are you saying that you want a pipeline that uses Triton to do two inference steps: CRAFT detection, then recognition? Both inside Triton?
@k9ele7en Yes, please, if possible!
You can refer to this. I have not tried it before, but I think the Ensemble feature will suit your need. The key is the config that specifies how the outputs of the first model feed into the inputs of the next model. Hope you find something useful: https://developer.nvidia.com/blog/serving-ml-model-pipelines-on-nvidia-triton-inference-server-with-ensemble-models/
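As a rough illustration of that "outputs feed into inputs" wiring, here is a sketch of an ensemble `config.pbtxt`. All model names, tensor names, and shapes below are assumptions for illustration; in practice CRAFT emits heatmaps, so a post-processing step (e.g. a Python backend model that crops text regions) usually sits between detection and recognition.

```
# config.pbtxt for a hypothetical ensemble model "ocr_ensemble"
name: "ocr_ensemble"
platform: "ensemble"
max_batch_size: 1
input [
  { name: "IMAGE", data_type: TYPE_FP32, dims: [ 3, -1, -1 ] }
]
output [
  { name: "TEXT", data_type: TYPE_STRING, dims: [ -1 ] }
]
ensemble_scheduling {
  step [
    {
      # Step 1: CRAFT detection; tensor names assumed
      model_name: "craft_detection"
      model_version: -1
      input_map  { key: "INPUT__0",  value: "IMAGE" }
      output_map { key: "OUTPUT__0", value: "DETECTED_REGIONS" }
    },
    {
      # Step 2: recognition consumes the detection output via the shared
      # intermediate tensor name "DETECTED_REGIONS"
      model_name: "recognition"
      model_version: -1
      input_map  { key: "INPUT__0",  value: "DETECTED_REGIONS" }
      output_map { key: "OUTPUT__0", value: "TEXT" }
    }
  ]
}
```

The `output_map` of one step and the `input_map` of the next share the same intermediate tensor name; that is the entire wiring mechanism.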
This example solves the same problem you are facing: https://github.com/triton-inference-server/tutorials/tree/main/Conceptual_Guide/Part_5-Model_Ensembles
@k9ele7en I have tried this, but it is not working as wanted: the recognition model (parseq) is not getting converted to a TRT engine. Can you give it a try, please?
Sorry, but I cannot try it right now. As far as I remember, I did try to convert the recognition model to TensorRT, but it failed. How about using CRAFT in TensorRT and a vanilla Torch checkpoint for the recognition model, both inside Triton's ensemble mode?
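That mixed setup is possible because each step of an ensemble can use a different backend. A possible model-repository layout, assuming hypothetical model names, would look like this:

```
model_repository/
├── craft_detection/
│   ├── 1/
│   │   └── model.plan      # CRAFT converted to a TensorRT engine
│   └── config.pbtxt        # platform: "tensorrt_plan"
├── recognition/
│   ├── 1/
│   │   └── model.py        # Python backend wrapping the Torch checkpoint
│   └── config.pbtxt        # backend: "python"
└── ocr_ensemble/
    ├── 1/                  # empty; an ensemble has no model file
    └── config.pbtxt        # platform: "ensemble"
```

The ensemble config only references the two models by name, so the recognition step can stay on the Python/PyTorch backend until a working TensorRT conversion is found.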
@k9ele7en Yeah, we can try! I had a question: will the ensemble method increase inference time?
@k9ele7en Can you please tell me how I can connect this CRAFT model to the recognition model using Triton, just like NVIDIA's OCDR model pipeline? I need to do this fast, so please help me with that!