triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html
BSD 3-Clause "New" or "Revised" License

SHARK Backend integration #4485


powderluv commented 2 years ago

SHARK is a high-performance codegen compiler and runtime built on MLIR, IREE, and custom RL-based tuning infrastructure. Here are some results comparing the same model across PyTorch, ONNX, TF/XLA, and SHARK.

We have a Triton Inference Server integration of SHARK that runs on CPU and CUDA devices here: https://github.com/nod-ai/SHARK/tree/main/inference, and we would like to upstream it as an available Triton backend that anyone can build, test, and deploy.

tanmayv25 commented 2 years ago

@msalehiNV @dzier for visibility.