triton-inference-server / local_cache

Implementation of a local in-memory cache for Triton Inference Server's TRITONCACHE API
BSD 3-Clause "New" or "Revised" License

Parameterize git repository #12

Closed nv-kmcgill53 closed 8 months ago

nv-kmcgill53 commented 8 months ago

This PR parameterizes the GitHub repository used when building.
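For context, the change amounts to replacing hard-coded https://github.com/triton-inference-server URLs in the CMake build with a user-settable cache variable, so that forks and mirrors can be built against without editing the build files. Below is a minimal sketch of the pattern; the variable names (`TRITON_REPO_ORGANIZATION`, `TRITON_CORE_REPO_TAG`) and the `repo-core` dependency are illustrative assumptions, not necessarily the exact names used in this PR:

```cmake
cmake_minimum_required(VERSION 3.17)
project(tritonlocalcache LANGUAGES CXX)

# Hypothetical sketch: expose the GitHub organization as a cache variable
# instead of hard-coding it, so a fork can point the build at its own repos.
set(TRITON_REPO_ORGANIZATION "https://github.com/triton-inference-server"
    CACHE STRING "Git repository organization to pull dependencies from")
set(TRITON_CORE_REPO_TAG "main"
    CACHE STRING "Tag/branch of the core repo to fetch")

include(FetchContent)

# Dependencies are fetched from the parameterized organization rather than
# a fixed https://github.com/triton-inference-server URL.
FetchContent_Declare(
  repo-core
  GIT_REPOSITORY ${TRITON_REPO_ORGANIZATION}/core.git
  GIT_TAG ${TRITON_CORE_REPO_TAG}
)
FetchContent_MakeAvailable(repo-core)
```

A fork could then build against its own mirrors with something like `cmake -D TRITON_REPO_ORGANIZATION=https://github.com/my-org ..`, leaving the default behavior unchanged.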

Related internal PRs:

- Server: https://github.com/triton-inference-server/server/pull/6934
- Core: https://github.com/triton-inference-server/core/pull/332
- Backend: https://github.com/triton-inference-server/backend/pull/96
- Checksum repository agent: https://github.com/triton-inference-server/checksum_repository_agent/pull/10
- DALI backend: https://github.com/triton-inference-server/dali_backend/pull/228
- Identity backend: https://github.com/triton-inference-server/identity_backend/pull/29
- ONNX Runtime backend: https://github.com/triton-inference-server/onnxruntime_backend/pull/244
- OpenVINO backend: https://github.com/triton-inference-server/openvino_backend/pull/68
- PyTorch backend: https://github.com/triton-inference-server/pytorch_backend/pull/124
- Redis cache: https://github.com/triton-inference-server/redis_cache/pull/14
- Repeat backend: https://github.com/triton-inference-server/repeat_backend/pull/11
- Square backend: https://github.com/triton-inference-server/square_backend/pull/18
- TensorFlow backend: https://github.com/triton-inference-server/tensorflow_backend/pull/101
- TensorRT backend: https://github.com/triton-inference-server/tensorrt_backend/pull/81
- Python backend: https://github.com/triton-inference-server/python_backend/pull/341
- Client: https://github.com/triton-inference-server/client/pull/485

Related third-party PR: https://github.com/triton-inference-server/server/pull/6668