-
I'm using the Triton adapter with PVC mounts on OpenShift 4.12.
The Triton adapter does not appear to expose a parameter for setting the pvcMountBase, and the default mount point does not exist in the adapter…
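For context, modelmesh-serving's PVC support is normally switched on through the user ConfigMap rather than an adapter flag; a minimal sketch, assuming the documented `model-serving-config` ConfigMap and its `allowAnyPVC` option from more recent releases:

```yaml
# User-facing configuration for modelmesh-serving (the name is fixed by convention).
apiVersion: v1
kind: ConfigMap
metadata:
  name: model-serving-config
data:
  config.yaml: |
    # Let predictors reference arbitrary PVCs via pvc:// storage URIs.
    allowAnyPVC: true
```

With this in place, a predictor can point at a `pvc://<claim-name>/<path>` storage URI instead of relying on the adapter's mount-base default.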
-
/kind bug
**What steps did you take and what happened:**
Create an InferenceService:
```yaml
apiVersion: serving.kserve.io/v1beta1
kind: In…
```
-
I am trying to port over a custom runtime.
I created the ServingRuntime and Predictor following the example at https://github.com/kserve/modelmesh-serving/blob/main/docs/runtimes/custom_runtimes…
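For readers following along, the custom-runtime docs define the runtime with a `ServingRuntime` resource; a minimal sketch with illustrative values (the name, format, ports, and image below are placeholders, not taken from the linked example):

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: example-custom-runtime   # hypothetical name
spec:
  supportedModelFormats:
    - name: example-format       # hypothetical model format
      version: "1"
      autoSelect: true
  multiModel: true               # required for ModelMesh-managed runtimes
  grpcDataEndpoint: port:8001    # inference gRPC port the server listens on
  grpcEndpoint: port:8085        # ModelMesh management (load/unload) port
  containers:
    - name: example-server
      image: example.io/custom-server:latest   # hypothetical image
```

A matching predictor then selects this runtime through its `modelFormat`.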
-
ModelMesh currently uses the MLServer runtime to serve sklearn, xgboost, and lightgbm models. However, it [seems like](https://developer.nvidia.com/blog/real-time-serving-for-xgboost-scikit-learn-rand…
-
New schema:
~~~
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-iris"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
…
~~~
Jooho updated 9 months ago
-
ModelMesh Serving is currently namespace-scoped, meaning all of its components must exist within a single namespace and only one instance of ModelMesh Serving can be installed per namespace. The…
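If a cluster-scoped mode is added, namespaces would typically opt in explicitly; a sketch assuming the `modelmesh-enabled` namespace label used by modelmesh-serving's cluster-scoped installs:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-models              # hypothetical namespace
  labels:
    modelmesh-enabled: "true"  # opts this namespace in to the shared controller
```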
-
Facing an issue while loading an ONNX-based model in the Triton runtime.
ModelMesh version: 0.9
Triton runtime: 2.x
KServe: 0.9
`ERROR Triton Adapter.Triton Adapter Server.Load Model Triton failed…
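Since the adapter error is truncated, one common culprit worth ruling out is the model repository layout and `config.pbtxt` that Triton expects for ONNX models; a minimal sketch (the model name is illustrative):

```
# Expected repository layout (illustrative):
#   example-onnx-model/
#     config.pbtxt
#     1/
#       model.onnx
#
# config.pbtxt:
name: "example-onnx-model"
platform: "onnxruntime_onnx"
max_batch_size: 0
```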
-
Configure the KServe component to leverage the Service Mesh unified authentication and authorization solution. This integration will streamline security protocols, making them easier to manage, more secure,…
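As an illustration of the kind of policy this integration would manage, an Istio `AuthorizationPolicy` restricting who may call a predictor might look like this (all names here are hypothetical):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-inference-clients   # hypothetical policy name
spec:
  selector:
    matchLabels:
      app: example-predictor      # hypothetical workload label
  rules:
    - from:
        - source:
            # Only this service account may reach the predictor.
            principals: ["cluster.local/ns/apps/sa/example-client"]
```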
-
Links to tracker issues for components planning updates in this
- [x] ODH Operator
- [x] ODH Dashboard
- [x] Workbench
- [x] Data Science Pipelines
- [x] Model Serving
- [x] AI Explain…
-
### Goal
Add Jest to the backend and test the KServe migration
### Dependency issue
* https://github.com/opendatahub-io/odh-dashboard/issues/1975
### Itemized goals
* Add Jest to the…
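To make the first itemized goal concrete, below is what one Jest spec for the migration code might look like, using a hypothetical helper (neither the function nor its annotation handling is taken from the ODH codebase):

```javascript
// Hypothetical unit under test: migration code might need to distinguish
// KServe from ModelMesh deployments by annotation.
function deploymentModeOf(annotations) {
  // KServe-style resources default to Serverless when no mode is set.
  return annotations['serving.kserve.io/deploymentMode'] || 'Serverless';
}

// deploymentMode.test.js — uses Jest globals, so this section only runs
// under `jest`; executing the file with plain node skips it.
if (typeof describe === 'function') {
  describe('deploymentModeOf', () => {
    it('defaults to Serverless when the annotation is absent', () => {
      expect(deploymentModeOf({})).toBe('Serverless');
    });
    it('reports ModelMesh when the annotation says so', () => {
      expect(deploymentModeOf({ 'serving.kserve.io/deploymentMode': 'ModelMesh' }))
        .toBe('ModelMesh');
    });
  });
}
```

Keeping helpers like this pure makes them trivial to cover without mocking the Kubernetes client.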