-
1. We deployed a PyTorch model using the MLServer serving runtime. Our goal is to get faster predictions under higher load.
So we created 5 replicas for each runtime.
But when we send multiple parallel…
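For reference, a ModelMesh `ServingRuntime` scaled out to 5 replicas would look roughly like this (a sketch only; the runtime name and image tag are assumptions, not our actual manifest):

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: mlserver-pytorch          # hypothetical name
spec:
  supportedModelFormats:
    - name: pytorch
      version: "1"
  replicas: 5                     # one serving pod per replica
  containers:
    - name: mlserver
      image: docker.io/seldonio/mlserver:1.3.5   # assumed image tag
```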
-
**Issue-1:**
We are trying to autoscale the custom deployed runtime. We have tried specifying the annotation and predictor parameters in the InferenceService manifest, but the scaling is not happening…
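The configuration we attempted followed KServe's HPA-based autoscaling, which is set on the predictor roughly like this (a sketch; the service name, runtime name, and storage URI are placeholders, not our real values):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model                       # hypothetical
  annotations:
    serving.kserve.io/autoscalerClass: hpa
spec:
  predictor:
    minReplicas: 1
    maxReplicas: 5
    scaleMetric: cpu                   # HPA scales on CPU utilization
    scaleTarget: 70                    # target value for the scale metric
    model:
      modelFormat:
        name: pytorch
      runtime: kserve-mlserver         # assumed runtime name
      storageUri: gs://bucket/model    # placeholder
```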
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Deploy type
OpenDataHub core version (e.g. `v1.6.0`)
### Version
2.4
### Current Behavior
We currently have…
-
**Describe the bug**
KServe manifests and ModelMesh manifests cannot be deployed together: there is a conflict when deploying the InferenceService CRD.
**Steps To Reproduce**
Steps to reproduce the…
-
### Goal
Decide on the best approach for OVMS in KServe
### Dependency issue
No dependencies
### Itemized goals
1. Right now OVMS supports both KServe and ModelMesh
2. But it contains …
-
Running the kserve-integration UAT notebook from `main` branch on [a self-hosted runner fails](https://github.com/canonical/bundle-kubeflow/actions/runs/6864083100/job/18665095131#step:16:410) with th…
-
/kind feature
**Describe the solution you'd like**
We should upgrade XGBoost to bring in all the improvements from the recent release: https://github.com/dmlc/xgboost/releases/tag/v2.0.2
-
The issue below occurs during reconciliation of odh-model-controller.
```
2023-09-20T00:50:14Z ERROR Reconciler error {"controller": "inferenceservice", "controllerGroup": "serving.kserve.io", "controlle…
-
**Describe the bug**
During an upgrade from ODH 2.4 to the latest main, the DSC fails and all conditions in the DSC are in a failed state.
**To Reproduce**
Steps to reproduce the behavior:
1. Install ODH 2.4
2. …
-
/kind feature
**Describe the solution you'd like**
Similar to the serving runtimes https://github.com/kserve/kserve/blob/414234294046…