Closed. tarilabs closed this 3 weeks ago.
lgtm, but need to test it manually in a deployment
@lampajr please add explicit port numbers for the service.
Hi @juliusvonkohout, given that we should make all ports exposed by the model registry pod accessible (@tarilabs can you confirm that?), I think it is acceptable as is (without any port specification, similar to what other components are already doing).
I would rather have this pov: this networkpolicy proposal aligns with other examples as suggested by action item out of previous meeting (notes).
In short we need 2 "endpoints": REST API (8080) and gRPC (9090).
Could someone kindly comment regarding how to A/B test it, per the original message, please? That is not clear to me, and sorry if this is a banal question.
It is rather trivial and best practice to list those two ports in the NetworkPolicy. A NetworkPolicy is just a firewall rule. If you install a full Kubeflow and can still reach both ports, it is fine. For standalone tests I would add a deny-all NetworkPolicy and then this policy here on top to test.
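For reference, a minimal deny-all ingress policy for such a standalone test might look like the sketch below (the name is illustrative; the namespace matches the one used in this PR). With this in place, only traffic explicitly allowed by other NetworkPolicies, such as the model-registry one, gets through:

```
# Illustrative deny-all ingress policy for the kubeflow namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # hypothetical name
  namespace: kubeflow
spec:
  podSelector: {}              # empty selector = all pods in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed => all ingress denied
```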
I've added https://github.com/kubeflow/manifests/pull/2724/commits/31345c974f814d0af9c62b51cb46d73c750b50a8 but I need to test it on a vanilla KF cluster, hence keeping this as draft for now. Would this align better to expectations, please?
In addition to my 2 comments, I would like to add that GPT-4 can write such policies for you. Just input your Service and Deployments and it will figure it out.
Tested with:
diff --git a/common/networkpolicies/base/kustomization.yaml b/common/networkpolicies/base/kustomization.yaml
index 3592bc9a..33bf626c 100644
--- a/common/networkpolicies/base/kustomization.yaml
+++ b/common/networkpolicies/base/kustomization.yaml
@@ -16,6 +16,7 @@ resources:
- minio.yaml
- ml-pipeline-ui.yaml
- ml-pipeline.yaml
+ - model-registry.yaml
- poddefaults.yaml
- pvcviewer-webhook.yaml
- seldon.yaml
diff --git a/common/networkpolicies/base/model-registry.yaml b/common/networkpolicies/base/model-registry.yaml
new file mode 100644
index 00000000..801a2145
--- /dev/null
+++ b/common/networkpolicies/base/model-registry.yaml
@@ -0,0 +1,33 @@
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: model-registry
+ namespace: kubeflow
+spec:
+ podSelector:
+ matchExpressions:
+ - key: component
+ operator: In
+ values:
+ - model-registry-server
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchExpressions:
+ - key: app.kubernetes.io/part-of
+ operator: In
+ values:
+ - kubeflow-profile
+ - namespaceSelector:
+ matchExpressions:
+ - key: kubernetes.io/metadata.name
+ operator: In
+ values:
+ - istio-system
+ ports:
+ - protocol: TCP
+ port: 8080
+ - protocol: TCP
+ port: 9090
+ policyTypes:
+ - Ingress
per the latest reviews.
Without network policy:
WITH network policy:
Since I've tested locally on Minikube, I needed to ensure "calico" (a CNI that actually enforces NetworkPolicies). For example:
minikube start --cni calico --memory 8192 --cpus 6 # ...
> [!CAUTION]
> The `--cni calico` flag must be supplied at the FIRST start; it cannot be applied to an already-existing Minikube cluster.
This outputs (on my Linux box):
$ minikube start --listen-address=0.0.0.0 --cni calico --memory 8192 --cpus 6
😄 minikube v1.33.1 on Fedora 40
✨ Automatically selected the docker driver. Other choices: qemu2, ssh
📌 Using Docker driver with root privileges
...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
...
(notice the calico line explicitly output)
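One way to run the A/B check from another namespace is sketched below; the pod name and the service DNS name are assumptions based on this thread, not verified against the manifests. Run it once without the NetworkPolicy and once with it applied, and compare whether the request succeeds:

```
# From a throwaway pod in the "default" namespace, try to reach the
# model registry REST port (8080). With the deny-all + model-registry
# policies in place, this request from "default" should be blocked.
kubectl run np-test -n default --rm -it --image=busybox --restart=Never -- \
  wget -qO- --timeout=5 http://model-registry-service.kubeflow:8080
```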
This might actually be analogous to what an end-user would experience with:
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: juliusvonkohout
The full list of commands accepted by this bot can be found here.
The pull request process is described here
Thank you @juliusvonkohout 🙏 🚀
@tarilabs I tested it on Microk8s 1.28. It works.
Raising as draft following the indication on #2701, but not yet sure how to manually verify it: I've tested with
1.9.0-rc.0
on a local cluster, both with and without this network policy. I can reach the service both from outside the cluster (authenticated call) and from another pod in a different namespace (namespace "default", using wget from a hello-world pod).

Which issue is resolved by this Pull Request: Resolves #2701
Description of your changes:
Checklist:
make generate-changed-only
make test