Closed yhwang closed 3 years ago
So what is the correct format? I tried to follow https://www.kubeflow.org/docs/pipelines/multi-user/#in-cluster-api-request-authentication, which says: "If you need to access the API endpoint from in-cluster workload like Jupyter notebooks or cron tasks, current suggested workaround is to connect through public endpoint."
The code I wrote looks like this: pipeline = kfp.Client(host='http://istio-ingressgateway.newbase.com/_/pipeline/?ns=chejinguo').create_run_from_pipeline_func(mnist_pipeline, arguments={})
The host address is accessible from a web browser,
but it reports an error in the Jupyter notebook. HTTP response body: {"error":"Validate experiment request failed.: Invalid input error: Invalid resource references for experiment. Expect one namespace type with owner relationship. Got: []"}
@majorinche which Kubeflow deployment are you using? There should be a page on www.kubeflow.org introducing how to authenticate to KFP endpoint specific to your deployment.
@Bobgy sure!
- The RBAC to allow the notebook server in the user's namespace ("mynamespace") to access the ml-pipeline service:
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-ml-pipeline-nb-mynamespace
  namespace: kubeflow
spec:
  roleRef:
    kind: ServiceRole
    name: ml-pipeline-services
  subjects:
  - properties:
      source.principal: cluster.local/ns/mynamespace/sa/default-editor
- Envoy filter to inject the kubeflow-userid header from the notebook to the ml-pipeline service. In the example below, the notebook server's name is mynotebook and the userid for namespace mynamespace is user@example.com:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: mynamespace
spec:
  workloadSelector:
    labels:
      notebook-name: mynotebook
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        portNumber: 8888
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value: # lua filter specification
        name: envoy.lua
        config:
          inlineCode: |
            function envoy_on_request(request_handle)
              request_handle:headers():add("kubeflow-userid", "user@example.com")
            end
The envoy filter above only injects the kubeflow-userid HTTP header for traffic going to the ml-pipeline service.
I tried to apply the same fix but it doesn't work for me somehow:
$ cat servicerolebinding.yaml envoyfilter.yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
name: bind-ml-pipeline-nb-anonymous
namespace: kubeflow
spec:
roleRef:
kind: ServiceRole
name: ml-pipeline-services
subjects:
- properties:
source.principal: cluster.local/ns/anonymous/default-editor
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-header
namespace: anonymous
spec:
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: SIDECAR_OUTBOUND
routeConfiguration:
vhost:
name: ml-pipeline.kubeflow.svc.cluster.local:8888
route:
name: default
patch:
operation: MERGE
value:
request_headers_to_add:
- append: true
header:
key: kubeflow-userid
value: anonymous@kubeflow.org
workloadSelector:
labels:
notebook-name: kale
$ kubectl get ns anonymous -oyaml
apiVersion: v1
kind: Namespace
metadata:
annotations:
owner: anonymous@kubeflow.org
creationTimestamp: "2020-11-10T11:53:55Z"
...
$ kubectl get ns anonymous --show-labels
NAME STATUS AGE LABELS
anonymous Active 4h44m istio-injection=enabled,katib-metricscollector-injection=enabled,serving.kubeflow.org/inferenceservice=enabled
$ kubectl -n anonymous get po -l notebook-name=kale
NAME READY STATUS RESTARTS AGE
kale-0 2/2 Running 0 3h38m
Output from JupyterLab
Message: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Tue, 10 Nov 2020 16:42:07 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '0'})
HTTP response body: RBAC: access denied
I used the deployment method for installation on bare metal: https://www.kubeflow.org/docs/started/k8s/kfctl-k8s-istio/
Any ideas why it doesn't work for anonymous@kubeflow.org? I'm completely stuck on it :(
@swiftdiaries Do I recall correctly that you maintain this manifest? Can you answer this question?
I would ping the appropriate WG that owns the config. I currently don't have the bandwidth to work on this
@mr-yaky in your ServiceRoleBinding
you should change source.principal: cluster.local/ns/anonymous/default-editor
to source.principal: cluster.local/ns/anonymous/sa/default-editor
Can you try it?
@yhwang thank you. I have changed it as you recommended, but now I get a new error:
Message: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Fri, 13 Nov 2020 08:51:29 GMT', 'x-envoy-upstream-service-time': '15', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","message":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","code":3,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. Namespace is empty.","error_details":"Invalid input error: Invalid resource references for experiment. Namespace is empty."}]}
Well, I think now it's working correctly:
jovyan@kale-0:~$ kfp pipeline list
+--------------------------------------+-------------------------------------------------+---------------------------+
| Pipeline ID | Name | Uploaded at |
+======================================+=================================================+===========================+
| 271f4189-1bd3-425a-8b59-213f4a6502b2 | [Tutorial] DSL - Control structures | 2020-10-26T11:58:27+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| e8989196-9105-41b2-b302-fe7b2a1f92cc | [Tutorial] Data passing in python components | 2020-10-26T11:58:26+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 3fb1b41c-dcca-4c12-88cf-cdb602c5c665 | [Demo] TFX - Iris classification pipeline | 2020-10-26T11:58:25+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| ff55dd05-deb3-40c9-87c3-a8a06871b801 | [Demo] TFX - Taxi tip prediction model trainer | 2020-10-26T11:58:24+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 99666886-380b-488b-bd84-d3be3d12b2d8 | [Demo] XGBoost - Training with confusion matrix | 2020-10-26T11:58:23+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
@yhwang thank you. I think the error above is related to Kale already. So, I'll try to fix it on Kale side.
For me trying these workarounds (on KF 1.2) results in Error from server: error when creating ".\\envoy_filter.yaml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: envoy filter: missing filters
@karlschriek Could you post the envoy filter you applied, along with the relevant information about the cluster such as the namespace and notebook name?
@yanniszark Is there any more information regarding the timeline of the upstream push of mTLS and SubjectAccessReview?
This is what I used:
EDIT:
Fixed after @DavidSpek's comment below
export NAMESPACE=mynamespace
export NOTEBOOK=mynotebook
export USER=me@myemail.com
cat > ./envoy_filter.yaml << EOM
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
name: bind-ml-pipeline-nb-${NAMESPACE}
namespace: kubeflow
spec:
roleRef:
kind: ServiceRole
name: ml-pipeline-services
subjects:
- properties:
source.principal: cluster.local/ns/${NAMESPACE}/sa/default-editor
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-header
namespace: ${NAMESPACE}
spec:
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: SIDECAR_OUTBOUND
routeConfiguration:
vhost:
name: ml-pipeline.kubeflow.svc.cluster.local:8888
route:
name: default
patch:
operation: MERGE
value:
request_headers_to_add:
- append: true
header:
key: kubeflow-userid
value: ${USER}
workloadSelector:
labels:
notebook-name: ${NOTEBOOK}
EOM
@karlschriek You have the kubeflow-userid set to your namespace rather than your userid (such as user@example.com).
Sorry, my bad. I no longer had the script I used this morning, so I quickly put something together to answer you, tried it, saw that it gave the same result, and posted it. I have now fixed it and can confirm that it still gives the same "missing filters" error.
I got the same problem as @karlschriek. On further investigation, I discovered that KF v1.1 and above uses a very outdated Istio version (1.1.6), so the EnvoyFilter @yhwang provided is not compatible with that version.
I tried to port the filter to be compatible with version 1.1.6, but it still doesn't work.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: __namespace__
spec:
  filters:
  - listenerMatch:
      listenerType: SIDECAR_OUTBOUND
      listenerProtocol: HTTP
      address:
      - ml-pipeline.kubeflow.svc.cluster.local
      portNumber: 8888
    filterName: envoy.lua
    filterType: HTTP
    filterConfig:
      inlineCode: |
        function envoy_on_request(request_handle)
          request_handle:headers():add("kubeflow-userid", "anonymous@kubeflow.org")
        end
  workloadLabels:
    notebook-name: __notebook__
Error:
Reason: Conflict
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Wed, 02 Dec 2020 00:26:05 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","message":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","code":10,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Request header error: there is no user identity header.","error_details":"Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header."}]}
EDIT: Very important to mention I'm on AWS.
actually, the envoyfilter I posted here: https://github.com/kubeflow/pipelines/issues/4440#issuecomment-687703390 is based on istio 1.3.1. I upgraded my env to kfp v1.2 today and the envoyfilter still works properly for me.
I've been using the envoyfilter with Kubeflow 1.1 and 1.2 with istio 1.3.1 and have also not had any issues.
@yhwang @DavidSpek thank you for your considerations.
It turns out that the kfctl configuration file I used to install Kubeflow doesn't contain istio-stack-1-3-1 and cluster-local-gateway-1-3-1; therefore Kubeflow was installed with Istio 1.1.6.
The configuration file I used was the one recommended for authentication via OIDC: https://raw.githubusercontent.com/kubeflow/manifests/v1.1-branch/kfdef/kfctl_aws_cognito.v1.1.0.yaml (https://www.kubeflow.org/docs/aws/deploy/install-kubeflow/)
Do you know if authentication via OIDC/Cognito requires Istio 1.1.6? Would updating Istio mess up the existing Kubeflow installation?
@yhwang Is there a plan to submit a PR that creates the ServiceRoleBinding and EnvoyFilter every time a new notebook server is created in a user namespace? If there isn't, how do you propose I solve this? Thanks
@HassanOuda As discussed above the ServiceRoleBinding and EnvoyFilter are workarounds and should not be seen as a secure solution. https://github.com/kubeflow/pipelines/issues/4440#issuecomment-697317390
The proper implementation will hopefully be pushed upstream by @yanniszark soon.
@pedrocwb I have a similar question about 1.1.6 vs 1.3.1. For me the more relevant case is being able to authenticate from outside the cluster. Currently this requires passing the Cognito cookies. I have managed to get this to work with 1.1.6, but it actually looks like this currently doesn't work with 1.3.1.
Even though I pass the correct cookies, I still get the Request header error: there is no user identity header
error. I am going to spend some more time on this today and will give you feedback if I know a bit more.
For our clients the two most important KF components are KFP and KFServing. At the moment we can use KFP with 1.1.6, but not 1.3.1. And only a very old version of KFServing seems to be compatible with 1.1.6.
For reference, SubjectAccessReview has been merged in https://github.com/kubeflow/pipelines/pull/4723. This is available in https://github.com/kubeflow/pipelines/releases/tag/1.2.0 (first available in https://github.com/kubeflow/pipelines/releases/tag/1.1.1-beta.1). However, from looking at https://github.com/kubeflow/pipelines/issues/3513 regarding SubjectAccessReview, it is not clear to me if Istio mTLS support has been added for in-cluster authentication.
Useful documentation: https://docs.google.com/document/d/1R9bj1uI0As6umCTZ2mv_6_tjgFshIKxkSt00QLYjNV4/edit?ts=5e4d8fbb#heading=h.b3vxor3gcdvs
it is not clear to me if Istio mTLS support has been added for in-cluster authentication.
No, it's not added
thanks @yhwang , the suggestion works
@yhwang thank you. I have changed how you recommended but now I get the new error:
Message: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Fri, 13 Nov 2020 08:51:29 GMT', 'x-envoy-upstream-service-time': '15', 'server': 'envoy', 'transfer-encoding': 'chunked'}) HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","message":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","code":3,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. Namespace is empty.","error_details":"Invalid input error: Invalid resource references for experiment. Namespace is empty."}]}
Hi @mr-yaky,
Would you please share some more details about how to solve the below error?
Message: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Fri, 13 Nov 2020 08:51:29 GMT', 'x-envoy-upstream-service-time': '15', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","message":"Invalid input error: Invalid resource references for experiment. Namespace is empty.","code":3,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid resource references for experiment. Namespace is empty.","error_details":"Invalid input error: Invalid resource references for experiment. Namespace is empty."}]}
Solved based on https://github.com/kubeflow-kale/kale/issues/210#issuecomment-727018461
@kosehy my guess is you need to specify the namespace since it complains about empty namespace
@kosehy my guess is you need to specify the namespace since it complains about empty namespace
@yhwang You are right. I fixed the above error based on this comment: https://github.com/kubeflow-kale/kale/issues/210#issuecomment-727018461. Thank you for your reply!
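For anyone else hitting the "Namespace is empty" / "Expect one namespace type with owner relationship" errors: in multi-user mode the experiment-create call must carry a namespace resource reference. A minimal sketch of the request body, with field names taken from the KFP v1beta1 REST API and illustrative name/namespace values:

```python
import json

def experiment_payload(name: str, namespace: str) -> str:
    """Build the JSON body for POST /apis/v1beta1/experiments in multi-user mode."""
    body = {
        "name": name,
        # Multi-user KFP expects exactly one resource reference of type
        # NAMESPACE with an OWNER relationship; leaving it out produces the
        # "Namespace is empty" error above.
        "resource_references": [
            {
                "key": {"type": "NAMESPACE", "id": namespace},
                "relationship": "OWNER",
            }
        ],
    }
    return json.dumps(body)

# Illustrative values; use your own experiment name and profile namespace.
print(experiment_payload("my-experiment", "mynamespace"))
```

With the Python SDK, passing namespace='mynamespace' to kfp.Client(...) (or to the individual calls that accept it) builds this reference for you.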
@yhwang, I would appreciate some help from you. I also cannot access pipelines from the notebook. When I run kfp -n kubeflow pipeline list in the terminal, I get the following error:
Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Length': '19', 'Content-Type': 'text/plain', 'Date': 'Thu, 04 Mar 2021 00:50:15 GMT', 'Server': 'istio-envoy', 'X-Envoy-Decorator-Operation': 'ml-pipeline.kubeflow.svc.cluster.local:8888/*'}) HTTP response body: RBAC: access denied
I have tried to add the Istio injection for the kubeflow namespace, but it does not work. The updated YAML file I used is the following:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-header
namespace: kubeflow
spec:
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: SIDECAR_OUTBOUND
routeConfiguration:
vhost:
name: ml-pipeline.kubeflow.svc.cluster.local:8888
route:
name: default
patch:
operation: MERGE
value:
request_headers_to_add:
- append: true
header:
key: kubeflow-userid
value: anonymouss@kubeflow.org
workloadSelector:
labels:
notebook-name: mynotebook
How can I make the namespace able to access the pipeline information? Is it necessary for me to start a notebook server and do the above steps? Can I use the command line to do the same access?
Thanks!
@perseusyang1997 Based on your description, I think you were trying to use the kfp CLI to access Kubeflow Pipelines via the kube-apiserver. Since the value of the kubeflow-userid header you are using is anonymous@kubeflow.org (you have an extra s in your envoyfilter yaml), I wonder: are you using a single-user or multi-user setup? The purpose of the envoyfilter you posted is to add the kubeflow-userid header to outgoing traffic from the notebook server in a user's namespace to the KFP API service. Therefore, it should be applied to the user's namespace where the notebook server is, not the kubeflow namespace. And it doesn't help the kfp CLI use case. Short answers to your questions:
Is it necessary for me to start a notebook server and do the above steps?
Yes. And please add the envoyfilter to the same namespace as the notebook server.
Can I use the command line to do the same access?
Yes and no. Using the kfp CLI to access Kubeflow Pipelines is different. If you don't specify the --endpoint argument pointing to your KFP API URL, it goes through the kube-apiserver and uses it as a proxy to access the KFP API server. In that case, you will hit the RBAC access error. If you do specify the KFP API URL and your Kubeflow is deployed on GCP, you should be able to use the kfp CLI to access Kubeflow Pipelines. You can check the document here: https://www.kubeflow.org/docs/gke/pipelines/authentication-sdk/
@perseusyang1997 Please be aware that this workaround is not actually secure.
@yanniszark will this be fixed in the 1.3 release?
Hello guys, I'm working on AWS. The RBAC and Envoy filter fixed my problems when I was using Kubeflow without Cognito, but now I'm changing the deployment to use Cognito as auth, and, as mentioned above by @pedrocwb and @karlschriek, when trying to apply the Envoy filter I get the following:
Error from server: error when creating "envoy.yaml": admission webhook "pilot.validation.istio.io" denied the request: configuration is invalid: envoy filter: missing filters
@MatheusPush Not sure if it is related, but the maximum Istio version for Cognito is 1.1 I believe.
@yhwang Hi, I have a single-user Kubeflow setup; in this case, what should be the value for kubeflow-userid?
@omlomloml
I have a single user kubeflow setup, in this case, what should be the value for kubeflow-userid
It should be anonymous@kubeflow.org
@omlomloml
I have a single user kubeflow setup, in this case, what should be the value for kubeflow-userid
It should be
anonymous@kubeflow.org
@yhwang Thank you so much! @lukemarsden also provided a slightly different workaround at https://github.com/kubeflow/pipelines/issues/4440#issuecomment-700759162. Can you explain the difference between these two workarounds?
Thanks. I am a newbie here.
@omlomloml Please read this comment, as it also explains why these workarounds are not secure. The proper solution should be included in release 1.3, which is coming soon. https://github.com/kubeflow/pipelines/issues/4440#issuecomment-697317390
@yanniszark Is there anything more than needs to happen to solve this issue for 1.3? Or was everything regarding the SubjectAccessReview already merged?
@yhwang @DavidSpek Hi guys, after I applied the binding and the filter I still can't get it to work. Did I do anything wrong? I am not using a notebook here, so I am adding the filter to all the workloads.
Here is the binding:
root@metis1-1:~# kubectl -n kubeflow get ServiceRoleBinding bind-ml-pipeline-metis -o yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.istio.io/v1alpha1","kind":"ServiceRoleBinding","metadata":{"annotations":{},"name":"bind-ml-pipeline-metis","namespace":"kubeflow"},"spec":{"roleRef":{"kind":"ServiceRole","name":"ml-pipeline-services"},"subjects":[{"properties":{"source.principal":"cluster.local/ns/metis/sa/default"}}]}}
  creationTimestamp: "2021-03-10T15:01:52Z"
  generation: 1
  managedFields:
and here is the filter:
root@metis1-1:~# kubectl -n metis get envoyfilters.networking.istio.io add-header -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"EnvoyFilter","metadata":{"annotations":{},"name":"add-header","namespace":"metis"},"spec":{"configPatches":[{"applyTo":"VIRTUAL_HOST","match":{"context":"SIDECAR_OUTBOUND","routeConfiguration":{"vhost":{"name":"ml-pipeline.kubeflow.svc.cluster.local:8888","route":{"name":"default"}}}},"patch":{"operation":"MERGE","value":{"request_headers_to_add":[{"append":true,"header":{"key":"kubeflow-userid","value":"anonymous@kubeflow.org"}}]}}}]}}
  creationTimestamp: "2021-03-10T15:23:13Z"
  generation: 1
  managedFields:
But I can't see anything being added:
root@metis-backend-857fc5b98d-c4qc7:/metis/metis/aix# curl -I http://ml-pipeline.kubeflow.svc.cluster.local:8888
HTTP/1.1 403 Forbidden
content-length: 19
content-type: text/plain
date: Wed, 10 Mar 2021 15:33:45 GMT
server: istio-envoy
x-envoy-decorator-operation: ml-pipeline.kubeflow.svc.cluster.local:8888/*
Thanks!
For people tracking this issue, the correct solution will come from issue: https://github.com/kubeflow/pipelines/issues/5138
@yanniszark in 1.3 release?
Thanks a lot! Finally, I'm able to call the pipeline API from my notebook in my Minikube cluster (Win10):
client = kfp.Client('http://10.108.141.218', namespace='admin')
client.list_runs(namespace='admin')
Thanks a lot! Finally, I'm able to call the pipeline API from my notebook in my Minikube cluster (Win10):
client = kfp.Client('http://10.108.141.218', namespace='admin')
client.list_runs(namespace='admin')

Would you mind sharing what you did exactly?
import kfp
NAMESPACE = 'kubeflow-user-example-com'
EXPERIMENT ='default'
gateway = 'http://istio-ingressgateway.istio-system.svc.cluster.local'
#gateway = 'http://ml-pipeline.kubeflow.svc.cluster.local'
client = kfp.Client(host = gateway + '/pipeline', namespace = NAMESPACE)
client.list_experiments(namespace = NAMESPACE)
This did not work for me on a fresh install from the 1.3 branch with my own JupyterLab image.
My expectation from the description is that I do not need to authenticate to ml-pipeline if the traffic comes from a pod that runs as the default-editor service account.
It seems the workaround no longer works for Kubeflow 1.3. To be more precise, applying the config fails with: error: unable to recognize "manifests/patches/bind-ml-pipeline-nb-kubeflow-user-example-com.yaml": no matches for kind "ServiceRoleBinding" in version "rbac.istio.io/v1alpha1"
content of bind-ml-pipeline-nb-kubeflow-user-example-com.yaml
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
name: bind-ml-pipeline-nb-kubeflow-user-example-com
namespace: kubeflow
spec:
roleRef:
kind: ServiceRole
name: ml-pipeline-services
subjects:
- properties:
source.principal: cluster.local/ns/kubeflow-user-example-com/sa/default-editor
@frankmanbb I also encountered this problem
@DianaDai @frankmanbb This is because Istio RBAC has been removed since version 1.6, and kubeflow 1.3 uses Istio 1.9. You could achieve the same by using an AuthorizationPolicy instead.
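An AuthorizationPolicy equivalent of the ServiceRoleBinding above might look like the following. This is a sketch only: it assumes the ml-pipeline pods carry the app: ml-pipeline label (the case in the stock manifests, but verify in your cluster) and a notebook running as default-editor in the kubeflow-user-example-com namespace; note that source-principal matching relies on mTLS between the sidecars.

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: bind-ml-pipeline-nb-kubeflow-user-example-com
  namespace: kubeflow
spec:
  # Apply the policy to the KFP API server pods.
  selector:
    matchLabels:
      app: ml-pipeline
  rules:
  - from:
    - source:
        # mTLS identity of the notebook's service account.
        principals:
        - cluster.local/ns/kubeflow-user-example-com/sa/default-editor
```

Adjust the namespace, service account, and selector labels to your own deployment.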
@DavidSpek My config is as shown below, but I still get an error after applying it.
Here is my error:
Do I need to configure anything else?
@DavidSpek I added another EnvoyFilter configuration,
which changed it to another error:
@DianaDai Just wanted to state here that EnvoyFilters are considered break-glass configurations, meaning that they may break between Envoy versions and can destabilize the entire service mesh (or cause it to go down). So using this workaround can have bad consequences.
It's probably better to look at https://github.com/kubeflow/pipelines/issues/5138 which provides a proper method to authenticate notebooks to the KFP API.
Have you tried the EnvoyFilter that is mentioned above instead?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: add-header
namespace: mynamespace
spec:
workloadSelector:
labels:
notebook-name: mynotebook
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_OUTBOUND
listener:
portNumber: 8888
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value: # lua filter specification
name: envoy.lua
config:
inlineCode: |
function envoy_on_request(request_handle)
request_handle:headers():add("kubeflow-userid", "user@example.com")
end
I am using TFX on Kubeflow, and got "grpc_message":"RBAC: access denied","grpc_status":7 when running InfraValidator. InfraValidator creates a Pod in the same namespace and sends requests to that Pod to get responses. I had applied a RoleBinding and a ClusterRoleBinding; however, it didn't work. By applying an AuthorizationPolicy to the namespace, InfraValidator can pass successfully.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: allow-all
namespace: kubeflow-user-example-com
spec:
rules:
- {}
@wyljpn You should not apply this AuthorizationPolicy, as it allows all traffic from anywhere to access your namespace. Which would also mean others can access your notebook instances, for example. If you want to allow all communication within the namespace you need to add a selector for the same namespace.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: allow-all
namespace: kubeflow-user-example-com
spec:
rules:
- from:
- source:
namespaces:
- kubeflow-user-example-com
Another option would be to disable Istio injection for the TFX pods by adding the sidecar.istio.io/inject: "false"
label to the pods.
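For the second option, a sketch of where that label goes, using an illustrative bare Pod (TFX would normally set this through its pod configuration; also note that on older Istio versions this is an annotation rather than a label):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: infra-validator-model-server   # illustrative name
  namespace: kubeflow-user-example-com
  labels:
    # Tells the Istio sidecar injector to skip this pod, so its traffic
    # bypasses the mesh (and its RBAC) entirely.
    sidecar.istio.io/inject: "false"
spec:
  containers:
  - name: model-server
    image: tensorflow/serving   # illustrative image
```

The trade-off is that traffic to and from this pod is no longer protected by the mesh at all.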
@DavidSpek Yes, I have tried it, but there is still something wrong.
@DavidSpek Thank you so much for your kind reply. I do want to allow all communication within the namespace, so I tried to apply an AuthorizationPolicy as you posted and ran the TFX pipeline on Kubeflow, but InfraValidator got the same error.
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "RBAC: access denied"
debug_error_string = "{"created":"@1622787226.963275600","description":"Error received from peer ipv4:172.17.0.17:8500","file":"src/core/lib/surface/call.cc","file_line":1061,"grpc_message":"RBAC: access denied","grpc_status":7}"
>
I also tried adding all the namespaces, but it didn't work either.
By the way, I noticed that an AuthorizationPolicy was created when I set up Kubeflow; maybe it is related to the "RBAC: access denied" error I have been getting?
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
annotations:
role: admin
user: user@example.com
creationTimestamp: "2021-06-04T04:45:58Z"
generation: 1
name: ns-owner-access-istio
namespace: kubeflow-user-example-com
ownerReferences:
- apiVersion: kubeflow.org/v1
blockOwnerDeletion: true
controller: true
kind: Profile
name: kubeflow-user-example-com
uid: 5171631b-b45e-422f-aef4-3a523593549f
resourceVersion: "421173"
uid: d8f96486-a153-41cd-aaf6-7beaee3d4134
spec:
rules:
- when:
- key: request.headers[kubeflow-userid]
values:
- user@example.com
- when:
- key: source.namespace
values:
- kubeflow-user-example-com
What steps did you take:
In a multi-user enabled env, I created a notebook server in the user's namespace, launched a notebook, and tried to call the Python SDK from there. When I execute the code below:
What happened:
The API call was rejected with the following errors:
What did you expect to happen:
A pipeline run should be created and executed
Environment:
How did you deploy Kubeflow Pipelines (KFP)?
I installed KFP on IKS with multi-user support.
KFP version: v1.1.0
KFP SDK version: v1.0.0
Anything else you would like to add:
/kind bug