Closed: yhwang closed this issue 3 years ago.
@wyljpn Just after posting that comment I realized that the Kubeflow-created authorization policy already does the same (allows all communication within the namespace). Do all the pods you use with this TFX job run in your namespace? Adding the label to the pods so they aren't injected with Istio sidecar proxies should still work, though.
@DavidSpek Thank you for your quick response.
> Do all the pods you use with this TFX job run in your namespace?
Yes, those pods are in the same namespace, as shown below.
The "tfx-infraval-modelserver-zzl26" pod was created by the "10components-20210601-qljnc-1318940345" pod.
And the "10components-20210601-qljnc-1318940345" pod sends requests to the "tfx-infraval-modelserver-zzl26" pod to get responses.
NAMESPACE NAME READY STATUS RESTARTS AGE
kubeflow-user-example-com tfx-infraval-modelserver-zzl26 2/2 Running 0 2m39s
kubeflow-user-example-com 10components-20210601-qljnc-1318940345 2/2 Running 0 22m
...
> Kubeflow-created authorization policy already does the same (allow all communication within the namespace)
Do you mean they can communicate without setting any additional authorization policy, since the two pods are in the same namespace? But if I don't apply the AuthorizationPolicy below, the InfraValidator gets "RBAC: access denied", so I am confused.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-all
  namespace: kubeflow-user-example-com
spec:
  rules:
  - {}
> Adding the label to the pods so they aren't injected with Istio sidecar proxies should still work though.
I am new to Kubeflow; so far I don't know what Istio sidecar proxies are or how to add the label to the pods, so maybe I should work on that later.
Thank you for your kind response again.
:)
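For reference, the label DavidSpek mentions is Istio's standard injection switch. A minimal pod-metadata sketch, assuming the namespace has sidecar injection enabled (the pod name and image here are placeholders, not taken from the TFX job):

apiVersion: v1
kind: Pod
metadata:
  name: model-server-example            # hypothetical pod name
  namespace: kubeflow-user-example-com
  labels:
    sidecar.istio.io/inject: "false"    # tells Istio not to inject the Envoy sidecar
spec:
  containers:
  - name: server
    image: tensorflow/serving           # placeholder image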
@wyljpn When you say:
But if I don't apply an AuthorizationPolicy below, the InfraValidator will get "RBAC: access denied", so I am confused.
What do you mean exactly? Is the InfraValidator a pod? Is that pod running in the same namespace as the pods it is trying to access? Are the requests going to the pods or services directly or are they making requests through the publicly accessible URI?
@DavidSpek Sorry for the confusion.
InfraValidator is a pod. It creates another pod to load and serve a trained model. They are running in the same namespace, and the InfraValidator sends requests to the other pod to get the model's status and prediction results.
The "RBAC: access denied" error happened in the InfraValidator, because the InfraValidator pod couldn't access the other pod.
Based on the source code, I think the requests from the InfraValidator pod go directly to the other pod, because it uses pod_ip and container_port to create a client and then sends requests. Is my understanding right? @ConverJens https://github.com/tensorflow/tfx/blob/v0.30.0/tfx/components/infra_validator/executor.py#L390 https://github.com/tensorflow/tfx/blob/v0.30.0/tfx/components/infra_validator/model_server_runners/kubernetes_runner.py#L177
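A rough Python sketch of what the linked runner appears to do (a paraphrase for illustration, not the actual TFX source; the pod name and port are placeholders):

from kubernetes import client, config

config.load_incluster_config()  # the runner executes inside the cluster
core = client.CoreV1Api()
pod = core.read_namespaced_pod(
    name="tfx-infraval-modelserver-zzl26",
    namespace="kubeflow-user-example-com",
)
# The client targets the pod directly (pod IP + container port), so the
# request never passes through a Service or the Istio ingress gateway.
endpoint = f"{pod.status.pod_ip}:8500"  # 8500 is a placeholder port

If that reading is right, it would explain why a namespace-level AuthorizationPolicy matters here: the pod-to-pod call is still intercepted by the destination pod's sidecar.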
@wyljpn Thank you for taking the time to provide such a detailed explanation. Are you part of the Kubeflow Slack? I'd like to ask you some more questions about this to try and figure out what is going wrong with the Istio policies.
@DavidSpek I think this issue might be informative: https://github.com/tensorflow/tfx/issues/3893.
Basically, no pod started in the Argo flow has any Istio sidecars, hence the IV serving pod shouldn't have one either.
> @wyljpn Thank you for taking the time to provide such a detailed explanation. Are you part of the Kubeflow Slack? I'd like to ask you some more questions about this to try and figure out what is going wrong with the Istio policies.
@DavidSpek I hadn't noticed there was a Kubeflow Slack community. I just created a Slack account and joined it a few minutes ago. My member ID is "U0253V66WCT"; maybe you can use it to find me. It's my pleasure to communicate with you.
This code worked for me:
`c = kfp.Client(host='http://xxxx:xxxx/pipeline')
c.create_run_from_pipeline_func(median_stop, arguments={}, experiment_name=experiment_name, namespace=experiment_namespace)`
after adding the two arguments experiment_name and namespace.
> This code worked for me:
> `c = kfp.Client(host='http://xxxx:xxxx/pipeline')
> c.create_run_from_pipeline_func(median_stop, arguments={}, experiment_name=experiment_name, namespace=experiment_namespace)`
> after adding the two arguments experiment_name and namespace.
@nongqiqin Could you share some more details, like host="value"?
Which versions of Kubeflow, Pipelines, and the SDK are you using?
Host: the Kubeflow Web UI ip:port
Kubeflow: 1.2
Latest SDK
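Putting those pieces together, a consolidated sketch (host and names are placeholders; median_stop is the pipeline function from the snippet above):

import kfp

client = kfp.Client(host='http://<kubeflow-ui-ip>:<port>/pipeline')
client.create_run_from_pipeline_func(
    median_stop,                            # pipeline function
    arguments={},
    experiment_name='my-experiment',        # hypothetical experiment name
    namespace='kubeflow-user-example-com',  # the user's profile namespace
)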
> ServiceRoleBinding
Trying this out now on Kubernetes 1.21.0 with Kubeflow 1.3.0 and Istio 1.9.0 (docker.io/istio/pilot:1.9.0).
Unfortunately, it seems that Istio no longer supports rbac.istio.io/v1alpha1 and has replaced the ServiceRoleBinding object with something else (https://istio.io/latest/blog/2019/v1beta1-authorization-policy/). The instructions for migrating from v1alpha1 to v1beta1 are a bit complex. Do you have an example of an equivalent YAML file for an AuthorizationPolicy that replaces the ServiceRoleBinding?
@ErikEngerd The AuthorizationPolicy and EnvoyFilter below work for me with k8s 1.18.19, kubeflow 1.3.0, and istio 1.9.5.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: bind-ml-pipeline-nb-kubeflow-user-example-com
  namespace: kubeflow
spec:
  selector:
    matchLabels:
      app: ml-pipeline
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/kubeflow-user-example-com/sa/default-editor"]
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: kubeflow-user-example-com
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
  workloadSelector:
    labels:
      notebook-name: test-jupyterlab
Function to create and run the pipeline:
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments, namespace='kubeflow-user-example-com')
Many thanks for your example. It seems to be working now and I can open notebooks without an error message. Next up is testing whether it actually works. To make the configuration a bit more general, I replaced the label selected on in the workloadSelector with
istio.io/rev: default
This is a bit of a hack, but it is a label that is present on all notebook servers I create within my namespace.
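For reference, the resulting selector would look like this (only the workloadSelector block changes from the example above):

workloadSelector:
  labels:
    istio.io/rev: default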
> Many thanks for your example. It seems to be working now and I can open notebooks without an error message. [...]

@KannanThiru and you, thank you! Your solution is useful to me.
@Bobgy
I studied the envoy filter more and here is a better version:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: mynamespace
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
  workloadSelector:
    labels:
      notebook-name: mynotebook
It directly uses the custom request header feature that http_connection_manager provides. Because the header name and value are fixed, there is no need for a Lua filter.
Let's say I want to run a pipeline from the default namespace, and I have wrapped the Kubeflow API in FastAPI in Python. What would the steps and the EnvoyFilter YAML look like for the default namespace?
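For what it's worth, a hedged adaptation of the filter above for the default namespace, assuming the FastAPI pod has an Istio sidecar injected (the app: fastapi-wrapper label is hypothetical and should match your pod's labels):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: default
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
  workloadSelector:
    labels:
      app: fastapi-wrapper  # hypothetical label for the FastAPI pod

The kubeflow-side AuthorizationPolicy would also need to allow the service account the FastAPI pod runs under.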
Our official feature to support this use-case is https://github.com/kubeflow/pipelines/issues/5138. The PR has been merged and released: https://github.com/kubeflow/pipelines/pull/5676. Let's keep tracking the remaining documentation task in #5138 and close this issue.
This is by design. In the current phase, the KFP API server needs a trusted source for user identity. @Bobgy since this is a security feature, is there a way to turn it off and go back to the older system, where a header could be specified via the CLI?
> @ErikEngerd The AuthorizationPolicy and EnvoyFilter below work for me with k8s 1.18.19, kubeflow 1.3.0, and istio 1.9.5. [...]
> Function to create and run the pipeline:
> kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments, namespace='kubeflow-user-example-com')
Based on this, I tried to modify the source code of the jupyter-web-app deployment and build an image that adds an extra label to each newly created pod. I don't know whether it's a good solution.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: kubeflow-user-example-com
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
  workloadSelector:
    labels:
      notebook-type: notebook
The notebook-type label will match every pod that carries it.
def set_notebook_configurations(notebook, body, defaults):
    notebook_labels = notebook["metadata"]["labels"]
    labels = get_form_value(body, defaults, "configurations")
    if not isinstance(labels, list):
        raise BadRequest("Labels for PodDefaults are not list: %s" % labels)
    for label in labels:
        notebook_labels[label] = "true"
    # add default label for pod
    notebook_labels["notebook-type"] = "notebook"
Following the instructions to access Kubeflow Pipelines from inside your cluster solved my access issue.
@dewnull @Bobgy Hi, after I create a PodDefault and a new notebook server with the "Allow access" config, I still get an "RBAC: access denied" error when using the command curl -X GET 10.1.207.72:8888/apis/v1beta1/runs/75cd37e7-03d9-431b-a00d-a7451510bac1, where 75cd37e7-03d9-431b-a00d-a7451510bac1 is an existing run in the same namespace.
Can you show how you connect to the pipeline from the notebook? For example, some Python code?
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: kubeflow-user-example-com
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
  workloadSelector:
    labels:
      notebook-type: notebook
For reference, if you remove the workloadSelector, it will be applied to all notebooks in the namespace.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: add-header
  namespace: kubeflow-user-example-com
spec:
  configPatches:
  - applyTo: VIRTUAL_HOST
    match:
      context: SIDECAR_OUTBOUND
      routeConfiguration:
        vhost:
          name: ml-pipeline.kubeflow.svc.cluster.local:8888
          route:
            name: default
    patch:
      operation: MERGE
      value:
        request_headers_to_add:
        - append: true
          header:
            key: kubeflow-userid
            value: user@example.com
@bonclay I already did as you suggested, without the workloadSelector, but I still get "RBAC: access denied" with the command curl -X GET 10.1.207.72:8888/apis/v1beta1/runs/75cd37e7-03d9-431b-a00d-a7451510bac1.
By the way, I could get experiment information with the following Python code before:
import kfp
client = kfp.Client()
print(client.list_experiments())
but after applying the EnvoyFilter.yaml, it fails with an "RBAC: access denied" error.
@grapefruitL As in the comments above, you also need to add the AuthorizationPolicy. My comments were written assuming that the AuthorizationPolicy is applied.
Just installed Kubeflow on AWS EKS a few days ago, using the AWS Kubeflow manifests from https://github.com/awslabs/kubeflow-manifests. If I understand correctly, this installs version 1.4.1 of the manifests from https://github.com/kubeflow/manifests under the hood.
All in all it works, so I created a new Jupyter notebook server. I'm using the default user and have an issue with no access to the Pipeline service via:
import kfp
client = kfp.Client()
print(client.list_experiments())
Error message:
ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Mon, 23 May 2022 08:36:33 GMT', 'x-envoy-upstream-service-time': '15', 'server': 'envoy', 'transfer-encoding': 'chunked'})
HTTP response body: {"error":"Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer)
I also tried:
import kfp
client = kfp.Client(host='http://ml-pipeline-ui.kubeflow:80')
print(client.list_experiments())
Error:
ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'content-length': '19', 'content-type': 'text/plain', 'date': 'Mon, 23 May 2022 08:40:03 GMT', 'server': 'envoy', 'x-envoy-upstream-service-time': '26'})
HTTP response body: RBAC: access denied
Also tried:
import kfp
client = kfp.Client(host='http://ml-pipeline-ui:80')
print(client.list_experiments())
Error message in this case:
MaxRetryError: HTTPConnectionPool(host='ml-pipeline-ui', port=80): Max retries exceeded with url: /apis/v1beta1/experiments?page_token=&page_size=10&sort_by=&resource_reference_key.type=NAMESPACE&resource_reference_key.id=kubeflow-user-example-com (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f67a65cbe50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
I added the definition below according to the documentation (https://www.kubeflow.org/docs/components/pipelines/sdk/connect-api/), recreated the Jupyter server, and checked that the new config has a token:
apiVersion: kubeflow.org/v1alpha1
kind: PodDefault
metadata:
  name: access-ml-pipeline
  namespace: kubeflow-user-example-com
spec:
  desc: Allow access to Kubeflow Pipelines
  selector:
    matchLabels:
      access-ml-pipeline: "true"
  volumes:
  - name: volume-kf-pipeline-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 7200
          audience: pipelines.kubeflow.org
  volumeMounts:
  - mountPath: /var/run/secrets/kubeflow/pipelines
    name: volume-kf-pipeline-token
    readOnly: true
  env:
  - name: KF_PIPELINES_SA_TOKEN_PATH
    value: /var/run/secrets/kubeflow/pipelines/token
Now, in the notebook, this code prints the token data:
with open("/var/run/secrets/kubeflow/pipelines/token", "r") as f:
    print(f.read())
Still, I get the same 403 errors.
Any ideas on what I can do to reach the pipelines service from notebooks? Do I need the EnvoyFilter as well, and why?
@rustam-ashurov-mcx try updating kfp to 1.8.1: pip install kfp --upgrade
After the lib update it works fine; the exact code that works for me is the variant without anything really special:
client = kfp.Client()
print(client.list_experiments())
So, all in all, for me it was (many thanks to @grapefruitL):
import os

with open(os.environ['KF_PIPELINES_SA_TOKEN_PATH'], "r") as f:
    TOKEN = f.read()

import kfp

client = kfp.Client(
    host='http://ml-pipeline.kubeflow.svc.cluster.local:8888',
    # host='http://ml-pipeline-ui.kubeflow.svc.cluster.local:80',  # <-- does not work; causes HTTP response body: RBAC: access denied
    # existing_token=TOKEN,  # not required
)
print(client.list_pipelines())
Result:
{'next_page_token': None,
'pipelines': [{'created_at': datetime.datetime(2022, 5, 22, 2, 5, 33, tzinfo=tzlocal()),
'default_version': {'code_source_url': None,
'created_at': datetime.datetime(2022, 5, 22, 2, 5, 33, tzinfo=tzlocal()),
'id': 'b693a0d3-b11c-4c5b-b3f9-6158382948d6',
'name': '[Demo] XGBoost - Iterative model '
'training',
'package_url': None,
'parameters': None,
'resource_references': [{'key': {'id': 'b693a0d3-b11c-4c5b-b3f9-6158382948d6',
'type': 'PIPELINE'},
'name': None,
'relationship': 'OWNER'}]},
'description': '[source '
'code](https://github.com/kubeflow/pipelines/blob/c8a18bde299f2fdf5f72144f15887915b8d11520/samples/core/train_until_good/train_until_good.py) '
'This sample demonstrates iterative training '
'using a train-eval-check recursive loop. The '
'main pipeline trains the initial model and '
'then gradually trains the model some more '
'until the model evaluation metrics are good '
'enough.',
'error': None,
'id': 'b693a0d3-b11c-4c5b-b3f9-6158382948d6',
'name': '[Demo] XGBoost - Iterative model training',
'parameters': None,
'resource_references': None,
'url': None},
{'created_at': datetime.datetime(2022, 5, 22, 2, 5, 34, tzinfo=tzlocal()),
'default_version': {'code_source_url': None,
'created_at': datetime.datetime(2022, 5, 22, 2, 5, 34, tzinfo=tzlocal()),
'id': 'c65b4f2e-362d-41a8-8f5c-9b944830029e',
'name': '[Demo] TFX - Taxi tip prediction '
'model trainer',
'package_url': None,
'parameters': [{'name': 'pipeline-root',
'value': 'gs://{{kfp-default-bucket}}/tfx_taxi_simple/{{workflow.uid}}'},
{'name': 'module-file',
'value': '/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/taxi_utils_native_keras.py'},
{'name': 'push_destination',
'value': '{"filesystem": '
'{"base_directory": '
'"gs://your-bucket/serving_model/tfx_taxi_simple"}}'}],
'resource_references': [{'key': {'id': 'c65b4f2e-362d-41a8-8f5c-9b944830029e',
'type': 'PIPELINE'},
'name': None,
'relationship': 'OWNER'}]},
'description': '[source '
'code](https://github.com/kubeflow/pipelines/tree/c8a18bde299f2fdf5f72144f15887915b8d11520/samples/core/parameterized_tfx_oss) '
'[GCP Permission '
'requirements](https://github.com/kubeflow/pipelines/blob/c8a18bde299f2fdf5f72144f15887915b8d11520/samples/core/parameterized_tfx_oss#permission). '
'Example pipeline that does classification with '
'model analysis based on a public tax cab '
'dataset.',
'error': None,
'id': 'c65b4f2e-362d-41a8-8f5c-9b944830029e',
'name': '[Demo] TFX - Taxi tip prediction model trainer',
'parameters': [{'name': 'pipeline-root',
'value': 'gs://{{kfp-default-bucket}}/tfx_taxi_simple/{{workflow.uid}}'},
{'name': 'module-file',
'value': '/opt/conda/lib/python3.7/site-packages/tfx/examples/chicago_taxi_pipeline/taxi_utils_native_keras.py'},
{'name': 'push_destination',
'value': '{"filesystem": {"base_directory": '
'"gs://your-bucket/serving_model/tfx_taxi_simple"}}'}],
'resource_references': None,
'url': None},
{'created_at': datetime.datetime(2022, 5, 22, 2, 5, 35, tzinfo=tzlocal()),
'default_version': {'code_source_url': None,
'created_at': datetime.datetime(2022, 5, 22, 2, 5, 35, tzinfo=tzlocal()),
'id': '56bb7063-ade0-4074-9721-b063f42c46fd',
'name': '[Tutorial] Data passing in python '
'components',
'package_url': None,
'parameters': None,
'resource_references': [{'key': {'id': '56bb7063-ade0-4074-9721-b063f42c46fd',
'type': 'PIPELINE'},
'name': None,
'relationship': 'OWNER'}]},
'description': '[source '
'code](https://github.com/kubeflow/pipelines/tree/c8a18bde299f2fdf5f72144f15887915b8d11520/samples/tutorials/Data%20passing%20in%20python%20components) '
'Shows how to pass data between python '
'components.',
'error': None,
'id': '56bb7063-ade0-4074-9721-b063f42c46fd',
'name': '[Tutorial] Data passing in python components',
'parameters': None,
'resource_references': None,
'url': None},
{'created_at': datetime.datetime(2022, 5, 22, 2, 5, 36, tzinfo=tzlocal()),
'default_version': {'code_source_url': None,
'created_at': datetime.datetime(2022, 5, 22, 2, 5, 36, tzinfo=tzlocal()),
'id': '36b09aa0-a317-4ad4-a0ed-ddf55a485eb0',
'name': '[Tutorial] DSL - Control '
'structures',
'package_url': None,
'parameters': None,
'resource_references': [{'key': {'id': '36b09aa0-a317-4ad4-a0ed-ddf55a485eb0',
'type': 'PIPELINE'},
'name': None,
'relationship': 'OWNER'}]},
'description': '[source '
'code](https://github.com/kubeflow/pipelines/tree/c8a18bde299f2fdf5f72144f15887915b8d11520/samples/tutorials/DSL%20-%20Control%20structures) '
'Shows how to use conditional execution and '
'exit handlers. This pipeline will randomly '
'fail to demonstrate that the exit handler gets '
'executed even in case of failure.',
'error': None,
'id': '36b09aa0-a317-4ad4-a0ed-ddf55a485eb0',
'name': '[Tutorial] DSL - Control structures',
'parameters': None,
'resource_references': None,
'url': None},
{'created_at': datetime.datetime(2022, 5, 24, 6, 46, 45, tzinfo=tzlocal()),
'default_version': {'code_source_url': None,
'created_at': datetime.datetime(2022, 5, 24, 6, 46, 45, tzinfo=tzlocal()),
'id': 'da2bc8b4-27f2-4aa3-befb-c53487d9db49',
'name': 'test',
'package_url': None,
'parameters': [{'name': 'a', 'value': '1'},
{'name': 'b', 'value': '7'}],
'resource_references': [{'key': {'id': 'da2bc8b4-27f2-4aa3-befb-c53487d9db49',
'type': 'PIPELINE'},
'name': None,
'relationship': 'OWNER'}]},
'description': 'test',
'error': None,
'id': 'da2bc8b4-27f2-4aa3-befb-c53487d9db49',
'name': 'test',
'parameters': [{'name': 'a', 'value': '1'},
{'name': 'b', 'value': '7'}],
'resource_references': None,
'url': None}],
'total_size': 5}
> apiVersion: security.istio.io/v1beta1
> kind: AuthorizationPolicy
> metadata:
>   labels:
>     app.kubernetes.io/component: ml-pipeline
>     app.kubernetes.io/name: kubeflow-pipelines
>     application-crd-id: kubeflow-pipelines
>   name: ml-pipeline
>   namespace: kubeflow
> spec:
>   rules:
>   - from:
>     - source:
>         principals:
>         - cluster.local/ns/kubeflow/sa/ml-pipeline
>         - cluster.local/ns/kubeflow/sa/ml-pipeline-ui
>         - cluster.local/ns/kubeflow/sa/ml-pipeline-persistenceagent
>         - cluster.local/ns/kubeflow/sa/ml-pipeline-scheduledworkflow
>         - cluster.local/ns/kubeflow/sa/ml-pipeline-viewer-crd-service-account
>         - cluster.local/ns/kubeflow/sa/kubeflow-pipelines-cache
Edit the Istio CRD AuthorizationPolicy ml-pipeline and, under principals, add cluster.local/ns/
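A sketch of how to make that edit, assuming cluster-admin access (append your profile's service account under spec.rules[0].from[0].source.principals):

# open the policy in an editor and add the principal to the list above
kubectl edit authorizationpolicy ml-pipeline -n kubeflow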
I am facing a similar problem: print(client.list_experiments()) works fine with kfp v1.8 after adding the EnvoyFilter and AuthorizationPolicy, but when I use kfp v2 I get the following error:
ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'content-type': 'text/plain; charset=utf-8', 'x-content-type-options': 'nosniff', 'date': 'Sat, 04 May 2024 07:38:27 GMT', 'content-length': '10', 'x-envoy-upstream-service-time': '3', 'server': 'envoy'})
HTTP response body: Not Found
Any idea how to resolve this?
What steps did you take:
In a multi-user enabled environment, I created a notebook server in the user's namespace, launched a notebook, and tried to call the Python SDK from there. When I execute the code below:
What happened:
The API call was rejected with the following errors:
What did you expect to happen:
A pipeline run should be created and executed
Environment:
How did you deploy Kubeflow Pipelines (KFP)? I installed KFP on IKS with multi-user support.
KFP version: v1.1.0
KFP SDK version: v1.0.0
Anything else you would like to add:
/kind bug