Open scholtzan opened 3 years ago
This was removed in favor of adopting the REST API defined in Argo Workflows (hence the 4.0) over Kubernetes API access.
Can you check the Argo submission example in the README? Is that acceptable to you?
(Providing a code snippet would also help.)
I've been trying to use `kubernetes.config.load_kube_config` together with `argo-client-python`. This used to work for argo 2.11.8 with an older version of this SDK (my fork at `argo-workflows-fvdnabee==3.6.0`). But since updating my cluster to argo 3.0.1 and the SDK to 5.0.0, it doesn't work for me.
An example that gets a workflow:
```python
from __future__ import print_function

import kubernetes.client
from kubernetes import config
import argo.workflows.client as argo_client

configuration = kubernetes.client.Configuration()
config.load_kube_config(client_configuration=configuration)

with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CoreV1Api(api_client)
    print("Listing pods with their IPs:")
    ret = api_instance.list_pod_for_all_namespaces(watch=False)
    for i in ret.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

configuration = argo_client.Configuration()
config.load_kube_config(client_configuration=configuration)

with argo_client.ApiClient(configuration=configuration) as api_client:
    namespace = "default"
    name = "my-workflow"
    # Create an instance of the API class
    service = argo_client.WorkflowServiceApi(api_client)
    argo_api_response = service.get_workflow(namespace, name)
    print(argo_api_response)
```
`kubernetes.client.CoreV1Api.list_pod_for_all_namespaces` from the Python kubernetes client is working in this example for me, but `argo.workflows.client.WorkflowServiceApi.get_workflow` throws the following exception:
```
Traceback (most recent call last):
  File "example.py", line 24, in <module>
    argo_api_response = service.get_workflow(namespace, name)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api/workflow_service_api.py", line 341, in get_workflow
    return self.get_workflow_with_http_info(namespace, name, **kwargs)  # noqa: E501
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api/workflow_service_api.py", line 445, in get_workflow_with_http_info
    collection_formats=collection_formats)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api_client.py", line 369, in call_api
    _preload_content, _request_timeout, _host)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api_client.py", line 188, in __call_api
    raise e
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api_client.py", line 185, in __call_api
    _request_timeout=_request_timeout)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/api_client.py", line 393, in request
    headers=headers)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/rest.py", line 234, in GET
    query_params=query_params)
  File "/tmp/k8s-client-test/.env/lib/python3.6/site-packages/argo/workflows/client/rest.py", line 224, in request
    raise ApiException(http_resp=r)
argo.workflows.client.exceptions.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '519ef8aa-7e2b-4ce3-a17f-561c615f029f', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Tue, 20 Apr 2021 14:27:45 GMT', 'Content-Length': '315'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"workflows \"default\" is forbidden: User \"system:anonymous\" cannot get resource \"workflows/my-workflow\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"name":"default","kind":"workflows"},"code":403}
```
This appears to be an authorization issue (I'm using AWS EKS as a cluster, configured through `~/.kube/config` here), but it was working like this with argo-workflows 3.6.0, and the kubernetes Python client (first part of the example above) works fine. `kubectl` on the CLI (which uses the same kubeconfig) also works. Considering all of this, the issue might lie with argo-workflows: e.g. the Python kubernetes client initializes the Configuration object, but this object is no longer valid for argo-workflows. Any suggestions?
I'm not sure if this is helpful, but I ran into the same issue. I'm by no means an Argo or Kubernetes expert, but from what I understood when I did some digging a couple of months back, there are 2 APIs:

1. The Kubernetes API, via the `CustomResourceDefinitions` in Argo's installation manifest. This API is used by the old client version and is also used by default by the argo CLI.
2. The argo-server REST API, which the new client implementation talks to.

In my use case, the argo-server is running in a pod on GKE. Sending requests to the Kubernetes cluster using the new API implementation did not work anymore. The Argo docs recommend port-forwarding or setting up a load balancer in order to send requests to this API, but both options were slightly annoying to set up.
So instead I opted for sending requests to the Kubernetes API without using the argo client library, which was fine since I didn't actually use much of the functionality the client library provided. If it helps, here is the implementation of the workaround I ended up using: https://github.com/mozilla/jetstream/blob/13d035a3bf46653ab4358ee66af1bcd30947af56/jetstream/argo.py#L131
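The workaround boils down to treating Workflows as plain custom resources via the Kubernetes API. A minimal sketch of the idea, assuming the `kubernetes` package and a valid kubeconfig (`workflow_crd_args` and `get_workflow` are hypothetical helper names, not part of any library):

```python
# Argo's Workflow CRD coordinates, as defined in its installation manifest.
ARGO_GROUP = "argoproj.io"
ARGO_VERSION = "v1alpha1"
ARGO_PLURAL = "workflows"


def workflow_crd_args(namespace, name):
    """Build the positional arguments for CustomObjectsApi.get_namespaced_custom_object."""
    return (ARGO_GROUP, ARGO_VERSION, namespace, ARGO_PLURAL, name)


def get_workflow(namespace, name):
    """Fetch a Workflow through the k8s CRD API instead of argo-server."""
    import kubernetes

    kubernetes.config.load_kube_config()
    api = kubernetes.client.CustomObjectsApi()
    # Returns the Workflow as a plain dict (status, spec, metadata, ...).
    return api.get_namespaced_custom_object(*workflow_crd_args(namespace, name))
```

Because this goes through the regular Kubernetes API server, the kubeconfig credentials that already work for `kubectl` are honored without any extra setup.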
In relation to the APIs and the other Argo client SDKs:
For the Python SDK, I've done some further digging, and it appears that the API classes set `auth_settings` to an empty list when calling `ApiClient.call_api`, for example: https://github.com/argoproj-labs/argo-client-python/blob/master/argo/workflows/client/api/workflow_service_api.py#L146. All the classes in the api module set empty auth settings.
After setting the auth settings via `auth_settings = self.api_client.configuration.auth_settings()`, I am able to authenticate with the argo-server (similar to entering the token from `argo auth token` directly into the Web UI). This way I've had some success authorizing with the argo-server API via a Bearer token generated from my kubeconfig.
So, when passing the proper auth settings in `workflow_service_api.py`, the following example is working for me:
```python
import yaml
import requests
import kubernetes
from argo.workflows.client import (ApiClient,
                                   WorkflowServiceApi,
                                   Configuration,
                                   V1alpha1WorkflowCreateRequest)

config = Configuration()
kubernetes.config.load_kube_config(client_configuration=config)
# config.host = "https://argo-server.example:2746/"  # when exposing argo-server behind a load balancer
config.host = "https://localhost:2746/"  # when exposing argo-server via port forward: `kubectl -n argo port-forward deployment/argo-server 2746:2746`
config.verify_ssl = False  # my argo-server does not have a valid SSL certificate

client = ApiClient(configuration=config)
service = WorkflowServiceApi(api_client=client)

WORKFLOW = 'https://raw.githubusercontent.com/argoproj/argo/v2.12.2/examples/dag-diamond-steps.yaml'
resp = requests.get(WORKFLOW)
manifest: dict = yaml.safe_load(resp.text)

service.create_workflow('default', V1alpha1WorkflowCreateRequest(workflow=manifest))
```
As the API classes and the `ApiClient` class are auto-generated via OpenAPI, I'm unsure how to set the proper `auth_settings` without patching every method. Patching `ApiClient` to check for empty `auth_settings` would be the most straightforward approach. But maybe the input to openapi should be changed so that openapi's output handles authentication correctly?
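To illustrate the "patch `ApiClient`" option: a small wrapper around `call_api` could substitute the `BearerToken` security definition whenever the generated methods pass an empty list. This is a hedged sketch, not library code; `patch_auth_settings` is a hypothetical helper, and it assumes the generated methods pass `auth_settings` as a keyword argument (which is what the openapi-python generator emits):

```python
def patch_auth_settings(api_client_cls, default=("BearerToken",)):
    """Monkey-patch call_api so empty auth_settings fall back to BearerToken.

    `BearerToken` is the security definition name from the argo swagger spec;
    ApiClient looks it up in Configuration.auth_settings() on each request.
    """
    original = api_client_cls.call_api

    def call_api(self, *args, **kwargs):
        # The generated API classes currently pass auth_settings=[]; replace
        # an empty/missing value with the default security definition names.
        if not kwargs.get("auth_settings"):
            kwargs["auth_settings"] = list(default)
        return original(self, *args, **kwargs)

    api_client_cls.call_api = call_api
    return api_client_cls
```

After `patch_auth_settings(ApiClient)`, every request made through that client class would consult the configuration's `BearerToken` entry, without touching the generated `workflow_service_api.py`.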
Sticking to the k8s API with Argo's CRDs appears to be a good alternative to the Python SDK, as you aren't forced to expose your in-cluster argo-server via either a port-forward or a load balancer. You also won't be impacted by the client SDK lagging behind the Argo release (which has been the case for the Python SDK), as long as Argo doesn't change the CRDs. Also, the k8s API is directly accessible when using kubeconfig (with proper SSL certificates etc. already set); for argo-server you have to take care of this yourself (or run argo-server without SSL, which might not be recommended in a production environment?).
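For comparison, submitting a workflow through the CRD route looks roughly like this. A sketch under the same assumptions as before (the `kubernetes` package plus a valid kubeconfig); `validate_workflow_manifest` and `submit_workflow` are hypothetical helper names:

```python
def validate_workflow_manifest(manifest):
    """Basic sanity check on a Workflow manifest dict before submission."""
    if manifest.get("apiVersion") != "argoproj.io/v1alpha1":
        raise ValueError("expected apiVersion argoproj.io/v1alpha1")
    if manifest.get("kind") != "Workflow":
        raise ValueError("expected kind Workflow")
    return manifest


def submit_workflow(manifest, namespace="default"):
    """Create a Workflow via the k8s CRD API instead of argo-server."""
    import kubernetes

    kubernetes.config.load_kube_config()
    api = kubernetes.client.CustomObjectsApi()
    # Arguments: group, version, namespace, plural, body.
    return api.create_namespaced_custom_object(
        "argoproj.io", "v1alpha1", namespace, "workflows",
        validate_workflow_manifest(manifest))
```

This mirrors what `argo submit` does by default, so RBAC rules written for the CLI should apply unchanged.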
I can bring this back. The reason I took it away was to make it consistent with the Java SDK (which goes via the Argo Server API); @alexec, it would be great if you could comment here. I did not mean to make it more difficult for most of the users actually using this SDK.
If the k8s CRD is still the best way for you to use the SDK, I can see to bringing it back. However, it won't benefit from the Argo Server API.
@fvdnabee I have been away for a bit and am not sure about your context. Perhaps propose a PR?
There are two issues being discussed here:

1. `argo-client-python` follows the other Argo SDKs, which appear to have removed support for the CRDs. Python users wanting to access the Argo CRDs can use a generic k8s client with CRD support (as @scholtzan pointed out).
2. `auth_settings`: the argo-workflows swagger API defines `BearerToken` as a security definition, but it doesn't appear to get picked up by the openapi generator. I can propose a PR for setting the `auth_settings` by patching the openapi-generated output. But it might be preferable if the openapi output set the proper `auth_settings` directly? I'm unsure how this can be accomplished, however. Perhaps someone more knowledgeable with openapi could shed some light on this? @alexec any ideas?
It looks like the `BearerToken` security definition got removed in 4.0.1, but it used to be defined before in https://github.com/argoproj-labs/argo-client-python/blob/5fe8276ce2dd94237a5a4998211792b7e5e70249/openapi/swagger.json#L8. I'm not sure if this was intentional, but using the most recent version for submitting a workflow results in a 403 Forbidden status code.