ververica / ververica-platform-playground

Instructions for getting started with Ververica Platform on minikube.
https://docs.ververica.com/getting_started/index.html
Apache License 2.0

Pass service account to taskmanager and jobmanager pods #22

Closed · cyrilou242 closed this issue 4 years ago

cyrilou242 commented 4 years ago

Hello, I (and @sysC0D) have a hard time understanding how we are supposed to pass service accounts to taskmanager and jobmanager pods.

We defined a service account flink-project1, which is supposed to be used by the pods scheduled on our project1 node pool, where the vvp-jobs pods run.

$ kubectl -n vvp-jobs describe serviceaccounts flink-project1
Name:                flink-project1
Namespace:           vvp-jobs
Labels:              <none>
Annotations:         iam.gke.io/gcp-service-account: flink-project1@staging-project.iam.gserviceaccount.com
Image pull secrets:  <none>
Mountable secrets:   flink-project1-token-jjkrt
Tokens:              flink-project1-token-jjkrt
Events:              <none>
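
For reference, a manifest reproducing this service account would look roughly as follows (the name, namespace, and Workload Identity annotation are taken verbatim from the output above; the GCP-side IAM binding is configured separately and not shown):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: flink-project1
  namespace: vvp-jobs
  annotations:
    # Binds this Kubernetes service account to a GCP service account
    # via GKE Workload Identity.
    iam.gke.io/gcp-service-account: flink-project1@staging-project.iam.gserviceaccount.com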

But it looks like service accounts are generated for the pods when they are launched. For instance, for the taskmanager we can see such a service account being generated and used:

$ kubectl -n vvp-jobs describe serviceaccount job-0fbe6901-2295-4bd0-8067-1b1c90906041-flink-ha-taskmanager
Name:                job-0fbe6901-2295-4bd0-8067-1b1c90906041-flink-ha-taskmanager
Namespace:           vvp-jobs
Labels:              app=flink-job
                     deploymentId=468922bc-bc27-4c15-b1dd-5ad143547fcb
                     jobId=0fbe6901-2295-4bd0-8067-1b1c90906041
                     system=ververica-platform
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   job-0fbe6901-2295-4bd0-8067-1b1c90906041-flink-ha-taskmana5ktb4
Tokens:              job-0fbe6901-2295-4bd0-8067-1b1c90906041-flink-ha-taskmana5ktb4
Events:              <none>

We can see it in the taskmanager Deployment YAML config:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-05-04T15:56:25Z"
  generation: 1
  labels:
    app: flink-job
    component: taskmanager
    deploymentId: 4364f18c-a074-4c4c-9cce-31a219169a48
    jobId: c3961d20-133d-4716-aad5-632a652ee41a
    system: ververica-platform
  name: job-c3961d20-133d-4716-aad5-632a652ee41a-taskmanager
  namespace: vvp-jobs
spec:
  template:
    metadata:
      annotations:
        kubernetes.io/service-account.name: flink-diddykong
    spec:
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - some_args
      nodeSelector:
        nodepool-role: project1
      serviceAccount: job-c3961d20-133d-4716-aad5-632a652ee41a-flink-ha-taskmanager
      serviceAccountName: job-c3961d20-133d-4716-aad5-632a652ee41a-flink-ha-taskmanager
status:
  availableReplicas: 1

(The YAML is not complete.) Notice serviceAccount and serviceAccountName. (The service account names differ between my kubectl command and the YAML because they come from different deployments, but you get the idea.)

We can see the same in the jobmanager Job YAML config:

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-05-04T15:56:25Z"
  labels:
    app: flink-job
    component: jobmanager
    deploymentId: 4364f18c-a074-4c4c-9cce-31a219169a48
    jobId: c3961d20-133d-4716-aad5-632a652ee41a
    system: ververica-platform
  name: job-c3961d20-133d-4716-aad5-632a652ee41a-jobmanager
  namespace: vvp-jobs
spec:
  template:
    metadata:
      annotations:
        kubernetes.io/service-account.name: flink-project1
      creationTimestamp: null
      labels:
        app: flink-job
    spec:
      containers:
      - args:
        - some_args
      dnsPolicy: ClusterFirst
      nodeSelector:
        nodepool-role: project1
      serviceAccount: job-c3961d20-133d-4716-aad5-632a652ee41a-flink-ha-jobmanager
      serviceAccountName: job-c3961d20-133d-4716-aad5-632a652ee41a-flink-ha-jobmanager
      terminationGracePeriodSeconds: 30
status:
  active: 1
  startTime: "2020-05-04T15:56:25Z"
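
A quick way to confirm which ServiceAccount a running pod actually ends up with (the pod name here is a hypothetical placeholder; the jsonpath flag is standard kubectl):

$ kubectl -n vvp-jobs get pod <taskmanager-pod-name> -o jsonpath='{.spec.serviceAccountName}'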

Would there be a way to pass our flink-project1 service account instead of the generated service accounts?

knaufk commented 4 years ago

Hi @cyrilou242,

Unfortunately, custom ServiceAccounts are not supported at the moment. We might be able to squeeze this into our next minor release, as you are not the only one looking for this.

Please see https://docs.ververica.com/user_guide/deployments/configure_kubernetes for what is currently possible. Do you have a way to work around this issue for now?
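
Roughly, the pod-level customization that page describes sits under the platform's own Deployment resource, along the lines of the sketch below. The field names are recalled from those docs rather than taken from this thread, so treat them as assumptions and verify against the linked page:

kind: Deployment
spec:
  template:
    spec:
      kubernetes:
        pods:
          # Extra metadata and scheduling hints applied to the generated
          # jobmanager/taskmanager pods. Note that a custom
          # serviceAccountName is not among the supported fields.
          labels:
            team: project1            # hypothetical label
          annotations:
            example.com/owner: data   # hypothetical annotation
          nodeSelector:
            nodepool-role: project1   # matches the nodeSelector above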

Best,

Konstantin

cyrilou242 commented 4 years ago

Hello @knaufk, thank you for your answer. We'd rather not rely on service account keys passed inside the cluster, and opening our GCP components (in a private network) is not an option either.

I'll see what can be done, but if the feature is coming soon I think we can wait a bit.

Edit: we'll begin with service account keys passed inside the cluster.

cyrilou242 commented 4 years ago

So we did it by passing a service account key and setting an environment variable.
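
For completeness, a minimal sketch of this kind of workaround, as the standard GCP pattern rather than our exact setup: store the key JSON in a Kubernetes Secret, mount it into the pods, and point GOOGLE_APPLICATION_CREDENTIALS (the variable the GCP client libraries read) at the mounted file. The secret name and mount path below are hypothetical, and how the volume and env var are wired into the VVP-managed pods is not shown in this thread.

$ kubectl -n vvp-jobs create secret generic flink-gcp-key --from-file=key.json=/path/to/key.json

      # Pod spec fragment (plain Kubernetes, for illustration):
      containers:
      - env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
        volumeMounts:
        - name: gcp-key
          mountPath: /var/secrets/google
          readOnly: true
      volumes:
      - name: gcp-key
        secret:
          secretName: flink-gcp-key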

Thanks for your help again!