medic / cht-user-management

Allow app services teammates to deploy to production via helm scripts #36

Closed. kennsippell closed this issue 6 months ago

kennsippell commented 7 months ago

See "deploy new version". Daniel and I wrote those docs and tested them, but I'd love it if someone else could test! Note the the "Requirements" section above.

kennsippell commented 6 months ago

Hey @mrjones-plip. I've completed all prerequisites.

helm version
version.BuildInfo{Version:"v3.14.0", GitCommit:"3fc9f4b2638e76f26739cd77c7017139be81d0ea", GitTreeState:"clean", GoVersion:"go1.21.5"}
aws --version
aws-cli/1.22.34 Python/3.10.12 Linux/6.2.0-39-generic botocore/1.23.34

I'm looking into this... Not sure what this means

kubectl version
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
~/medic-infrastructure/terraform/aws/dev/eks/access/eks-aws-mfa-login kennsippel 947154
To finish so kubectl uses correct AWS_PROFILE please export that env var. i.e: export AWS_PROFILE=kennsippel
  Assumed EKS role for kennsippel. kubectl -n kennsippel-dev get pods should now work. Or you have been notified about other namespaces you have access to.
If this is your first time logging in, please run aws eks update-kubeconfig from README instructions found in this repo
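
(For reference, the update-kubeconfig step that output mentions looks roughly like this for the dev cluster; the region and cluster name come from the output above, and the profile is whatever the login script told you to export:)

aws eks update-kubeconfig \
  --region eu-west-2 \
  --name dev-cht-eks \
  --profile kennsippel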

The command kubectl -n kennsippel-dev does not work... it just outputs the help page (?)

And I'm unable to deploy:

helm upgrade \
      --kube-context arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks \
      --namespace users-chis-prod \
      --values values/users-chis-ke.yaml \
      users-chis-ke medic/cht-user-management
Error: UPGRADE FAILED: Kubernetes cluster unreachable: context "arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks" does not exist

mrjones-plip commented 6 months ago

Hey @kennsippell - Awesome you're working through this \o/

I think the error from kubectl version needs to be resolved first. Here's what I get with kubectl version --short:

 kubectl version --short

Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.24.17-eks-8cb36c9
WARNING: version difference between client (1.27) and server (1.24) exceeds the supported minor version skew of +/-1

> The command kubectl -n kennsippel-dev does not work... it just outputs the help page (?)

Yes - you're missing a verb (e.g. get) and a noun (e.g. deployments). Here's what you might expect from that (though your output may be empty as you haven't deployed anything):

kubectl get deployments -n mrjones-dev    
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
watchdog-grafana                   1/1     1            1           76d
watchdog-kube-state-metrics        1/1     1            1           62d
watchdog-prometheus-alertmanager   1/1     1            1           62d
watchdog-prometheus-pushgateway    1/1     1            1           62d
watchdog-prometheus-server         1/1     1            1           62d

> And I'm unable to deploy:

For real debugging, I think you'll need to get with one of the Infra team to help debug and ensure your perms are correct. I've started a thread in Slack.

mrjones-plip commented 6 months ago

@kenn - separate from your perms, please let me know if you need me to push either KE or UG versions to production.

kennsippell commented 6 months ago

@mrjones-plip If this is going to be a while, it'd be great if you could deploy 1.0.8 for both KE and UG.

mrjones-plip commented 6 months ago

@kennsippell - great - let's do it! I need to bump the values.yaml to 1.0.8 so I have a quickie PR to do that.

I'll go ahead and push and report back.
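
(The bump itself is just the version tag in each of the two values files. A minimal sketch, assuming the chart reads the application image tag from a key like the one below; the real key name depends on the chart's values layout:)

# values/users-chis-ke.yaml (and likewise users-chis-ug.yaml); key name is illustrative
image:
  tag: "1.0.8"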

kennsippell commented 6 months ago

Thanks Jonesy. Support on permission issues is needed from either @medic/site-reliability-engineering or @nydr. Can we give @freddieptf the same permissions?

mrjones-plip commented 6 months ago

Push to prod for UG and KE is complete:

➜  cht-user-management git:(bump-all-to-1.0.8) /home/mrjones/Documents/MedicMobile/medic-infrastructure/terraform/aws/dev/eks/access/eks-aws-mfa-login mrjones 819078                   [338/23375]
To finish so kubectl uses correct AWS_PROFILE please export that env var. i.e: export AWS_PROFILE=mrjones                                         
Assumed EKS role for mrjones. kubectl -n mrjones-dev get pods should now work. Or you have been notified about other namespaces you have access to.
If this is your first time logging in, please run aws eks update-kubeconfig from README instructions found in this repo  

➜  deploy git:(bump-all-to-1.0.8) helm upgrade \
      --kube-context arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks \
      --namespace users-chis-prod \
      --values values/users-chis-ug.yaml \
      users-chis-ug /home/mrjones/Documents/MedicMobile/helm-charts/charts/cht-user-management
Release "users-chis-ug" has been upgraded. Happy Helming!
NAME: users-chis-ug
LAST DEPLOYED: Wed Jan 31 11:13:05 2024
NAMESPACE: users-chis-prod
STATUS: deployed
REVISION: 3

➜  deploy git:(bump-all-to-1.0.8) helm upgrade \                
      --kube-context arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks \
      --namespace users-chis-prod \
      --values values/users-chis-ke.yaml \ 
      users-chis-ke /home/mrjones/Documents/MedicMobile/helm-charts/charts/cht-user-management    
Release "users-chis-ke" has been upgraded. Happy Helming!
NAME: users-chis-ke
LAST DEPLOYED: Wed Jan 31 11:16:17 2024
NAMESPACE: users-chis-prod
STATUS: deployed
REVISION: 8

mrjones-plip commented 6 months ago

@kennsippell - let's follow up with your perms in this ticket.

mrjones-plip commented 6 months ago

Re-opening and deferring to @kennsippell on how he wants to proceed!

kennsippell commented 6 months ago

@nydr @mrjones-plip Sorry, I'm still struggling with this error.

kubectl version 
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

I did the following:

  1. Uninstall kubectl 1.29. Reinstall 1.24 (in accordance with guidance here)
  2. Delete ~/.kube/config
  3. Run steps 1-4 here
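
(For reference, step 1 on Linux amd64 is roughly the standard manual kubectl install from the Kubernetes docs, pinned to a 1.24 patch release:)

curl -LO "https://dl.k8s.io/release/v1.24.17/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client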

I'm not really sure what this means or what I should do. This thread seems to indicate it may be an issue with configmap (?) Any suggestions?
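
(If that is the EKS aws-auth ConfigMap, it lives in the kube-system namespace; someone with working cluster access can inspect it with:)

kubectl -n kube-system get configmap aws-auth -o yaml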

nydr commented 6 months ago

Is this the full output of kubectl version? I'd expect it to list the client version even if it's unable to connect to the server.

Most likely you have a local k8s installation that requires a different version. Are you using Docker Desktop with Kubernetes enabled, or k3d, by any chance? What does kubectl config get-contexts output?

Edit: This message is likely caused by the kube config trying to use a version of client.authentication.k8s.io that is not supported by the local kubectl install.
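
(A quick way to see which exec apiVersion each cluster entry in your kubeconfig is requesting, which is what this error is complaining about:)

grep -n "client.authentication.k8s.io" ~/.kube/config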

kennsippell commented 6 months ago

This is indeed the full output of kubectl version. Maybe it's shorter on 1.24?

If I do kubectl version --client I can see more:

kubectl version --client
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.17", GitCommit:"22a9682c8fe855c321be75c5faacde343f909b04", GitTreeState:"clean", BuildDate:"2023-08-23T23:44:35Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4

I don't have any local k8s installations. I'm not using Docker Desktop, and I didn't know what k3d was until right now.

kubectl config get-contexts
CURRENT   NAME                                                      CLUSTER                                                   AUTHINFO                                                  NAMESPACE
          arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks    arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks    arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks    
*         arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks   arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks   arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks   

Thanks for caring :D

nydr commented 6 months ago

Thanks for the update

A) I re-ran the command for the auth ConfigMap on the prod cluster; could you try again?

B) Does it work if you change the current cluster to dev-cht-eks? kubectl config set-context arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks

C) I'm using:

Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.15", GitCommit:"da6089da4974a0a180c226c9353e1921fa3c248a", GitTreeState:"clean", BuildDate:"2023-10-18T13:40:02Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7

It can be worth trying that version (or the latest 1.25) if the issue persists.

kennsippell commented 6 months ago

Thanks for changing the configmap.

$ kubectl version
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

Sadly looks same :(

$ kubectl config set-context arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
Context "arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks" modified.
$ kubectl version
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

I'll try C now and report back - but for the record I was also seeing this on 1.29 prior to the downgrade I did.

nydr commented 6 months ago

Ok, could you verify your ~/.kube/config args? Mine looks like this:

- name: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - prod-cht-eks
      - --output
      - json
      command: aws
      env:
      - name: AWS_PROFILE
        value: your_aws_profilename

Also check that there are no AWS environment variables overriding your configuration (env | grep AWS_).

nydr commented 6 months ago

Another thing to check would be the awscli version.

Can you run:

aws --region eu-west-2 eks get-token --cluster-name dev-cht-eks

For me it returns

{
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1beta1",
    "spec": {},
    "status": {
        "expirationTimestamp": "2024-02-12T10:48:50Z",
        "token": "k8s-aws-v1.xyz..."
    }
}

The exact version of aws-cli shouldn't matter unless it's a really old one; mine is:

❯ aws --version
aws-cli/2.15.17 Python/3.11.7 Darwin/23.2.0 source/arm64 prompt/off

kennsippell commented 6 months ago

$ aws --version
aws-cli/1.22.34 Python/3.10.12 Linux/6.2.0-39-generic botocore/1.23.34

Noticing a different apiVersion here:

$ aws --region eu-west-2 eks get-token --cluster-name dev-cht-eks | jq
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2024-02-12T14:18:24Z",
    "token": "k8s-aws-..."
  }
}

Again, everything is looking the same except the API version:

$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: x
    server: https://23B4021D5F26E3760D05A01DA520DBA3.gr7.eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
- cluster:
    certificate-authority-data: x
    server: https://5B761F26D71E10865BDB7E7344BD669E.gr7.eu-west-2.eks.amazonaws.com
  name: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
    user: arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
  name: arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
- context:
    cluster: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
    user: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
  name: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
current-context: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-west-2:720541322708:cluster/dev-cht-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - dev-cht-eks
      command: aws
      env:
      - name: AWS_PROFILE
        value: kennsippel
- name: arn:aws:eks:eu-west-2:720541322708:cluster/prod-cht-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-2
      - eks
      - get-token
      - --cluster-name
      - prod-cht-eks
      command: aws
      env:
      - name: AWS_PROFILE
        value: kennsippel

Nothing here:

$ env | grep AWS

nydr commented 6 months ago

Could you try upgrading aws cli to v2 and re-running steps 3-4?

https://aws.amazon.com/cli/ (note that versions above 1 are no longer distributed through pip, afaik)
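
(For Linux x86_64, the v2 install from that page is roughly the bundled installer below; if the old v1 came from pip, a pip uninstall awscli first helps make sure the new binary is the one found on your PATH:)

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version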

kennsippell commented 6 months ago

Wow. My mistake! I updated awscli, deleted ~/.kube/config, reran the steps, and I think I'm all good!

Release "users-chis-ke" has been upgraded. Happy Helming!
NAME: users-chis-ke
LAST DEPLOYED: Mon Feb 12 23:56:51 2024
NAMESPACE: users-chis-prod
STATUS: deployed
REVISION: 9

Thanks so much!!

kennsippell commented 6 months ago

@freddieptf Based on this I believe you are good to go. If this isn't the case, please reactivate the ticket.