Azure / aks-set-context

GitHub Action for setting context (retrieving a kubeconfig) before interacting with a Kubernetes cluster

Upgrading from v1 to v3 error #87

Closed · stuarthendren closed this 1 year ago

stuarthendren commented 2 years ago

I've been using v1 in my GitHub Actions workflows with no problems, but I started getting a warning from GitHub that Node 12 is deprecated and that I should update this action. So I tried to move to v3, but I cannot get it into a state where kubectl commands run successfully.

The v1 setup was:

      - uses: azure/aks-set-context@v1
        with:
          creds: "${{ secrets.CREDENTIALS }}"
          cluster-name: ${{ secrets.CLUSTER_NAME }}
          resource-group: ${{ secrets.RESOURCE_GROUP }}

The v3 setup (I've tried all combinations of admin and use-kubelogin), for example:

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: "${{ secrets.CREDENTIALS }}"

      - name: Azure kubelogin
        run: |
          curl -LO https://github.com/Azure/kubelogin/releases/download/v0.0.9/kubelogin-linux-amd64.zip
          sudo unzip -j kubelogin-linux-amd64.zip -d /usr/local/bin
          rm -f kubelogin-linux-amd64.zip
          kubelogin --version

      - uses: azure/aks-set-context@v3
        with:
          cluster-name: ${{ secrets.CLUSTER_NAME }}
          resource-group: ${{ secrets.RESOURCE_GROUP }}
          admin: "false"
          use-kubelogin: "true"

With these, there is no error during login or from aks-set-context, but I get this error:

Error from server (Forbidden): pods "my-pod-0" is forbidden: User "xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" cannot get resource "pods" in API group "" in the namespace "my-namespace": User does not have access to the resource in Azure. Update role assignment to allow access.

This user does have access, supported by the fact that it works in the v1 case.

I'm not sure whether the problem lies with this action, but if it doesn't, do you know why this could happen?

OliverMKing commented 2 years ago

Hi @stuarthendren! Thanks for the issue.

The v1 scenario can be emulated by setting the admin: "true" flag.

      - uses: azure/aks-set-context@v3
        with:
          cluster-name: ${{ secrets.CLUSTER_NAME }}
          resource-group: ${{ secrets.RESOURCE_GROUP }}
          admin: "true"

Does this work for you?

If not, try bumping the kubelogin version to a newer release:

      - name: Azure kubelogin
        run: |
          curl -LO https://github.com/Azure/kubelogin/releases/download/v0.0.20/kubelogin-linux-amd64.zip
          sudo unzip -j kubelogin-linux-amd64.zip -d /usr/local/bin
          rm -f kubelogin-linux-amd64.zip
          kubelogin --version
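
Alternatively, if you want to keep admin: "false" with use-kubelogin: "true", kubectl authenticates as the Azure AD identity you logged in with via azure/login (the GUID in your error message), so that identity needs rights on the cluster itself. Assuming your cluster uses Azure RBAC for Kubernetes authorization, a role assignment along these lines (the role and scope here are illustrative placeholders) should unblock read access:

      az role assignment create \
        --assignee <service-principal-client-id> \
        --role "Azure Kubernetes Service RBAC Reader" \
        --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
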
stuarthendren commented 2 years ago

Hi @OliverMKing - Adding admin fixed it.

I had tried that already, but because the script still failed I put it down to the context being incorrect. What I had missed is that admin mode changes the context name to <cluster-name>-admin; that caused later steps to fail, and I mistakenly concluded it still wasn't logged in correctly. I see the logic behind changing the name, I just didn't notice it the first time. Most of the time this would simply be set in the kubeconfig, kubectl would pick it up, and everything would work; it only failed here because one of my downstream tasks used the cluster name separately.
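
In case it helps anyone else: downstream steps can ask kubectl for the context name instead of assuming it matches the cluster name. A sketch (the step name and namespace are made up):

      - name: Run against current context
        run: |
          # With admin: "true" the context is named "<cluster-name>-admin",
          # so read it from the kubeconfig rather than reusing the secret.
          CONTEXT=$(kubectl config current-context)
          echo "Using context: $CONTEXT"
          kubectl --context "$CONTEXT" get pods -n my-namespace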

Maybe this change should be mentioned in the README? In any case, I'm happy for you to close this issue. Thanks for your help!

github-actions[bot] commented 2 years ago

This issue is idle because it has been open for 14 days with no activity.