kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

local assets fileRepository not working #15104

Closed. benedikt-bartscher closed this issue 1 year ago.

benedikt-bartscher commented 1 year ago

/kind bug

1. What kops version are you running? The command kops version will display this information. Client version: 1.25.3 (git-v1.25.3)

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag. 1.25.6

3. What cloud provider are you using? aws

4. What commands did you run? What is the simplest way to reproduce this issue? Add this to your cluster spec:

assets:
  fileRepository: https://BUCKETNAME.s3.REGION.amazonaws.com

where BUCKETNAME is an S3 bucket that kops has access to and REGION is the AWS region of the bucket. Next, run kops get assets --copy, which successfully copies all file assets to S3. Then run kops update cluster, which fails with this error: Error: you might have not staged your files correctly, please execute 'kops get assets --copy'. The full sequence is sketched below.
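For reference, a minimal sketch of the reproduction as a shell session; CLUSTER_NAME is a placeholder (not a value from this issue), and the assets stanza is the one shown above:

    # Add assets.fileRepository to the cluster spec (see snippet above).
    kops edit cluster --name CLUSTER_NAME

    # Stage all file assets into the fileRepository bucket; this step succeeds.
    kops get assets --name CLUSTER_NAME --copy

    # Expected: a normal dry-run diff of pending changes.
    # Actual:   Error: you might have not staged your files correctly, please execute 'kops get assets --copy'
    kops update cluster --name CLUSTER_NAME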

5. What happened after the commands executed? Error: you might have not staged your files correctly, please execute 'kops get assets --copy'

6. What did you expect to happen? The cluster gets updated.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: **********
  generation: **********
  name: **********
spec:
  api:
    dns: {}
  assets:
    fileRepository: https://**********.s3.**********.amazonaws.com
  authorization:
    rbac: {}
  awsLoadBalancerController:
    enabled: true
  certManager:
    enabled: true
    managed: false
  channel: stable
  cloudProvider: aws
  clusterAutoscaler:
    awsUseStaticInstanceList: false
    balanceSimilarNodeGroups: true
    cpuRequest: 100m
    enabled: true
    expander: least-waste
    image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.25.0
    memoryRequest: 300Mi
    newPodScaleUpDelay: 0s
    scaleDownDelayAfterAdd: 15m0s
    scaleDownUnneededTime: 15m0s
    scaleDownUnreadyTime: 15m0s
    scaleDownUtilizationThreshold: "0.6"
    skipNodesWithLocalStorage: false
    skipNodesWithSystemPods: false
  configBase: s3://**********
  containerd:
    registryMirrors:
      docker.io:
      - https://**********
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-**********
      name: a
    - encryptedVolume: true
      instanceGroup: master-**********
      name: b
    - encryptedVolume: true
      instanceGroup: master-**********
      name: c
    memoryRequest: 200Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-**********
      name: a
    - encryptedVolume: true
      instanceGroup: master-**********
      name: b
    - encryptedVolume: true
      instanceGroup: master-**********
      name: c
    memoryRequest: 200Mi
    name: events
  fileAssets:
  - content: |
      search **********
      nameserver **********
      options ndots:3
    name: resolv-conf-**********
    path: /etc/resolv.conf.**********
    roles:
    - Master
    - Node
    - Bastion
  iam:
    allowContainerRegistry: true
    legacy: false
    serviceAccountExternalPermissions:
    - aws:
        policyARNs:
        - arn:aws:iam::aws:policy**********
      name: pod-identity-webhook-test
      namespace: default
    - aws:
        policyARNs:
        - arn:aws:iam::**********
      name: **********
      namespace: default
    - aws:
        policyARNs:
        - arn:aws:iam::**********
      name: efs-csi-controller-sa
      namespace: kube-system
    - aws:
        policyARNs:
        - arn:aws:iam::**********
      name: efs-csi-node-sa
      namespace: kube-system
  kubeDNS:
    nodeLocalDNS:
      cpuRequest: 30m
      enabled: true
      memoryRequest: 25Mi
    provider: CoreDNS
  kubeProxy:
    metricsBindAddress: 0.0.0.0
  kubelet:
    anonymousAuth: false
    maxPods: 100
  kubernetesApiAccess:
  - **********
  kubernetesVersion: 1.25.6
  masterInternalName: api.**********
  masterPublicName: api.**********
  metricsServer:
    enabled: false
  networkCIDR: **********
  networking:
    amazonvpc:
      env:
      - name: ENABLE_PREFIX_DELEGATION
        value: "true"
      - name: WARM_PREFIX_TARGET
        value: "1"
  nodeProblemDetector:
    cpuRequest: 10m
    enabled: true
    memoryRequest: 32Mi
  nodeTerminationHandler:
    cpuRequest: 200m
    enableRebalanceMonitoring: false
    enableSQSTerminationDraining: true
    enableSpotInterruptionDraining: true
    enabled: true
    managedASGTag: aws-node-termination-handler/managed
    prometheusEnable: true
  nonMasqueradeCIDR: **********
  podIdentityWebhook:
    enabled: true
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://**********
    enableAWSOIDCProvider: true
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - cidr: **********
    name: **********
    type: Public
    zone: **********
  - cidr: **********
    name: **********
    type: Public
    zone: **********
  - cidr: **********
    name: **********
    type: Public
    zone: **********
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
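For reference, a sketch of how such a verbose run would be invoked (CLUSTER_NAME is a placeholder; no log output from this run is reproduced here):

    # Re-run the failing command with maximum verbosity to capture detailed logs.
    kops update cluster --name CLUSTER_NAME -v 10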

9. Anything else we need to know? No, just thanks for all your great work :)

hakman commented 1 year ago

Could you add a short comment about how you solved the problem? Thanks!