kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

snapshot controller requires cert-manager #13659

Closed ashish1099 closed 1 year ago

ashish1099 commented 2 years ago

/kind bug

1. What kops version are you running? The command kops version will display this information.

1.22.4

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

aws

4. What commands did you run? What is the simplest way to reproduce this issue?

kops replace -v 10 -f cluster.yaml 

5. What happened after the commands executed?

Error: error replacing cluster: spec.snapshotController.enabled: Forbidden: Snapshot controller requires that cert manager is enabled

6. What did you expect to happen?

Expected it to work without any error when the snapshot controller is enabled.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

kind: Cluster
metadata:
  creationTimestamp: "2022-04-21T02:43:35Z"
  name: k8s.staging.example.com
spec:
  api:
    loadBalancer:
      class: Classic
      crossZoneLoadBalancing: true
      idleTimeoutSeconds: 4000
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudConfig:
    awsEBSCSIDriver:
      enabled: false
  snapshotController:
    enabled: true
  cloudLabels:
    Cost Center: team-site-reliability
    Maintaining Team: team-site-reliability
    Purpose: BW7 staging
  cloudProvider: aws
  configBase: s3://kops.example.com/k8s.staging.example.com
  containerRuntime: containerd
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    featureGates:
      CSIMigrationAWS: "true"
      EphemeralContainers: "true"
      InTreePluginAWSUnregister: "true"
  kubeControllerManager:
    featureGates:
      CSIMigrationAWS: "true"
      EphemeralContainers: "true"
      InTreePluginAWSUnregister: "true"
    logLevel: 3
  kubeProxy:
    metricsBindAddress: 0.0.0.0
    proxyMode: ipvs
  kubeScheduler:
    featureGates:
      CSIMigrationAWS: "true"
      EphemeralContainers: "true"
      InTreePluginAWSUnregister: "true"
  kubelet:
    anonymousAuth: false
    featureGates:
      CSIMigrationAWS: "true"
      EphemeralContainers: "true"
      InTreePluginAWSUnregister: "true"
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.22.8
  masterInternalName: api.internal.k8s.staging.example.com
  masterPublicName: api.k8s.staging.example.com

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

kops replace -v 10 -f cluster.yaml 
I0517 12:03:11.543112   32018 factory.go:68] state store s3://kops.example.com
I0517 12:03:11.551509   32018 aws_cloud.go:1760] Querying EC2 for all valid zones in region "eu-east-10"
I0517 12:03:11.552251   32018 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
I0517 12:03:12.288618   32018 status.go:57] Querying AWS for etcd volumes
I0517 12:03:12.288685   32018 status.go:68] Listing EC2 Volumes
I0517 12:03:12.289139   32018 request_logger.go:45] AWS request: ec2/DescribeVolumes
I0517 12:03:12.537859   32018 status.go:40] Cluster status (from cloud): {"etcdClusters":[{"name":"events","etcdMembers":[{"name":"a","volumeId":"vol-xx"},{"name":"c","volumeId":"vol-xx"},{"name":"b","volumeId":"vol-xx"}]},{"name":"main","etcdMembers":[{"name":"a","volumeId":"vol-xx"},{"name":"c","volumeId":"vol-xx"},{"name":"b","volumeId":"vol-xx"}]}]}
I0517 12:03:12.538827   32018 s3context.go:334] unable to read /sys/devices/virtual/dmi/id/product_uuid, assuming not running on EC2: open /sys/devices/virtual/dmi/id/product_uuid: permission denied
I0517 12:03:13.996452   32018 s3context.go:166] unable to get region from metadata:unable to get region from metadata: EC2MetadataRequestError: failed to get EC2 instance identity document
caused by: RequestError: send request failed
caused by: Get "http://169.254.169.254/latest/dynamic/instance-identity/document": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0517 12:03:13.996517   32018 s3context.go:176] defaulting region to "eu-east-10"
I0517 12:03:14.959086   32018 s3context.go:216] found bucket in region "eu-west-10"
I0517 12:03:14.963263   32018 s3fs.go:327] Reading file "s3://kops.example.com/k8s.staging.example.com/config"
I0517 12:03:15.675185   32018 s3fs.go:327] Reading file "s3://kops.example.com/k8s.staging.example.com/config"
Error: error replacing cluster: spec.snapshotController.enabled: Forbidden: Snapshot controller requires that cert manager is enabled

9. Anything else do we need to know?

I'm not sure why we need cert-manager to enable the snapshot controller?

olemarkus commented 2 years ago

Snapshot-controller has a webhook that requires a valid certificate.

/remove-kind bug
/kind support
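For reference, the validation error goes away once the managed cert-manager addon is enabled in the cluster spec alongside the snapshot controller. A minimal sketch of the relevant fragment (field names per the kops Cluster spec; merge into your existing manifest and apply with kops replace / kops update cluster as usual):

```yaml
# Fragment of a kops Cluster manifest (not a complete spec).
# spec.certManager.enabled must be true for spec.snapshotController.enabled
# to pass validation, because the snapshot controller's validating webhook
# needs a certificate that cert-manager provisions.
spec:
  certManager:
    enabled: true
  snapshotController:
    enabled: true
```

Note that if cert-manager is already installed in the cluster by other means, enabling the kops-managed addon can conflict with it, so check for an existing installation first.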

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage][1]

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage][1]

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage][1]

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kops/issues/13659#issuecomment-1279693632):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.