mvineza opened 6 months ago
I checked this again and noticed that the "rancherv2" secret is supposed to live in the fleet-default namespace.
As a workaround, I created a secret in that namespace on the upstream (local) cluster:
```yaml
apiVersion: v1
data:
  s3credentialConfig-accessKey: dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdA==
  s3credentialConfig-defaultSkipSSLVerify: ZmFsc2U=
  s3credentialConfig-secretKey: dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdA==
kind: Secret
metadata:
  name: rancherv2
  namespace: fleet-default
type: Opaque
```
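If the workaround should live in the same codebase instead of being applied by hand, the secret could also be managed from Terraform. This is only a sketch: it assumes the hashicorp/kubernetes provider is configured against the upstream (local) cluster, and note that the provider base64-encodes `data` values itself, so they are given in plain text here:

```hcl
# Sketch only: assumes the hashicorp/kubernetes provider is already
# configured to point at the upstream (local) cluster.
resource "kubernetes_secret" "rancherv2" {
  metadata {
    name      = "rancherv2"
    namespace = "fleet-default"
  }

  # Plain-text values; the provider base64-encodes them for the API.
  data = {
    "s3credentialConfig-accessKey"            = var.etcd_snapshots_access_key
    "s3credentialConfig-secretKey"            = var.etcd_snapshots_secret_key
    "s3credentialConfig-defaultSkipSSLVerify" = "false"
  }

  type = "Opaque"
}
```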
Not sure if I'm missing something in my Terraform code or if this secret has to be handled and created separately.
Rancher Server Setup
Information about the Cluster
We are seeing the following issue in the Rancher UI after pointing the etcd snapshot location to our storage appliance.
To Reproduce
Add the following config to the Terraform module.
```hcl
resource "rancher2_cloud_credential" "this" {
  name = "rancherv2"

  s3_credential_config {
    access_key = var.etcd_snapshots_access_key
    secret_key = var.etcd_snapshots_secret_key
  }
}
```
...
variables.tf
...
```hcl
variable "etcd_snapshots_bucket" {
  description = "S3 bucket name"
  type        = string
  default     = "k8s-etcd-snapshots"
}

variable "etcd_snapshots_access_key" {
  description = "S3 access key for writing snapshots"
  type        = string
}

variable "etcd_snapshots_secret_key" {
  description = "S3 secret key for writing snapshots"
  type        = string
}

variable "etcd_snapshots_endpoint" {
  description = "S3 endpoint"
  type        = string
  default     = "https://storage.local"
}

variable "etcd_snapshots_retention" {
  description = "Snapshot retention"
  type        = number
  default     = 5
}

variable "etcd_snapshots_schedule" {
  description = "Snapshot schedule in cron syntax"
  type        = string
  default     = "0 */5 * * *"
}
```
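For context, these variables would typically feed the cluster's etcd snapshot settings alongside the cloud credential. Below is a hedged sketch of that wiring; the `rancher2_cluster_v2` resource is assumed (the cluster name and Kubernetes version are placeholders), and the `rke_config` / `etcd` / `s3_config` attribute names should be verified against the rancher2 provider version in use:

```hcl
# Sketch only: a rancher2_cluster_v2 resource is assumed; attribute names
# should be checked against the rancher2 provider version in use.
resource "rancher2_cluster_v2" "this" {
  name               = "downstream"     # hypothetical cluster name
  kubernetes_version = "v1.26.8+rke2r1" # hypothetical version

  rke_config {
    etcd {
      snapshot_schedule_cron = var.etcd_snapshots_schedule
      snapshot_retention     = var.etcd_snapshots_retention

      s3_config {
        bucket                = var.etcd_snapshots_bucket
        endpoint              = var.etcd_snapshots_endpoint
        cloud_credential_name = rancher2_cloud_credential.this.id
      }
    }
  }
}
```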