42atomys / stud42

Official Stud42 repository since the major version 3 update (https://s42.app)
MIT License

fix: all cli command try to connect to cache and database #438

Closed. 42atomys closed this pull request 1 year ago.

42atomys commented 1 year ago

Describe the pull request

This pull request addresses an issue where every command-line interface (CLI) command attempted to connect to the cache and the database, whether or not those connections were needed. The fix updates the CLI so that each command opens a cache or database connection only when it actually requires one.

Checklist

Breaking changes? No.

github-actions[bot] commented 1 year ago

Terraform data for sandbox stack

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Show Validation

```
Success! The configuration is valid.
```

Terraform Plan 📖 success

Show Plan

```
kubernetes_config_map.stud42_config: Refreshing state... [id=sandbox/stud42-config]
module.istio.kubectl_manifest.virtual_services["dev-s42-sandbox"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/sandbox/virtualservices/dev-s42-sandbox]
module.jwtks_service.kubernetes_service.app[0]: Refreshing state... [id=sandbox/jwtks-service]
module.jwtks_service.kubernetes_deployment.app[0]: Refreshing state... [id=sandbox/jwtks-service]
module.jwtks_service.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=sandbox/jwtks-service]
module.jwtks_service.kubernetes_manifest.certificate["grpc-internal"]: Refreshing state...

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
```
github-actions[bot] commented 1 year ago

Terraform data for pre-cluster stack

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Show Validation

```
Success! The configuration is valid.
```

Terraform Plan 📖 success

Show Plan

```
helm_release.sealed_secret: Refreshing state... [id=sealed-secret]
helm_release.reflector: Refreshing state... [id=reflector]
helm_release.istio_base: Refreshing state... [id=istio-base]
helm_release.rabbitmq_operator: Refreshing state... [id=primary]
kubernetes_namespace.namespace["istio-system"]: Refreshing state... [id=istio-system]
kubernetes_namespace.namespace["sandbox"]: Refreshing state... [id=sandbox]
kubernetes_namespace.namespace["previews"]: Refreshing state... [id=previews]
kubernetes_namespace.namespace["permission-manager"]: Refreshing state... [id=permission-manager]
kubernetes_namespace.namespace["cert-manager"]: Refreshing state... [id=cert-manager]
kubernetes_namespace.namespace["staging"]: Refreshing state... [id=staging]
kubernetes_namespace.namespace["production"]: Refreshing state... [id=production]
kubernetes_namespace.namespace["monitoring"]: Refreshing state... [id=monitoring]
helm_release.istiod: Refreshing state... [id=istiod]
helm_release.gateway: Refreshing state... [id=istio-ingressgateway]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
```
github-actions[bot] commented 1 year ago

Terraform data for cluster stack

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Show Validation

```
Success! The configuration is valid.
```

Terraform Plan 📖 success

Show Plan

```
module.istio.kubectl_manifest.gateways["app-s42-dashboards"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/monitoring/gateways/app-s42-dashboards]
kubernetes_cluster_role.prometheus: Refreshing state... [id=prometheus]
module.istio.kubectl_manifest.gateways["dev-s42-sandbox"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/sandbox/gateways/dev-s42-sandbox]
kubernetes_cluster_role.promtail: Refreshing state... [id=promtail]
module.istio.kubectl_manifest.gateways["dev-s42-previews"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/previews/gateways/dev-s42-previews]
module.cert_manager.null_resource.cert_manager_ovh_source: Refreshing state... [id=6901452211892208863]
module.istio.kubectl_manifest.gateways["app-s42-next"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/staging/gateways/app-s42-next]
module.istio.kubectl_manifest.gateways["app-s42"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/production/gateways/app-s42]
kubernetes_service_account.promtail: Refreshing state... [id=monitoring/promtail]
kubernetes_service_account.tempo: Refreshing state... [id=monitoring/tempo]
kubernetes_service_account.prometheus: Refreshing state... [id=monitoring/prometheus]
kubernetes_service_account.loki: Refreshing state... [id=monitoring/loki]
module.cert_manager.helm_release.cert_manager: Refreshing state... [id=cert-manager]
module.grafana.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=monitoring/grafana-data]
kubernetes_role.loki: Refreshing state... [id=monitoring/loki]
module.grafana.kubernetes_service.app[0]: Refreshing state... [id=monitoring/grafana]
module.monitoring_routing.kubectl_manifest.virtual_services["app-s42-dashboards"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/monitoring/virtualservices/app-s42-dashboards]
module.grafana.kubernetes_deployment.app[0]: Refreshing state... [id=monitoring/grafana]
module.promtail.kubernetes_config_map.app["config"]: Refreshing state... [id=monitoring/promtail-config]
module.promtail.kubernetes_service.app[0]: Refreshing state... [id=monitoring/promtail]
module.prometheus.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=monitoring/prometheus-data]
module.promtail.kubernetes_daemonset.app[0]: Refreshing state... [id=monitoring/promtail]
module.prometheus.kubernetes_config_map.app["config"]: Refreshing state... [id=monitoring/prometheus-config]
module.prometheus.kubernetes_service.app[0]: Refreshing state... [id=monitoring/prometheus]
kubernetes_cluster_role_binding.promtail: Refreshing state... [id=promtail]
module.loki.kubernetes_config_map.app["config"]: Refreshing state... [id=monitoring/loki-config]
module.loki.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=monitoring/loki-data]
module.loki.kubernetes_service.app[0]: Refreshing state... [id=monitoring/loki]
kubernetes_cluster_role_binding.prometheus: Refreshing state... [id=prometheus]
module.tempo.kubernetes_service.app[0]: Refreshing state... [id=monitoring/tempo]
module.tempo.kubernetes_config_map.app["config"]: Refreshing state... [id=monitoring/tempo-config]
module.tempo.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=monitoring/tempo-data]
kubernetes_role_binding.loki: Refreshing state... [id=monitoring/loki]
module.prometheus.kubernetes_stateful_set.app[0]: Refreshing state... [id=monitoring/prometheus]
module.cert_manager.kubectl_manifest.certificates["dev-s42-sandbox"]: Refreshing state... [id=/apis/cert-manager.io/v1/namespaces/istio-system/certificates/dev-s42-sandbox]
module.cert_manager.kubernetes_role.cert_manager_webhook_ovh_secret_reader: Refreshing state... [id=cert-manager/cert-manager-webhook-ovh:secret-reader]
module.cert_manager.kubectl_manifest.certificates["app-s42"]: Refreshing state... [id=/apis/cert-manager.io/v1/namespaces/istio-system/certificates/app-s42]
module.cert_manager.kubectl_manifest.certificates["app-s42-dashboards"]: Refreshing state... [id=/apis/cert-manager.io/v1/namespaces/istio-system/certificates/app-s42-dashboards]
module.cert_manager.kubectl_manifest.certificates["app-s42-next"]: Refreshing state... [id=/apis/cert-manager.io/v1/namespaces/istio-system/certificates/app-s42-next]
module.cert_manager.kubectl_manifest.certificates["dev-s42-previews"]: Refreshing state... [id=/apis/cert-manager.io/v1/namespaces/istio-system/certificates/dev-s42-previews]
module.loki.kubernetes_stateful_set.app[0]: Refreshing state... [id=monitoring/loki]
module.tempo.kubernetes_stateful_set.app[0]: Refreshing state... [id=monitoring/tempo]
module.cert_manager.kubernetes_role_binding.cert_manager_webhook_ovh_secret_reader: Refreshing state... [id=cert-manager/cert-manager-webhook-ovh:secret-reader]
module.cert_manager.helm_release.cert_manager_ovh: Refreshing state... [id=cert-manager-webhook-ovh]
module.cert_manager.kubectl_manifest.self_signed_issuers["selfsigned-issuer"]: Refreshing state... [id=/apis/cert-manager.io/v1/clusterissuers/selfsigned-issuer]
module.cert_manager.kubectl_manifest.issuers["ovh-issuer"]: Refreshing state... [id=/apis/cert-manager.io/v1/clusterissuers/ovh-issuer]
module.cert_manager.kubectl_manifest.issuers["ovh-staging-issuer"]: Refreshing state... [id=/apis/cert-manager.io/v1/clusterissuers/ovh-staging-issuer]
module.secrets.kubernetes_manifest.sealed_secret["ovh-credentials"]: Refreshing state...
module.secrets.kubernetes_manifest.sealed_secret["ghcr-creds"]: Refreshing state...

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Warning: "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above

  with kubernetes_service_account.prometheus,
  on monitoring.tf line 73, in resource "kubernetes_service_account" "prometheus":
  73: resource "kubernetes_service_account" "prometheus" {

Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, "default_secret_name" will be empty

(and 3 more similar warnings elsewhere)

Warning: Attribute not found in schema

  with module.secrets.kubernetes_manifest.sealed_secret["ghcr-creds"],
  on ../../modules/sealed-secrets/sealed-secrets.tf line 9, in resource "kubernetes_manifest" "sealed_secret":
   9: resource "kubernetes_manifest" "sealed_secret" {

Unable to find schema type for attribute: metadata.clusterName

(and one more similar warning elsewhere)
```
github-actions[bot] commented 1 year ago

Terraform data for apps stack

Terraform Initialization ⚙️ success

Terraform Validation 🤖 success

Show Validation

```
Success! The configuration is valid.
```

Terraform Plan 📖 success

Show Plan

```
module.s42.random_password.next_auth_secret: Refreshing state... [id=none]
module.s42.random_password.postgres: Refreshing state... [id=none]
module.s42.kubernetes_config_map.stud42_config: Refreshing state... [id=production/stud42-config]
module.s42.random_password.meilisearch_token: Refreshing state... [id=none]
module.s42.module.jwtks_service.kubernetes_service.app[0]: Refreshing state... [id=production/jwtks-service]
module.webhooked.module.webhooked.kubernetes_service.app[0]: Refreshing state... [id=production/webhooked]
module.webhooked.module.webhooked.kubernetes_config_map.app["config"]: Refreshing state... [id=production/webhooked-config]
module.s42.module.jwtks_service.kubernetes_manifest.certificate["grpc-internal"]: Refreshing state...
module.webhooked.module.secrets.kubernetes_manifest.sealed_secret["s42-webhooked-secrets"]: Refreshing state...
module.s42.module.service-token.kubernetes_manifest.sealed_secret["s42-service-token"]: Refreshing state...
module.s42.module.service-token.kubernetes_manifest.sealed_secret["discord-token"]: Refreshing state...
module.s42.module.service-token.kubernetes_manifest.sealed_secret["sentry-dsns"]: Refreshing state...
module.s42.module.service-token.kubernetes_manifest.sealed_secret["jwtks-service-certs-jwk"]: Refreshing state...
module.s42.kubernetes_manifest.rabbitmq_queue_webhooks_dlq: Refreshing state...
module.s42.kubernetes_manifest.rabbitmq_binding_webhooks_dlq: Refreshing state...
module.s42.kubernetes_manifest.rabbitmq_policy_webhooks_dlq: Refreshing state...
module.s42.kubernetes_manifest.rabbitmq_queue_webhooks_processing: Refreshing state...
module.s42.module.service-token.kubernetes_manifest.sealed_secret["github-token"]: Refreshing state...
module.s42.module.interface.kubernetes_service.app[0]: Refreshing state... [id=production/interface]
module.s42.kubernetes_manifest.rabbitmq: Refreshing state...
module.s42.module.webhooks_processor.kubernetes_deployment.app[0]: Refreshing state... [id=production/webhooks-processor]
module.s42.module.jwtks_service.kubernetes_deployment.app[0]: Refreshing state... [id=production/jwtks-service]
module.s42.module.crawler_campus.kubernetes_cron_job.app[0]: Refreshing state... [id=production/crawler-campus]
module.s42.module.service-token.kubernetes_manifest.sealed_secret["oauth2-providers"]: Refreshing state...
module.s42.module.crawler_locations.kubernetes_cron_job.app[0]: Refreshing state... [id=production/crawler-locations]
module.s42.module.interface.kubernetes_deployment.app[0]: Refreshing state... [id=production/interface]
module.s42.module.meilisearch.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=production/meilisearch-data]
module.s42.module.meilisearch.kubernetes_service.app[0]: Refreshing state... [id=production/meilisearch]
module.s42.module.api.kubernetes_deployment.app[0]: Refreshing state... [id=production/api]
module.s42.module.api.kubernetes_service.app[0]: Refreshing state... [id=production/api]
module.s42.kubernetes_secret.next_auth_secret: Refreshing state... [id=production/next-auth-secret]
module.s42.module.postgres.kubernetes_config_map.app["config"]: Refreshing state... [id=production/postgres-config]
module.s42.module.postgres.kubernetes_service.app[0]: Refreshing state... [id=production/postgres]
module.s42.module.postgres.kubernetes_persistent_volume_claim.app["data"]: Refreshing state... [id=production/postgres-data]
module.s42.module.meilisearch_clean_tasks.kubernetes_cron_job.app[0]: Refreshing state... [id=production/meilisearch-clean-tasks]
module.s42.module.jwtks_service.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=production/jwtks-service]
module.s42.module.istio.kubectl_manifest.virtual_services["app-s42"]: Refreshing state... [id=/apis/networking.istio.io/v1alpha3/namespaces/production/virtualservices/app-s42]
module.s42.module.webhooks_processor.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=production/webhooks-processor]
module.s42.module.postgres.kubernetes_secret.app["credentials"]: Refreshing state... [id=production/postgres-credentials]
module.s42.module.meilisearch.kubernetes_secret.app["token"]: Refreshing state... [id=production/meilisearch-token]
module.s42.module.interface.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=production/interface]
module.s42.module.meilisearch.kubernetes_stateful_set.app[0]: Refreshing state... [id=production/meilisearch]
module.s42.module.api.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=production/api]
module.s42.module.postgres.kubernetes_stateful_set.app[0]: Refreshing state... [id=production/postgres]
module.webhooked.module.webhooked.kubernetes_deployment.app[0]: Refreshing state... [id=production/webhooked]
module.webhooked.module.webhooked.kubernetes_horizontal_pod_autoscaler_v2.app[0]: Refreshing state... [id=production/webhooked]
module.s42.kubernetes_pod_disruption_budget.rabbitmq: Refreshing state... [id=production/rabbitmq]
module.s42.kubernetes_manifest.rabbitmq_exchange_webhooks: Refreshing state...

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

# module.s42.kubernetes_config_map.stud42_config will be updated in-place
~ resource "kubernetes_config_map" "stud42_config" { ~ data = { ~ "stud42.yaml" = <<-EOT # API relatives configurations - api: {} + api: + s3: + users: + bucket: s42-users + region: gra + endpoint: https://s3.gra.io.cloud.ovh.net + # Interface relatives configurations interface: {} # jwtks service relatives configurations jwtks: # Endpoint of the public JWKSet can be used to validate # a JWT Token endpoints: sets: https://s42.app/.well-known/jwks sign: jwtks-service.production.svc.cluster.local:5000 # Certs used to sign and validate the JWT # Also called : The JWK jwk: certPrivateKeyFile: /etc/certs/jwk/private.key certPublicKeyFile: /etc/certs/jwk/public.pem # Certs used to secure the GRPC Endpoint with SSL/TLS grpc: insecure: false certRootCaFile: /etc/certs/grpc/ca.crt certPrivateKeyFile: /etc/certs/grpc/tls.key certPublicKeyFile: /etc/certs/grpc/tls.crt discord: guildID: "248936708379246593" EOT } id = "production/stud42-config" # (2 unchanged attributes hidden) # (1 unchanged block hidden) }

# module.s42.kubernetes_manifest.rabbitmq_exchange_webhooks will be updated in-place
~ resource "kubernetes_manifest" "rabbitmq_exchange_webhooks" { ~ object = { ~ spec = { ~ autoDelete = null -> false name = "webhooks" # (5 unchanged elements hidden) } # (3 unchanged elements hidden) } # (1 unchanged attribute hidden) }

# module.s42.kubernetes_manifest.rabbitmq_queue_webhooks_dlq will be updated in-place
~ resource "kubernetes_manifest" "rabbitmq_queue_webhooks_dlq" { ~ object = { ~ spec = { ~ autoDelete = null -> false name = "webhooks.dlq" # (5 unchanged elements hidden) } # (3 unchanged elements hidden) } # (1 unchanged attribute hidden) }

# module.s42.kubernetes_manifest.rabbitmq_queue_webhooks_processing will be updated in-place
~ resource "kubernetes_manifest" "rabbitmq_queue_webhooks_processing" { ~ object = { ~ spec = { ~ autoDelete = null -> false name = "webhooks.processing" # (5 unchanged elements hidden) } # (3 unchanged elements hidden) } # (1 unchanged attribute hidden) }

# module.s42.kubernetes_pod_disruption_budget.rabbitmq will be created
+ resource "kubernetes_pod_disruption_budget" "rabbitmq" { + id = (known after apply) + metadata { + generation = (known after apply) + name = "rabbitmq" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + max_unavailable = "0" + selector { + match_labels = { + "app.kubernetes.io/name" = "rabbitmq" } } } }

# module.s42.random_password.dragonfly will be created
+ resource "random_password" "dragonfly" { + bcrypt_hash = (sensitive value) + id = (known after apply) + length = 64 + lower = true + min_lower = 0 + min_numeric = 0 + min_special = 0 + min_upper = 0 + number = true + numeric = true + result = (sensitive value) + special = true + upper = true }

# module.s42.module.api.kubernetes_deployment.app[0] will be updated in-place
~ resource "kubernetes_deployment" "app" { id = "production/api" # (1 unchanged attribute hidden) ~ metadata { ~ labels = { ~ "app.kubernetes.io/version" = "v0.23" -> "latest" ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } name = "api" # (5 unchanged attributes hidden) } ~ spec { ~ replicas = "2" -> "1" # (4 unchanged attributes hidden) ~ template { ~ metadata { ~ labels = { ~ "version" = "v0.23" -> "latest" # (4 unchanged elements hidden) } # (2 unchanged attributes hidden) } ~ spec { # (11 unchanged attributes hidden) ~ container { ~ image = "ghcr.io/42atomys/stud42:v0.23" -> "ghcr.io/42atomys/stud42:latest" name = "api" # (8 unchanged attributes hidden) ~ env { ~ name = "DATABASE_PASSWORD" -> "AWS_ACCESS_KEY_ID" ~ value_from { ~ secret_key_ref { ~ key = "POSTGRES_PASSWORD_ENCODED" -> "AWS_ACCESS_KEY_ID" ~ name = "postgres-credentials" -> "ovh-s3-credentials" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "DISCORD_TOKEN" -> "AWS_SECRET_ACCESS_KEY" ~ value_from { ~ secret_key_ref { ~ key = "DISCORD_TOKEN" -> "AWS_SECRET_ACCESS_KEY" ~ name = "discord-token" -> "ovh-s3-credentials" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "GITHUB_TOKEN" -> "DATABASE_PASSWORD" ~ value_from { ~ secret_key_ref { ~ key = "GITHUB_TOKEN" -> "POSTGRES_PASSWORD_ENCODED" ~ name = "github-token" -> "postgres-credentials" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "S42_SERVICE_TOKEN" -> "DFLY_PASSWORD" ~ value_from { ~ secret_key_ref { ~ key = "TOKEN" -> "DFLY_PASSWORD" ~ name = "s42-service-token" -> "dragonfly-credentials" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "SEARCHENGINE_MEILISEARCH_TOKEN" -> "DISCORD_TOKEN" ~ value_from { ~ secret_key_ref { ~ key = "MEILI_MASTER_KEY" -> "DISCORD_TOKEN" ~ name = "meilisearch-token" -> "discord-token" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "SENTRY_DSN" -> "GITHUB_TOKEN" ~ value_from { ~ secret_key_ref { ~ key = "API_DSN" -> "GITHUB_TOKEN" ~ name = "sentry-dsns" -> "github-token" # (1 unchanged attribute hidden) } } } ~ env { ~ name = "CORS_ORIGIN" -> "S42_SERVICE_TOKEN" - value = "https://s42.app" -> null + value_from { + secret_key_ref { + key = "TOKEN" + name = "s42-service-token" } } } ~ env { ~ name = "DATABASE_HOST" -> "SEARCHENGINE_MEILISEARCH_TOKEN" - value = "postgres.production.svc.cluster.local" -> null + value_from { + secret_key_ref { + key = "MEILI_MASTER_KEY" + name = "meilisearch-token" } } } ~ env { ~ name = "DATABASE_NAME" -> "SENTRY_DSN" - value = "s42" -> null + value_from { + secret_key_ref { + key = "API_DSN" + name = "sentry-dsns" } } } ~ env { ~ name = "DATABASE_URL" -> "CORS_ORIGIN" ~ value = "postgresql://postgres:$(DATABASE_PASSWORD)@$(DATABASE_HOST):5432/$(DATABASE_NAME)?sslmode=disable" -> "https://s42.app" } ~ env { ~ name = "GO_ENV" -> "DATABASE_HOST" ~ value = "production" -> "postgres.production.svc.cluster.local" } ~ env { ~ name = "SEARCHENGINE_MEILISEARCH_HOST" -> "DATABASE_NAME" ~ value = "http://meilisearch.production.svc.cluster.local:7700" -> "s42" } + env { + name = "DATABASE_URL" + value = "postgresql://postgres:$(DATABASE_PASSWORD)@$(DATABASE_HOST):5432/$(DATABASE_NAME)?sslmode=disable" } + env { + name = "GO_ENV" + value = "production" } + env { + name = "KEYVALUE_STORE_HOST" + value = "dragonfly.production.svc.cluster.local" } + env { + name = "KEYVALUE_STORE_PORT" + value = "6379" } + env { + name = "KEYVALUE_STORE_URL" + value = "redis://:$(DFLY_PASSWORD)@$(KEYVALUE_STORE_HOST):$(KEYVALUE_STORE_PORT)" } + env { + name = "SEARCHENGINE_MEILISEARCH_HOST" + value = "http://meilisearch.production.svc.cluster.local:7700" } # (4 unchanged blocks hidden) } # (3 unchanged blocks hidden) } } # (2 unchanged blocks hidden) } }

# module.s42.module.api.kubernetes_horizontal_pod_autoscaler_v2.app[0] will be updated in-place
~ resource "kubernetes_horizontal_pod_autoscaler_v2" "app" { id = "production/api" ~ metadata { ~ labels = { ~ "app.kubernetes.io/version" = "v0.23" -> "latest" ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } name = "api" # (5 unchanged attributes hidden) } # (1 unchanged block hidden) }

# module.s42.module.api.kubernetes_service.app[0] will be updated in-place
~ resource "kubernetes_service" "app" { id = "production/api" # (2 unchanged attributes hidden) ~ metadata { ~ labels = { ~ "app.kubernetes.io/version" = "v0.23" -> "latest" ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } name = "api" # (5 unchanged attributes hidden) } # (1 unchanged block hidden) }

# module.s42.module.crawler_campus.kubernetes_cron_job.app[0] will be created
+ resource "kubernetes_cron_job" "app" { + id = (known after apply) + metadata { + generation = (known after apply) + labels = { + "app" = "crawler-campus" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "crawler-campus" + "app.kubernetes.io/version" = "latest" + "kubernetes.io/name" = "crawler-campus" + "version" = "latest" } + name = "crawler-campus" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + concurrency_policy = "Forbid" + failed_jobs_history_limit = 3 + schedule = "10 03 * * mon" + starting_deadline_seconds = 0 + successful_jobs_history_limit = 1 + suspend = false + job_template { + metadata { + generation = (known after apply) + labels = { + "app" = "crawler-campus" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "crawler-campus" + "app.kubernetes.io/version" = "latest" + "kubernetes.io/name" = "crawler-campus" + "version" = "latest" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + active_deadline_seconds = 600 + backoff_limit = 0 + completion_mode = "NonIndexed" + completions = 1 + parallelism = 1 + ttl_seconds_after_finished = "300" + selector { + match_labels = (known after apply) + match_expressions { + key = (known after apply) + operator = (known after apply) + values = (known after apply) } } + template { + metadata { + annotations = { + "prometheus.io/path" = "/metrics" + "prometheus.io/port" = "8080" + "prometheus.io/scrape" = "false" } + generation = (known after apply) + labels = { + "app" = "crawler-campus" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "kubernetes.io/name" = "crawler-campus" + "sidecar.istio.io/inject" = "false" + "version" = "latest" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "nodepool" = "small" } + restart_policy = "Never" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 30 + container { + args = [ + "--config", + "/config/stud42.yaml", + "jobs", + "crawler", + "campus", ] + command = [ + "stud42cli", ] + image = "ghcr.io/42atomys/stud42:latest" + image_pull_policy = "IfNotPresent" + name = "crawler-campus" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + env { + name = "DATABASE_PASSWORD" + value_from { + secret_key_ref { + key = "POSTGRES_PASSWORD_ENCODED" + name = "postgres-credentials" } } } + env { + name = "FORTY_TWO_ID" + value_from { + secret_key_ref { + key = "FORTY_TWO_ID" + name = "oauth2-providers" } } } + env { + name = "FORTY_TWO_SECRET" + value_from { + secret_key_ref { + key = "FORTY_TWO_SECRET" + name = "oauth2-providers" } } } + env { + name = "SENTRY_DSN" + value_from { + secret_key_ref { + key = "API_DSN" + name = "sentry-dsns" } } } + env { + name = "DATABASE_HOST" + value = "postgres.production.svc.cluster.local" } + env { + name = "DATABASE_NAME" + value = "s42" } + env { + name = "DATABASE_URL" + value = "postgresql://postgres:$(DATABASE_PASSWORD)@$(DATABASE_HOST):5432/$(DATABASE_NAME)?sslmode=disable" } + env { + name = "DEBUG" + value = "true" } + env { + name = "GO_ENV" + value = "production" } + resources { + limits = { + "memory" = "128Mi" } + requests = { + "cpu" = "5m" + "memory" = "42Mi" } } + security_context { + allow_privilege_escalation = false + privileged = false + read_only_root_filesystem = false + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume_mount { + mount_path = "/config" + mount_propagation = "None" + name = "configuration" + read_only = true } } + image_pull_secrets { + name = "ghcr-creds" } + readiness_gate { + condition_type = (known after apply) } + security_context { + fs_group = "1000" + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume { + name = "configuration" + config_map { + default_mode = "0644" + name = "stud42-config" } } } } } } } }

# module.s42.module.crawler_locations.kubernetes_cron_job.app[0] will be created
+ resource "kubernetes_cron_job" "app" { + id = (known after apply) + metadata { + generation = (known after apply) + labels = { + "app" = "crawler-locations" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "crawler-locations" + "app.kubernetes.io/version" = "latest" + "kubernetes.io/name" = "crawler-locations" + "version" = "latest" } + name = "crawler-locations" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + concurrency_policy = "Forbid" + failed_jobs_history_limit = 3 + schedule = "0 * * * *" + starting_deadline_seconds = 0 + successful_jobs_history_limit = 1 + suspend = false + job_template { + metadata { + generation = (known after apply) + labels = { + "app" = "crawler-locations" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "crawler-locations" + "app.kubernetes.io/version" = "latest" + "kubernetes.io/name" = "crawler-locations" + "version" = "latest" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + active_deadline_seconds = 600 + backoff_limit = 0 + completion_mode = "NonIndexed" + completions = 1 + parallelism = 1 + ttl_seconds_after_finished = "300" + selector { + match_labels = (known after apply) + match_expressions { + key = (known after apply) + operator = (known after apply) + values = (known after apply) } } + template { + metadata { + annotations = { + "prometheus.io/path" = "/metrics" + "prometheus.io/port" = "8080" + "prometheus.io/scrape" = "false" } + generation = (known after apply) + labels = { + "app" = "crawler-locations" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "kubernetes.io/name" = "crawler-locations" + "sidecar.istio.io/inject" = "false" + "version" = "latest" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "nodepool" = "small" } + restart_policy = "Never" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 30 + container { + args = [ + "--config", + "/config/stud42.yaml", + "jobs", + "crawler", + "locations", ] + command = [ + "stud42cli", ] + image = "ghcr.io/42atomys/stud42:latest" + image_pull_policy = "IfNotPresent" + name = "crawler-locations" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + env { + name = "DATABASE_PASSWORD" + value_from { + secret_key_ref { + key = "POSTGRES_PASSWORD_ENCODED" + name = "postgres-credentials" } } } + env { + name = "FORTY_TWO_ID" + value_from { + secret_key_ref { + key = "FORTY_TWO_ID" + name = "oauth2-providers" } } } + env { + name = "FORTY_TWO_SECRET" + value_from { + secret_key_ref { + key = "FORTY_TWO_SECRET" + name = "oauth2-providers" } } } + env { + name = "SEARCHENGINE_MEILISEARCH_TOKEN" + value_from { + secret_key_ref { + key = "MEILI_MASTER_KEY" + name = "meilisearch-token" } } } + env { + name = "SENTRY_DSN" + value_from { + secret_key_ref { + key = "API_DSN" + name = "sentry-dsns" } } } + env { + name = "DATABASE_HOST" + value = "postgres.production.svc.cluster.local" } + env { + name = "DATABASE_NAME" + value = "s42" } + env { + name = "DATABASE_URL" + value = "postgresql://postgres:$(DATABASE_PASSWORD)@$(DATABASE_HOST):5432/$(DATABASE_NAME)?sslmode=disable" } + env { + name = "DEBUG" + value = "true" } + env { + name = "GO_ENV" + value = "production" } + env { + name = "SEARCHENGINE_MEILISEARCH_HOST" + value = "http://meilisearch.production.svc.cluster.local:7700" } + resources { + limits = { + "memory" = "128Mi" } + requests = { + "cpu" = "5m" + "memory" = "42Mi" } } + security_context { + allow_privilege_escalation = false + privileged = false + read_only_root_filesystem = false + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume_mount { + mount_path = "/config" + mount_propagation = "None" + name = "configuration" + read_only = true } } + image_pull_secrets { + name = "ghcr-creds" } + readiness_gate { + condition_type = (known after apply) } + security_context { + fs_group = "1000" + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume { + name = "configuration" + config_map { + default_mode = "0644" + name = "stud42-config" } } } } } } } }

# module.s42.module.dragonfly.kubernetes_persistent_volume_claim.app["data"] will be created
+ resource "kubernetes_persistent_volume_claim" "app" { + id = (known after apply) + wait_until_bound = true + metadata { + generation = (known after apply) + labels = { + "app" = "dragonfly" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "dragonfly" + "app.kubernetes.io/version" = "v1.3.0" + "kubernetes.io/name" = "dragonfly" + "version" = "v1.3.0" } + name = "dragonfly-data" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + access_modes = [ + "ReadWriteMany", ] + storage_class_name = "csi-cinder-high-speed" + volume_name = (known after apply) + resources { + requests = { + "storage" = "2Gi" } } } }

# module.s42.module.dragonfly.kubernetes_secret.app["credentials"] will be created
+ resource "kubernetes_secret" "app" { + data = (sensitive value) + id = (known after apply) + immutable = false + type = "Opaque" + wait_for_service_account_token = true + metadata { + generation = (known after apply) + labels = { + "app" = "dragonfly" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "dragonfly" + "app.kubernetes.io/version" = "v1.3.0" + "kubernetes.io/name" = "dragonfly" + "version" = "v1.3.0" } + name = "dragonfly-credentials" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } }

# module.s42.module.dragonfly.kubernetes_service.app[0] will be created
+ resource "kubernetes_service" "app" { + id = (known after apply) + status = (known after apply) + wait_for_load_balancer = true + metadata { + generation = (known after apply) + labels = { + "app" = "dragonfly" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "dragonfly" + "app.kubernetes.io/version" = "v1.3.0" + "kubernetes.io/name" = "dragonfly" + "version" = "v1.3.0" } + name = "dragonfly" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + allocate_load_balancer_node_ports = true + cluster_ip = (known after apply) + cluster_ips = (known after apply) + external_traffic_policy = (known after apply) + health_check_node_port = (known after apply) + internal_traffic_policy = (known after apply) + ip_families = (known after apply) + ip_family_policy = (known after apply) + publish_not_ready_addresses = false + selector = { + "kubernetes.io/name" = "dragonfly" } + session_affinity = "None" + type = "ClusterIP" + port { + name = "tcp-dragonfly" + node_port = (known after apply) + port = 6379 + protocol = "TCP" + target_port = "6379" } + session_affinity_config
```
{ + client_ip { + timeout_seconds = (known after apply) } } } } # module.s42.module.dragonfly.kubernetes_stateful_set.app[0] will be created + resource "kubernetes_stateful_set" "app" { + id = (known after apply) + wait_for_rollout = true + metadata { + generation = (known after apply) + labels = { + "app" = "dragonfly" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "dragonfly" + "app.kubernetes.io/version" = "v1.3.0" + "kubernetes.io/name" = "dragonfly" + "version" = "v1.3.0" } + name = "dragonfly" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + pod_management_policy = "OrderedReady" + replicas = "1" + revision_history_limit = 1 + service_name = "dragonfly" + selector { + match_labels = { + "kubernetes.io/name" = "dragonfly" } } + template { + metadata { + annotations = { + "prometheus.io/path" = "/metrics" + "prometheus.io/port" = "6379" + "prometheus.io/scrape" = "true" } + generation = (known after apply) + labels = { + "app" = "dragonfly" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "kubernetes.io/name" = "dragonfly" + "version" = "v1.3.0" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "nodepool" = "medium" } + restart_policy = "Always" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 30 + container { + args = [] + command = [] + image = "docker.dragonflydb.io/dragonflydb/dragonfly:v1.3.0" + image_pull_policy = "IfNotPresent" + name = "dragonfly" + stdin = false + stdin_once = false + 
termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + env { + name = "DFLY_PASSWORD" + value_from { + secret_key_ref { + key = "DFLY_PASSWORD" + name = "dragonfly-credentials" } } } + liveness_probe { + failure_threshold = 3 + initial_delay_seconds = 10 + period_seconds = 10 + success_threshold = 1 + timeout_seconds = 5 + http_get { + path = "/" + port = "dragonfly" + scheme = "HTTP" } } + port { + container_port = 6379 + name = "tcp-dragonfly" + protocol = "TCP" } + readiness_probe { + failure_threshold = 3 + initial_delay_seconds = 10 + period_seconds = 10 + success_threshold = 1 + timeout_seconds = 5 + http_get { + path = "/" + port = "dragonfly" + scheme = "HTTP" } } + resources { + limits = { + "memory" = "256Mi" } + requests = { + "cpu" = "100m" + "memory" = "128Mi" } } + security_context { + allow_privilege_escalation = false + privileged = false + read_only_root_filesystem = false + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume_mount { + mount_path = "/data" + mount_propagation = "None" + name = "data" + read_only = false } } + image_pull_secrets { + name = "ghcr-creds" } + init_container { + command = [ + "chown", + "-R", + "1000:1000", + "/data", ] + image = "busybox" + image_pull_policy = (known after apply) + name = "fix-permissions-0" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + resources { + limits = (known after apply) + requests = (known after apply) } + security_context { + allow_privilege_escalation = true + privileged = false + read_only_root_filesystem = false + run_as_group = "0" + run_as_non_root = false + run_as_user = "0" } + volume_mount { + mount_path = "/data" + mount_propagation = "None" + name = "data" + read_only = false } } + readiness_gate { + condition_type = (known after apply) } + security_context { + fs_group = "1000" + 
run_as_group    = "1000"
          + run_as_non_root = true
          + run_as_user     = "1000"
        }

      + volume {
          + name = "data"

          + persistent_volume_claim {
              + claim_name = "dragonfly-data"
              + read_only  = false
            }
        }
    }
  }

+ update_strategy {
    + type = "RollingUpdate"

    + rolling_update {
        + partition = 0
      }
  }
}
}

# module.s42.module.interface.kubernetes_deployment.app[0] will be updated in-place
~ resource "kubernetes_deployment" "app" {
    id = "production/interface"
    # (1 unchanged attribute hidden)

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "interface"
          # (5 unchanged attributes hidden)
      }

    ~ spec {
        ~ replicas = "2" -> "1"
        # (4 unchanged attributes hidden)

        ~ template {
            ~ metadata {
                ~ labels = {
                    ~ "version" = "v0.23" -> "latest"
                    # (4 unchanged elements hidden)
                  }
                # (2 unchanged attributes hidden)
              }

            ~ spec {
                # (11 unchanged attributes hidden)

                ~ container {
                    ~ image = "ghcr.io/42atomys/stud42:v0.23" -> "ghcr.io/42atomys/stud42:latest"
                      name  = "interface"
                      # (8 unchanged attributes hidden)
                      # (19 unchanged blocks hidden)
                  }
                # (4 unchanged blocks hidden)
              }
          }
        # (2 unchanged blocks hidden)
      }
  }

# module.s42.module.interface.kubernetes_horizontal_pod_autoscaler_v2.app[0] will be updated in-place
~ resource "kubernetes_horizontal_pod_autoscaler_v2" "app" {
    id = "production/interface"

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "interface"
          # (5 unchanged attributes hidden)
      }

    # (1 unchanged block hidden)
  }

# module.s42.module.interface.kubernetes_service.app[0] will be updated in-place
~ resource "kubernetes_service" "app" {
    id = "production/interface"
    # (2 unchanged attributes hidden)

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "interface"
          # (5 unchanged attributes hidden)
      }

    # (1 unchanged block hidden)
  }

# module.s42.module.jwtks_service.kubernetes_deployment.app[0] will be updated in-place
~ resource "kubernetes_deployment" "app" {
    id = "production/jwtks-service"
    # (1 unchanged attribute hidden)

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "jwtks-service"
          # (5 unchanged attributes hidden)
      }

    ~ spec {
        ~ replicas = "2" -> "1"
        # (4 unchanged attributes hidden)

        ~ template {
            ~ metadata {
                ~ labels = {
                    ~ "version" = "v0.23" -> "latest"
                    # (4 unchanged elements hidden)
                  }
                # (2 unchanged attributes hidden)
              }

            ~ spec {
                # (11 unchanged attributes hidden)

                ~ container {
                    ~ image = "ghcr.io/42atomys/stud42:v0.23" -> "ghcr.io/42atomys/stud42:latest"
                      name  = "jwtks-service"
                      # (8 unchanged attributes hidden)
                      # (10 unchanged blocks hidden)
                  }
                # (5 unchanged blocks hidden)
              }
          }
        # (2 unchanged blocks hidden)
      }
  }

# module.s42.module.jwtks_service.kubernetes_horizontal_pod_autoscaler_v2.app[0] will be updated in-place
~ resource "kubernetes_horizontal_pod_autoscaler_v2" "app" {
    id = "production/jwtks-service"

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "jwtks-service"
          # (5 unchanged attributes hidden)
      }

    # (1 unchanged block hidden)
  }

# module.s42.module.jwtks_service.kubernetes_manifest.certificate["grpc-internal"] will be updated in-place
~ resource "kubernetes_manifest" "certificate" {
    ~ manifest = {
        ~ metadata = {
            ~ labels = {
                ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
                ~ version                     = "v0.23" -> "latest"
                # (5 unchanged elements hidden)
              }
              name = "jwtks-service-grpc-internal"
              # (1 unchanged element hidden)
          }
        # (3 unchanged elements hidden)
      }
    ~ object = {
        ~ metadata = {
            ~ labels = {
                - "app"                          = "jwtks-service"
                - "app.kubernetes.io/created-by" = "github-actions"
                - "app.kubernetes.io/managed-by" = "terraform"
                - "app.kubernetes.io/part-of"    = "jwtks-service"
                - 
"app.kubernetes.io/version" = "v0.23" - "kubernetes.io/name" = "jwtks-service" - "version" = "v0.23" } -> (known after apply) name = "jwtks-service-grpc-internal" # (13 unchanged elements hidden) } # (3 unchanged elements hidden) } } # module.s42.module.jwtks_service.kubernetes_service.app[0] will be updated in-place ~ resource "kubernetes_service" "app" { id = "production/jwtks-service" # (2 unchanged attributes hidden) ~ metadata { ~ labels = { ~ "app.kubernetes.io/version" = "v0.23" -> "latest" ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } name = "jwtks-service" # (5 unchanged attributes hidden) } # (1 unchanged block hidden) } # module.s42.module.meilisearch_clean_tasks.kubernetes_cron_job.app[0] will be created + resource "kubernetes_cron_job" "app" { + id = (known after apply) + metadata { + generation = (known after apply) + labels = { + "app" = "meilisearch-clean-tasks" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "meilisearch-clean-tasks" + "app.kubernetes.io/version" = "v0.30" + "kubernetes.io/name" = "meilisearch-clean-tasks" + "version" = "v0.30" } + name = "meilisearch-clean-tasks" + namespace = "production" + resource_version = (known after apply) + uid = (known after apply) } + spec { + concurrency_policy = "Forbid" + failed_jobs_history_limit = 2 + schedule = "0 0 * * *" + starting_deadline_seconds = 0 + successful_jobs_history_limit = 1 + suspend = false + job_template { + metadata { + generation = (known after apply) + labels = { + "app" = "meilisearch-clean-tasks" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "app.kubernetes.io/part-of" = "meilisearch-clean-tasks" + "app.kubernetes.io/version" = "v0.30" + "kubernetes.io/name" = "meilisearch-clean-tasks" + "version" = "v0.30" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + 
active_deadline_seconds = 600 + backoff_limit = 0 + completion_mode = "NonIndexed" + completions = 1 + parallelism = 1 + ttl_seconds_after_finished = "0" + selector { + match_labels = (known after apply) + match_expressions { + key = (known after apply) + operator = (known after apply) + values = (known after apply) } } + template { + metadata { + annotations = { + "prometheus.io/path" = "/metrics" + "prometheus.io/port" = "8080" + "prometheus.io/scrape" = "false" } + generation = (known after apply) + labels = { + "app" = "meilisearch-clean-tasks" + "app.kubernetes.io/created-by" = "github-actions" + "app.kubernetes.io/managed-by" = "terraform" + "kubernetes.io/name" = "meilisearch-clean-tasks" + "version" = "v0.30" } + name = (known after apply) + resource_version = (known after apply) + uid = (known after apply) } + spec { + automount_service_account_token = true + dns_policy = "ClusterFirst" + enable_service_links = true + host_ipc = false + host_network = false + host_pid = false + hostname = (known after apply) + node_name = (known after apply) + node_selector = { + "nodepool" = "small" } + restart_policy = "OnFailure" + service_account_name = (known after apply) + share_process_namespace = false + termination_grace_period_seconds = 30 + container { + args = [ + "--fail", + "-X", + "DELETE", + "http://meilisearch:7700/tasks?statuses=failed,canceled,succeeded", + "-H", + "Authorization: Bearer $(MEILI_MASTER_KEY)", + "-H", + "Content-Type: application/json", ] + command = [] + image = "curlimages/curl:7.86.0" + image_pull_policy = "IfNotPresent" + name = "meilisearch-clean-tasks" + stdin = false + stdin_once = false + termination_message_path = "/dev/termination-log" + termination_message_policy = (known after apply) + tty = false + env { + name = "MEILI_MASTER_KEY" + value_from { + secret_key_ref { + key = "MEILI_MASTER_KEY" + name = "meilisearch-token" } } } + resources { + limits = { + "memory" = "128Mi" } + requests = { + "cpu" = "100m" + "memory" = 
"128Mi" } } + security_context { + allow_privilege_escalation = false + privileged = false + read_only_root_filesystem = false + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } } + image_pull_secrets { + name = "ghcr-creds" } + readiness_gate { + condition_type = (known after apply) } + security_context { + fs_group = "1000" + run_as_group = "1000" + run_as_non_root = true + run_as_user = "1000" } + volume { + name = (known after apply) + aws_elastic_block_store { + fs_type = (known after apply) + partition = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + azure_disk { + caching_mode = (known after apply) + data_disk_uri = (known after apply) + disk_name = (known after apply) + fs_type = (known after apply) + kind = (known after apply) + read_only = (known after apply) } + azure_file { + read_only = (known after apply) + secret_name = (known after apply) + secret_namespace = (known after apply) + share_name = (known after apply) } + ceph_fs { + monitors = (known after apply) + path = (known after apply) + read_only = (known after apply) + secret_file = (known after apply) + user = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + cinder { + fs_type = (known after apply) + read_only = (known after apply) + volume_id = (known after apply) } + config_map { + default_mode = (known after apply) + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + csi { + driver = (known after apply) + fs_type = (known after apply) + read_only = (known after apply) + volume_attributes = (known after apply) + node_publish_secret_ref { + name = (known after apply) } } + downward_api { + default_mode = (known after apply) + items { + mode = (known after apply) + path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known 
after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + empty_dir { + medium = (known after apply) + size_limit = (known after apply) } + fc { + fs_type = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_ww_ns = (known after apply) } + flex_volume { + driver = (known after apply) + fs_type = (known after apply) + options = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + flocker { + dataset_name = (known after apply) + dataset_uuid = (known after apply) } + gce_persistent_disk { + fs_type = (known after apply) + partition = (known after apply) + pd_name = (known after apply) + read_only = (known after apply) } + git_repo { + directory = (known after apply) + repository = (known after apply) + revision = (known after apply) } + glusterfs { + endpoints_name = (known after apply) + path = (known after apply) + read_only = (known after apply) } + host_path { + path = (known after apply) + type = (known after apply) } + iscsi { + fs_type = (known after apply) + iqn = (known after apply) + iscsi_interface = (known after apply) + lun = (known after apply) + read_only = (known after apply) + target_portal = (known after apply) } + local { + path = (known after apply) } + nfs { + path = (known after apply) + read_only = (known after apply) + server = (known after apply) } + persistent_volume_claim { + claim_name = (known after apply) + read_only = (known after apply) } + photon_persistent_disk { + fs_type = (known after apply) + pd_id = (known after apply) } + projected { + default_mode = (known after apply) + sources { + config_map { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + downward_api { + items { + mode = (known after apply) + 
path = (known after apply) + field_ref { + api_version = (known after apply) + field_path = (known after apply) } + resource_field_ref { + container_name = (known after apply) + divisor = (known after apply) + resource = (known after apply) } } } + secret { + name = (known after apply) + optional = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + service_account_token { + audience = (known after apply) + expiration_seconds = (known after apply) + path = (known after apply) } } } + quobyte { + group = (known after apply) + read_only = (known after apply) + registry = (known after apply) + user = (known after apply) + volume = (known after apply) } + rbd { + ceph_monitors = (known after apply) + fs_type = (known after apply) + keyring = (known after apply) + rados_user = (known after apply) + rbd_image = (known after apply) + rbd_pool = (known after apply) + read_only = (known after apply) + secret_ref { + name = (known after apply) + namespace = (known after apply) } } + secret { + default_mode = (known after apply) + optional = (known after apply) + secret_name = (known after apply) + items { + key = (known after apply) + mode = (known after apply) + path = (known after apply) } } + vsphere_volume { + fs_type = (known after apply) + volume_path = (known after apply) } } } } } } } } # module.s42.module.service-token.kubernetes_manifest.sealed_secret["ovh-s3-credentials"] will be created + resource "kubernetes_manifest" "sealed_secret" { + manifest = { + apiVersion = "bitnami.com/v1alpha1" + kind = "SealedSecret" + metadata = { + annotations = { + "sealedsecrets.bitnami.com/cluster-wide" = "false" + "sealedsecrets.bitnami.com/namespace-wide" = "true" } + name = "ovh-s3-credentials" + namespace = "production" } + spec = { + encryptedData = { + "AWS_ACCESS_KEY_ID" = 
"AgCcEfb0ziVTopKX359ktRyGfkH7KGldN2xF9F3uLNJ8qkgYCQgjFlvZXcM5IDkSP9ZGEqZKDlkxCQvLLTvtUeWmOmbyDSWtkG9e+pzm5+R1jIg+GLg/zS97Rbm+/v5E/K5NxzB45CDLjQukRHdlNQG6j6XH8A/nopAUKeQXtsTACt85IQw/7LFz83AS3uuylTb9c6yTmRW4C+CZR+TQpfDdRM3Fq2yaZillF0K50Gfh9WF5ahskiqBcNX2xe/WBREQqdE2cnlkMv8/Wzhu4HgsQuwjrm6dkAW2EkeOAWqBFwcSocC1DAeOVQA1xnxlXwA8v3+KXiH54D8nTOjYyAzW5s/QB+S+b/go5ljSsueJKnl88w8du4+C5yKG+gYsQms7PCHV+MkA0/lLIL6FKIcCZjYxmg/jguSROYz/q4zU6IvgPKUhuGU39ggLJEbZnjEjGLFaw+edG2VSuC1tNzELLcEgLiXxzfUsaySB6oGO4vgVbkiLrZxEDain1K+2Gb8MaAvE6iu8T4CVJHkT8r2Zzpa50rCMIdczajiZRilhnnDh/i3hdwl+UNyW/0wtd+X0UmEZV5wDKfq8WqJm3tS0yrJwE+7jvRe2lyoi/CoT0CTKJXE09BVBlB97aC+vmTx/kxRnewLSZD7n0gCjZyLoGEGTm1xmXVMNiIHCj9Bkgo2j1uR7fMvGoQnRL2VaNFEbmJ750GbKMbO9ORhNMFgkzjBmvjTRFneWXTdnVxuNgow==" + "AWS_SECRET_ACCESS_KEY" = "AgCjUZuzryNxDh2sVS8VeOB9dfPk/JLeYMBGpJhs24w5+QuTnDdUdHSEhaRa7pSLm/X5iMdqvQ/8wDC/PlndUM6cEdNaCpAT2q7UA7j5TUfOXBO6e0upIWMbKT+ce2PgPhljDhY4/IxG47qeJiQMepv8T5obn3xvMkMmNu70P9rxNayB0+WinCUJvDzXi4RTrvNf2r8JyBqUZ8nsHZ+Qa35tMr3eKZd/81Po1dw+EMvgJXlrLZMCS/savt8OwCKLQlySUW2vq95m3pbiy85sMhxwVMMcNOCBCcjmx/zF0mIJ1Q8aScOJ/sKG/1+Mo0Yxmh3Hx3eFqrvWBfD6YVVcddzCcjlCuts+8UcLKoZA5a6JWY8Yb9FaH7uCq00EjqeXWEZhPg8vhZxyEtvPIy3AvfXxp9JKJP82NL3KiDsFZ5Nf5Je9/xo2kpSuVgzpERA5NLLKam9jKKIQBySwLH4Yc4iKYVMt6G8qHkqHs4YYSHKS4SujP1XTZ2pisXe0syZrZCoTyKDxQ7ZBr26psM05VaGMctDBubdfCy5uWU4lng27/wI1RyhaZg1ZDeDUgROI4iRgJQ1cL7XY/K9vXLlm2ef87klijGVK4oDBEkGug6LrEHY0jbBuxNGeOYi/oQ50haYuu6pS9uEK2yFp1Ji5TvcKFNdTu8cDLgbboRB3/hdTYaeRHw8TPs2dVfsBWHILgDtIwU3QwUZzud6gOWMbt+XZVF/irGIqHMg1fRO+i2boEg==" } + template = { + metadata = { + annotations = { + "reflector.v1.k8s.emberstack.com/reflection-allowed" = "true" + "reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces" = "staging" + "reflector.v1.k8s.emberstack.com/reflection-auto-enabled" = "true" + "reflector.v1.k8s.emberstack.com/reflection-auto-namespaces" = "staging" + "sealedsecrets.bitnami.com/cluster-wide" = "false" + "sealedsecrets.bitnami.com/namespace-wide" = "true" } + name = 
"ovh-s3-credentials" + namespace = "production" } + type = "Opaque" } } } + object = { + apiVersion = "bitnami.com/v1alpha1" + kind = "SealedSecret" + metadata = { + annotations = (known after apply) + creationTimestamp = (known after apply) + deletionGracePeriodSeconds = (known after apply) + deletionTimestamp = (known after apply) + finalizers = (known after apply) + generateName = (known after apply) + generation = (known after apply) + labels = (known after apply) + managedFields = (known after apply) + name = "ovh-s3-credentials" + namespace = "production" + ownerReferences = (known after apply) + resourceVersion = (known after apply) + selfLink = (known after apply) + uid = (known after apply) } + spec = { + data = (known after apply) + encryptedData = { + "AWS_ACCESS_KEY_ID" = "AgCcEfb0ziVTopKX359ktRyGfkH7KGldN2xF9F3uLNJ8qkgYCQgjFlvZXcM5IDkSP9ZGEqZKDlkxCQvLLTvtUeWmOmbyDSWtkG9e+pzm5+R1jIg+GLg/zS97Rbm+/v5E/K5NxzB45CDLjQukRHdlNQG6j6XH8A/nopAUKeQXtsTACt85IQw/7LFz83AS3uuylTb9c6yTmRW4C+CZR+TQpfDdRM3Fq2yaZillF0K50Gfh9WF5ahskiqBcNX2xe/WBREQqdE2cnlkMv8/Wzhu4HgsQuwjrm6dkAW2EkeOAWqBFwcSocC1DAeOVQA1xnxlXwA8v3+KXiH54D8nTOjYyAzW5s/QB+S+b/go5ljSsueJKnl88w8du4+C5yKG+gYsQms7PCHV+MkA0/lLIL6FKIcCZjYxmg/jguSROYz/q4zU6IvgPKUhuGU39ggLJEbZnjEjGLFaw+edG2VSuC1tNzELLcEgLiXxzfUsaySB6oGO4vgVbkiLrZxEDain1K+2Gb8MaAvE6iu8T4CVJHkT8r2Zzpa50rCMIdczajiZRilhnnDh/i3hdwl+UNyW/0wtd+X0UmEZV5wDKfq8WqJm3tS0yrJwE+7jvRe2lyoi/CoT0CTKJXE09BVBlB97aC+vmTx/kxRnewLSZD7n0gCjZyLoGEGTm1xmXVMNiIHCj9Bkgo2j1uR7fMvGoQnRL2VaNFEbmJ750GbKMbO9ORhNMFgkzjBmvjTRFneWXTdnVxuNgow==" + "AWS_SECRET_ACCESS_KEY" = 
"AgCjUZuzryNxDh2sVS8VeOB9dfPk/JLeYMBGpJhs24w5+QuTnDdUdHSEhaRa7pSLm/X5iMdqvQ/8wDC/PlndUM6cEdNaCpAT2q7UA7j5TUfOXBO6e0upIWMbKT+ce2PgPhljDhY4/IxG47qeJiQMepv8T5obn3xvMkMmNu70P9rxNayB0+WinCUJvDzXi4RTrvNf2r8JyBqUZ8nsHZ+Qa35tMr3eKZd/81Po1dw+EMvgJXlrLZMCS/savt8OwCKLQlySUW2vq95m3pbiy85sMhxwVMMcNOCBCcjmx/zF0mIJ1Q8aScOJ/sKG/1+Mo0Yxmh3Hx3eFqrvWBfD6YVVcddzCcjlCuts+8UcLKoZA5a6JWY8Yb9FaH7uCq00EjqeXWEZhPg8vhZxyEtvPIy3AvfXxp9JKJP82NL3KiDsFZ5Nf5Je9/xo2kpSuVgzpERA5NLLKam9jKKIQBySwLH4Yc4iKYVMt6G8qHkqHs4YYSHKS4SujP1XTZ2pisXe0syZrZCoTyKDxQ7ZBr26psM05VaGMctDBubdfCy5uWU4lng27/wI1RyhaZg1ZDeDUgROI4iRgJQ1cL7XY/K9vXLlm2ef87klijGVK4oDBEkGug6LrEHY0jbBuxNGeOYi/oQ50haYuu6pS9uEK2yFp1Ji5TvcKFNdTu8cDLgbboRB3/hdTYaeRHw8TPs2dVfsBWHILgDtIwU3QwUZzud6gOWMbt+XZVF/irGIqHMg1fRO+i2boEg==" } + template = { + data = (known after apply) + metadata = { + annotations = { + "reflector.v1.k8s.emberstack.com/reflection-allowed" = "true" + "reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces" = "staging" + "reflector.v1.k8s.emberstack.com/reflection-auto-enabled" = "true" + "reflector.v1.k8s.emberstack.com/reflection-auto-namespaces" = "staging" + "sealedsecrets.bitnami.com/cluster-wide" = "false" + "sealedsecrets.bitnami.com/namespace-wide" = "true" } + name = "ovh-s3-credentials" + namespace = "production" } + type = "Opaque" } } } } # module.s42.module.webhooks_processor.kubernetes_deployment.app[0] will be updated in-place ~ resource "kubernetes_deployment" "app" { id = "production/webhooks-processor" # (1 unchanged attribute hidden) ~ metadata { ~ labels = { ~ "app.kubernetes.io/version" = "v0.23" -> "latest" ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } name = "webhooks-processor" # (5 unchanged attributes hidden) } ~ spec { # (5 unchanged attributes hidden) ~ template { ~ metadata { ~ labels = { ~ "version" = "v0.23" -> "latest" # (5 unchanged elements hidden) } # (2 unchanged attributes hidden) } ~ spec { # (11 unchanged attributes hidden) ~ container { ~ image = 
"ghcr.io/42atomys/stud42:v0.23" -> "ghcr.io/42atomys/stud42:latest"
                      name  = "webhooks-processor"
                      # (8 unchanged attributes hidden)
                      # (20 unchanged blocks hidden)
                  }
                # (3 unchanged blocks hidden)
              }
          }
        # (2 unchanged blocks hidden)
      }
  }

# module.s42.module.webhooks_processor.kubernetes_horizontal_pod_autoscaler_v2.app[0] will be updated in-place
~ resource "kubernetes_horizontal_pod_autoscaler_v2" "app" {
    id = "production/webhooks-processor"

    ~ metadata {
        ~ labels = {
            ~ "app.kubernetes.io/version" = "v0.23" -> "latest"
            ~ "version"                   = "v0.23" -> "latest"
            # (5 unchanged elements hidden)
          }
          name = "webhooks-processor"
          # (5 unchanged attributes hidden)
      }

    # (1 unchanged block hidden)
  }

Plan: 10 to add, 16 to change, 0 to destroy.

Warning: Attribute not found in schema

  with module.s42.kubernetes_manifest.rabbitmq_exchange_webhooks,
  on s42/broker.tf line 10, in resource "kubernetes_manifest" "rabbitmq_exchange_webhooks":
  10: resource "kubernetes_manifest" "rabbitmq_exchange_webhooks" {

Unable to find schema type for attribute: metadata.clusterName

(and 13 more similar warnings elsewhere)

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: apps-tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "apps-tfplan"
```