CI/CD for Terraform is tricky. To make life easier, specialised CI systems (aka TACOS) exist: Terraform Cloud, Spacelift, Atlantis, etc.
But why run two CI systems? Why not reuse the async-jobs infrastructure of your existing CI, with its compute, orchestration, logs, and so on?
Digger runs Terraform natively in your CI. This means:
- No need to host and maintain a server
- Secure by design
- Scalable compute with job isolation
- Role-based access control via OPA
- Open source
- No duplication of the CI/CD stack
- Secrets not shared with a third party

Read more about the differences with Atlantis in our blog post.
We are currently designing Digger to be multi-CI, so that in addition to GitHub Actions you can run Terraform/OpenTofu within other CIs such as GitLab CI, Azure DevOps, Bitbucket, TeamCity, CircleCI, and Jenkins, while still having the option to orchestrate jobs using Digger's Orchestrator Backend.
Read more in this blog post, and please share your requirements on Slack if you need support for other CIs. Your feedback and insight would help us a lot, as this feature is in active development.
The installation must be executed in two steps, as explained in the official Digger documentation:
1. Install the `digger-backend` Helm chart from https://diggerhq.github.io/helm-charts/, leaving empty all the data related to the GitHub App.
2. Visit `your_digger_hostname/github/setup` to install and configure the GitHub App.

To configure the Digger backend deployment with the Helm chart, you'll need to set several values in the `values.yaml` file. Below are the key configurations to consider (a combined example follows the list):
- `digger.image.repository`: The Docker image repository for the Digger backend (e.g., `registry.digger.dev/diggerhq/digger_backend`).
- `digger.image.tag`: The specific version tag of the Docker image to deploy (e.g., `"v0.4.2"`).
- `digger.service.type`: The type of Kubernetes Service to create, such as `ClusterIP`, `NodePort`, or `LoadBalancer`.
- `digger.service.port`: The port number that the service will expose (e.g., `3000`).
- `digger.ingress.enabled`: Set to `true` to create an Ingress for the service.
- `digger.annotations`: Add the annotations needed for your ingress controller configuration.
- `digger.ingress.host`: The hostname to use for the Ingress resource (e.g., `digger-backend.test`).
- `digger.ingress.path`: The path for the Ingress resource (e.g., `/`).
- `digger.ingress.className`: The class name to use for the Ingress (only considered for Kubernetes >= 1.18).
- `digger.ingress.tls.secretName`: The name of the TLS secret to use for Ingress encryption (e.g., `digger-backend-tls`).
- `digger.secret.*`: Various secrets needed by the application, such as `HTTP_BASIC_AUTH_PASSWORD` and `BEARER_AUTH_TOKEN`. You can provide them directly, or reference an existing Kubernetes secret by setting `useExistingSecret` to `true` and specifying `existingSecretName`.
- `digger.postgres.*`: If you're using an external Postgres database, configure the `user`, `database`, and `host` accordingly. Ensure you provide the `password` either directly or through an existing secret in the `secret.*` section.
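Putting those keys together, here is a minimal `values.yaml` sketch. It uses only the keys listed above; the hostname, tag, class name, and credentials are placeholders to replace, and the exact spelling of the secret keys should be verified against the chart's default `values.yaml`:

```yaml
digger:
  image:
    repository: registry.digger.dev/diggerhq/digger_backend
    tag: "v0.4.2"
  service:
    type: ClusterIP
    port: 3000
  annotations: {}            # ingress-controller-specific annotations, if any
  ingress:
    enabled: true
    host: digger-backend.test
    path: /
    className: nginx         # placeholder: match your ingress controller
    tls:
      secretName: digger-backend-tls
  secret:
    useExistingSecret: false
    # Placeholder credentials: generate strong random values instead.
    # Key names assumed from the variables above; check the chart's
    # default values.yaml for the exact spelling.
    httpBasicAuthPassword: "change-me"
    bearerAuthToken: "change-me"
  postgres:
    # External Postgres; the password is supplied directly here or
    # through the secret.* section above.
    user: digger
    database: digger
    host: my-postgres.example.internal
```

You would then pass this file to the chart with helm's standard `-f values.yaml` flag.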
Remember to replace placeholders and default values with your specific, sensitive information before deploying the chart. For example, it's essential to generate a strong `bearerAuthToken` and `postgresPassword` rather than using the defaults, for security reasons.
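If you'd rather not inline credentials at all, the `useExistingSecret` / `existingSecretName` switches described above let the chart read them from a Kubernetes Secret you manage yourself (created, for example, with `kubectl create secret generic`). A minimal sketch; the Secret name here is an example, and the keys it must contain should match what the chart expects:

```yaml
digger:
  secret:
    useExistingSecret: true
    existingSecretName: digger-backend-secrets   # a Secret you create beforehand
```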
You can also deploy a PostgreSQL database, ONLY FOR TEST PURPOSES, by configuring the `postgres.*` section:
- `postgres.enabled`: Set to `true` if you want to deploy a Postgres database.
- `postgres.secret.*`: As with the Digger secret, you can pass the `postgres` user password directly or through an existing secret.
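A hedged sketch of that test-only configuration, assuming the `postgres.secret` block mirrors the `digger.secret` one (the `postgresPassword` key name is inferred from the variable mentioned above, not verified):

```yaml
postgres:
  enabled: true   # TEST ONLY: deploys a Postgres alongside the backend
  secret:
    # Either set the password directly (key name assumed from the
    # postgresPassword variable above)...
    postgresPassword: "change-me"
    # ...or reference an existing Kubernetes Secret, as with digger.secret:
    # useExistingSecret: true
    # existingSecretName: my-postgres-secret
```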