Storyblok Terraform Provider

The Terraform Storyblok provider allows you to configure your Storyblok space using infrastructure-as-code principles.

Commercial support

Need support implementing this Terraform provider in your organization? We are able to offer commercial support. Please contact us at opensource@labdigital.nl

Quick start

Read our documentation at https://registry.terraform.io/providers/labd/storyblok/latest/docs and check out the examples.

Usage

The provider is distributed via the Terraform registry. To use it, configure the required_providers block. For example:

terraform {
  required_providers {
    storyblok = {
      source = "labd/storyblok"

      # It's recommended to pin the version, e.g.:
      # version = "~> 0.0.1"
    }
  }
}
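
The provider itself also needs credentials for the Storyblok Management API. A minimal sketch, assuming the provider accepts a token argument (check the provider documentation for the exact configuration options):

variable "storyblok_management_token" {
  type      = string
  sensitive = true
}

provider "storyblok" {
  # "token" is an assumed argument name; see the provider docs
  # for the exact configuration options.
  token = var.storyblok_management_token
}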

Binaries

Packages of the releases are available at https://github.com/labd/terraform-provider-storyblok/releases. See the Terraform documentation for more information about installing third-party providers.

Contributing

Building the provider

Clone the repository and run the following command:

$ task build-local
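
To run Terraform against the locally built binary instead of the registry release, you can add a dev_overrides block to your Terraform CLI configuration (~/.terraformrc). The path below is an assumption and depends on where the Taskfile installs the binary:

provider_installation {
  dev_overrides {
    # Adjust the path to wherever task build-local places the binary.
    "labd/storyblok" = "/home/you/go/bin"
  }

  # Use the registry as normal for all other providers.
  direct {}
}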

Debugging / Troubleshooting

There are two environment settings for troubleshooting:

Note this generates a lot of output!
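
Independent of any provider-specific settings, you can raise Terraform's own log level with the standard TF_LOG environment variable, which also surfaces the provider's debug output:

$ TF_LOG=DEBUG terraform plan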

Releasing

Install "changie"

brew tap miniscruff/changie https://github.com/miniscruff/changie
brew install changie

Add an unreleased change file for each change (add/fix/remove/etc.) by running:

changie new

Commit the generated change file and a new PR will be created.

Once that PR is merged and its GitHub Action has completed, a new release will be live.

Testing

Running unit tests

$ task test

Running acceptance tests

$ task testacc

Note that acceptance tests by default run against pre-recorded results. The test stubs can be found in internal/assets. A good habit is to create a separate stub file per test case, as otherwise there might be conflicts when multiple tests run in parallel.

When adding or updating tests locally you can set RECORD=true to re-record results. This will clear all previous results and create a new snapshot of the API interaction.
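
For example, to re-record the stubs while running the acceptance tests (this assumes valid Storyblok credentials are available in your environment):

$ RECORD=true task testacc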

Authors

This project is developed by Lab Digital. We welcome additional contributors. Please see our GitHub repository for more information.