vultr / terraform-provider-vultr

Terraform Vultr provider
https://www.terraform.io/docs/providers/vultr/
Mozilla Public License 2.0

Object storage bucket resource #55

Open davidsbond opened 4 years ago

davidsbond commented 4 years ago

Using this provider plugin I can enable object storage. However, it does not seem to have any resource for creating the individual buckets themselves. This means that while I can enable the storage, I still have to create buckets manually.

Is there a plan to add a resource for this?
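
For context, the provider today exposes only the storage subscription itself; a minimal sketch of what is currently possible (the label is illustrative):

```hcl
# Enables an object storage subscription, but provides no way to
# create buckets inside it from Terraform.
resource "vultr_object_storage" "example" {
  cluster_id = 2
  label      = "example-storage"
}
```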

ddymko commented 4 years ago

We are looking into adding support for interacting with buckets, but we do not have a set release date for this.

johnrichardrinehart commented 3 years ago

@ddymko Would you accept a PR for this?

ddymko commented 3 years ago

@johnrichardrinehart Yeah definitely!

I haven't had time to sit down and implement this. For the bucket interaction we would have to use https://github.com/aws/aws-sdk-go

johnrichardrinehart commented 3 years ago

I see. I'll work on implementing this over the next week.

bryanherger commented 2 years ago

A workaround is to use a "minio" provider to create an S3 bucket resource. This example main.tf worked for me:

terraform {
  required_providers {
    vultr = {
      source = "vultr/vultr"
      version = "2.9.1"
    }
    minio = {
      # ATTENTION: use the current version here!
      version = "0.1.0"
      source  = "refaktory/minio"
    }
  }
}

provider "vultr" {
  # In your .bashrc you need to set
  # export VULTR_API_KEY=""
}

# use MinIO provider to manage Vultr object storage
provider "minio" {
  endpoint   = "ewr1.vultrobjects.com"
  ssl        = true
  access_key = vultr_object_storage.example_objects.s3_access_key
  secret_key = vultr_object_storage.example_objects.s3_secret_key
}

resource "vultr_object_storage" "example_objects" {
    cluster_id = 2
    label = "exampletfstorage"
}

resource "minio_bucket" "example_s3_bucket" {
  name = "exampletfbucket"
}
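
If you need more than one bucket from the same subscription, the minio provider's bucket resource also composes with `for_each` (a hedged sketch; the bucket names are illustrative):

```hcl
# Create several buckets against the same Vultr object storage endpoint.
resource "minio_bucket" "example_buckets" {
  for_each = toset(["assets", "backups", "logs"])
  name     = "exampletf-${each.key}"
}
```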

tojkee commented 1 year ago

@johnrichardrinehart this is the longest week I have ever seen. Do you have any updates?

johnrichardrinehart commented 1 year ago

@tojkee I may have gotten distracted... Anyone else is free to pick this up and should assume that I need more time this week...

dsander commented 1 year ago

Having this feature would be great. A bit of context on why the posted workaround doesn't always work: we create compute instances and object storage dynamically based on an external configuration, which means using a second provider for bucket creation is not an option. We worked around it by creating the bucket in the application that uses the object storage (which works great, because the access key isn't limited in any way).

The second issue is that deleting an object storage subscription without first deleting its buckets does not free up the bucket names to be reused right away (it takes somewhere between 48 and 72 hours for them to become available again). As a workaround, we use a provisioner to delete the buckets before deleting the object storage:


resource "vultr_object_storage" "os" {
  cluster_id = data.vultr_object_storage_cluster.storage_cluster.id
  label      = "myobjectstorage"

  # Vultr does not allow reusing bucket names right away after deleting an object storage, so we have to delete the buckets first
  provisioner "local-exec" {
    when       = destroy
    on_failure = fail
    command    = "./empty_object_storage.sh ${self.s3_access_key} ${self.s3_secret_key} 'us-east-1' 'https://${self.s3_hostname}'"
  }
}

The referenced empty_object_storage.sh:

#!/usr/bin/env bash

set -e
set -o pipefail

if ! [ -x "$(command -v s3cmd)" ]; then
  echo 'Error: s3cmd is not installed.' >&2
  exit 1
fi

ACCESS_KEY=$1
SECRET_KEY=$2
REGION=$3
ENDPOINT=$4

buckets=$(s3cmd --access_key="${ACCESS_KEY}" --secret_key="${SECRET_KEY}" --host="${ENDPOINT}" --host-bucket="${ENDPOINT}" --region="${REGION}" ls | awk '{print $3}')

if [ -z "${buckets}" ]; then
  echo "No buckets to delete"
  exit 0
fi

while read -r bucket; do
  echo "Deleting bucket: '${bucket}'."
  s3cmd --access_key="${ACCESS_KEY}" --secret_key="${SECRET_KEY}" --host="${ENDPOINT}" --host-bucket="${ENDPOINT}" --region="${REGION}" rb --force --recursive "${bucket}"
done <<< "${buckets}"
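
The `awk '{print $3}'` step relies on the layout of `s3cmd ls` output, which prints the creation date, time, and bucket URL as three whitespace-separated columns. A quick offline check of that extraction (the sample lines mimic s3cmd's format; the bucket names are made up):

```shell
# Simulated `s3cmd ls` output: "DATE  TIME  s3://bucket" per line.
# awk splits on whitespace, so $3 is the bucket URL.
printf '2012-03-04 02:05  s3://bucket-a\n2012-03-05 11:30  s3://bucket-b\n' \
  | awk '{print $3}'
# prints:
# s3://bucket-a
# s3://bucket-b
```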