aknuds1 / do-spaces-tool


Error when uploading files to DO Spaces #1

Closed: grdavies closed this issue 6 years ago

grdavies commented 6 years ago

When used as part of the tectonic-installer to deploy to DO, configuration of the master server fails because assets.zip and kubeconfig are not present in the bucket created by Terraform earlier in the script.

When using this container locally with the following command: docker run -t --net=host -e ACCESS_KEY_ID -e SECRET_ACCESS_KEY -e REGION -v /tmp:/spaces aknudsen/do-spaces-tool:0.2.0 upload /spaces/test.txt atectonic-1018074834beb35b2f97d5dee5f9283b/test.txt

I receive the following error:

Uploading file /spaces/test.txt...
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/boto3/s3/transfer.py", line 275, in upload_file
    future.result()
  File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 73, in result
    return self._coordinator.result()
  File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 233, in result
    raise self._exception
  File "/usr/local/lib/python3.6/site-packages/s3transfer/tasks.py", line 126, in __call__
    return self._execute_main(kwargs)
  File "/usr/local/lib/python3.6/site-packages/s3transfer/tasks.py", line 150, in _execute_main
    return_value = self._main(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/s3transfer/upload.py", line 679, in _main
    client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidArgument) when calling the PutObject operation: Server Side Encryption with KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "do-spaces-tool.py", line 65, in <module>
    args.func(args)
  File "do-spaces-tool.py", line 33, in cmd_upload
    'ServerSideEncryption': 'AES256',
  File "/usr/local/lib/python3.6/site-packages/boto3/s3/inject.py", line 110, in upload_file
    extra_args=ExtraArgs, callback=Callback)
  File "/usr/local/lib/python3.6/site-packages/boto3/s3/transfer.py", line 283, in upload_file
    filename, '/'.join([bucket, key]), e))
boto3.exceptions.S3UploadFailedError: Failed to upload /spaces/test.txt to atectonic-1018074834beb35b2f97d5dee5f9283b/test.txt: An error occurred (InvalidArgument) when calling the PutObject operation: Server Side Encryption with KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms
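
Distilling the traceback, the failing call is essentially the following - a minimal sketch of what the tool does, reconstructed from the traceback (bucket and key are the ones from the command above):

import os
import boto3.session

REGION = os.environ.get('REGION', 'ams3')
session = boto3.session.Session()
client = session.client(
    's3', region_name=REGION,
    endpoint_url='https://{}.digitaloceanspaces.com'.format(REGION),
    aws_access_key_id=os.environ['ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['SECRET_ACCESS_KEY'],
)
# This ExtraArgs (do-spaces-tool.py line 33 in the traceback) is what triggers
# the InvalidArgument error from Spaces:
client.upload_file('/spaces/test.txt',
                   'atectonic-1018074834beb35b2f97d5dee5f9283b', 'test.txt',
                   ExtraArgs={'ServerSideEncryption': 'AES256'})
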
grdavies commented 6 years ago

I'll try to take a better look at this over the weekend - I just wanted to let you know ASAP. I'm flying to a meeting and won't get any spare time until tomorrow evening at the earliest!

aknuds1 commented 6 years ago

Interesting, never seen this issue before! Thanks for reporting. Maybe I'll try to spin up a test cluster to see if this happens to me as well.

aknuds1 commented 6 years ago

I just spun up a cluster on DigitalOcean using my branch, and it worked just fine. Could it be you have some sort of configuration issue? Can I see your config? Are you using the same branch?

grdavies commented 6 years ago

Wow... I have literally no idea why mine fails. I am using the same branch as you (commit: 938d857c9c5588bd4806318b470f844cf0d7de60) and just tried to deploy another cluster, with no luck. I redeployed the VM that I'm running the installer from in case there was an issue there, but the same failure occurs during cluster deployment. Seeing as this works for you and not me, I've got to believe it's either something up with where I'm running the installer from or with my DO account. I'll raise a support ticket with DO and see what they say - thanks for your assistance though! If you see anything below that jumps out at you, please let me know.

tectonic-installer log snippet

null_resource.kubeconfig: Still creating... (10s elapsed)
null_resource.kubeconfig (local-exec): + docker run -t --net=host -e ACCESS_KEY_ID -e SECRET_ACCESS_KEY -e REGION -v /tmp:/spaces aknudsen/do-spaces-tool:0.2.0 upload /spaces/kubeconfig atectonic-1018074834beb35b2f97d5dee5f9283b/kubeconfig
module.masters.digitalocean_loadbalancer.console: Still creating... (20s elapsed)
null_resource.kubeconfig (local-exec): Uploading file /spaces/kubeconfig...
null_resource.kubeconfig (local-exec): Traceback (most recent call last):
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/boto3/s3/transfer.py", line 275, in upload_file
null_resource.kubeconfig (local-exec):     future.result()
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 73, in result
null_resource.kubeconfig (local-exec):     return self._coordinator.result()
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/s3transfer/futures.py", line 233, in result
null_resource.kubeconfig (local-exec):     raise self._exception
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/s3transfer/tasks.py", line 126, in __call__
null_resource.kubeconfig (local-exec):     return self._execute_main(kwargs)
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/s3transfer/tasks.py", line 150, in _execute_main
null_resource.kubeconfig (local-exec):     return_value = self._main(**kwargs)
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/s3transfer/upload.py", line 679, in _main
null_resource.kubeconfig (local-exec):     client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args)
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 314, in _api_call
null_resource.kubeconfig (local-exec):     return self._make_api_call(operation_name, kwargs)
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 612, in _make_api_call
null_resource.kubeconfig (local-exec):     raise error_class(parsed_response, operation_name)
null_resource.kubeconfig (local-exec): botocore.exceptions.ClientError: An error occurred (InvalidArgument) when calling the PutObject operation: Server Side Encryption with KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms

null_resource.kubeconfig (local-exec): During handling of the above exception, another exception occurred:

null_resource.kubeconfig (local-exec): Traceback (most recent call last):
null_resource.kubeconfig (local-exec):   File "do-spaces-tool.py", line 65, in <module>
null_resource.kubeconfig (local-exec):     args.func(args)
null_resource.kubeconfig (local-exec):   File "do-spaces-tool.py", line 33, in cmd_upload
null_resource.kubeconfig (local-exec):     'ServerSideEncryption': 'AES256',
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/boto3/s3/inject.py", line 110, in upload_file
null_resource.kubeconfig (local-exec):     extra_args=ExtraArgs, callback=Callback)
null_resource.kubeconfig (local-exec):   File "/usr/local/lib/python3.6/site-packages/boto3/s3/transfer.py", line 283, in upload_file
null_resource.kubeconfig (local-exec):     filename, '/'.join([bucket, key]), e))
null_resource.kubeconfig (local-exec): boto3.exceptions.S3UploadFailedError: Failed to upload /spaces/kubeconfig to atectonic-1018074834beb35b2f97d5dee5f9283b/kubeconfig: An error occurred (InvalidArgument) when calling the PutObject operation: Server Side Encryption with KMS managed key requires HTTP header x-amz-server-side-encryption : aws:kms
null_resource.kubeconfig (local-exec): + rm -f /tmp/kubeconfig
null_resource.kubeconfig: Creation complete after 15s (ID: 6653233713300801319)

terraform.tfvars

// Your Tectonic cluster administration email address
tectonic_admin_email = "REDACTED"

// Your desired Tectonic cluster administration password
tectonic_admin_password = "REDACTED"

// Your API token for DigitalOcean
tectonic_do_token = "REDACTED"

// Access key ID and secret access key for DigitalOcean Spaces.
// The Tectonic Installer uses a Spaces bucket to store Tectonic assets and kubeconfig.
tectonic_do_spaces_access_key_id = "REDACTED"
tectonic_do_spaces_secret_access_key = "REDACTED"

// Instance size for the etcd node(s). Read the [etcd recommended hardware](https://coreos.com/etcd/docs/latest/op-guide/hardware.html) guide for best performance
tectonic_do_etcd_droplet_size = "1gb"

// (optional) Instance size for the master node(s).
tectonic_do_master_droplet_size = "1gb"

// (optional) Instance size for the worker node(s).
tectonic_do_worker_droplet_size = "1gb"

// A list of DigitalOcean SSH IDs to enable in the created droplets.
tectonic_do_ssh_keys = [17921485, 17923378]

// (optional) The region to create your droplets in.
tectonic_do_droplet_region = "sfo2"

// The base DNS domain of the cluster. It must NOT contain a trailing period. Some
// DNS providers will automatically add this if necessary.
//
// Example: `openstack.dev.coreos.systems`.
//
// Note: This field MUST be set manually prior to creating the cluster.
// This applies only to cloud platforms.
//
// [Azure-specific NOTE]
// To use Azure-provided DNS, `tectonic_base_domain` should be set to `""`
// If using DNS records, ensure that `tectonic_base_domain` is set to a properly configured external DNS zone.
// Instructions for configuring delegated domains for Azure DNS can be found here: https://docs.microsoft.com/en-us/azure/dns/dns-delegate-domain-azure-dns
tectonic_base_domain = "rossdavies.info"

// (optional) The content of the PEM-encoded CA certificate, used to generate Tectonic Console's server certificate.
// If left blank, a CA certificate will be automatically generated.
// tectonic_ca_cert = ""

// (optional) The content of the PEM-encoded CA key, used to generate Tectonic Console's server certificate.
// This field is mandatory if `tectonic_ca_cert` is set.
// tectonic_ca_key = ""

// (optional) The algorithm used to generate tectonic_ca_key.
// The default value is currently recommended.
// This field is mandatory if `tectonic_ca_cert` is set.
// tectonic_ca_key_alg = "RSA"

// (optional) This declares the IP range to assign Kubernetes pod IPs in CIDR notation.
// tectonic_cluster_cidr = "10.2.0.0/16"

// The name of the cluster.
// If used in a cloud-environment, this will be prepended to `tectonic_base_domain` resulting in the URL to the Tectonic console.
//
// Note: This field MUST be set manually prior to creating the cluster.
// Warning: Special characters in the name like '.' may cause errors on OpenStack platforms due to resource name constraints.
tectonic_cluster_name = "tectonic"

// (optional) The Container Linux update channel.
//
// Examples: `stable`, `beta`, `alpha`
// tectonic_container_linux_channel = "stable"

// The Container Linux version to use. Set to `latest` to select the latest available version for the selected update channel.
//
// Examples: `latest`, `1465.6.0`
tectonic_container_linux_version = "latest"

// (optional) A list of PEM encoded CA files that will be installed in /etc/ssl/certs on etcd, master, and worker nodes.
// tectonic_custom_ca_pem_list = ""

// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key algorithm.
// tectonic_ddns_key_algorithm = ""

// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key name.
// tectonic_ddns_key_name = ""

// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server key secret.
// tectonic_ddns_key_secret = ""

// (optional) This only applies if you use the modules/dns/ddns module.
//
// Specifies the RFC2136 Dynamic DNS server IP/host to register IP addresses to.
// tectonic_ddns_server = ""

// (optional) DNS prefix used to construct the console and API server endpoints.
// tectonic_dns_name = ""

// (optional) The size in MB of the PersistentVolume used for handling etcd backups.
// tectonic_etcd_backup_size = "512"

// (optional) The name of an existing Kubernetes StorageClass that will be used for handling etcd backups.
// tectonic_etcd_backup_storage_class = ""

// (optional) The path of the file containing the CA certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variable `tectonic_etcd_servers` must also be set.
// tectonic_etcd_ca_cert_path = "/dev/null"

// (optional) The path of the file containing the client certificate for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_key_path` must also be set.
// tectonic_etcd_client_cert_path = "/dev/null"

// (optional) The path of the file containing the client key for TLS communication with etcd.
//
// Note: This works only when used in conjunction with an external etcd cluster.
// If set, the variables `tectonic_etcd_servers`, `tectonic_etcd_ca_cert_path`, and `tectonic_etcd_client_cert_path` must also be set.
// tectonic_etcd_client_key_path = "/dev/null"

// The number of etcd nodes to be created.
// If set to zero, the count of etcd nodes will be determined automatically.
//
// Note: This is not supported on bare metal.
tectonic_etcd_count = "0"

// (optional) List of external etcd v3 servers to connect with (hostnames/IPs only).
// Needs to be set if using an external etcd cluster.
// Note: If this variable is defined, the installer will not create self-signed certs.
// To provide a CA certificate to trust the etcd servers, set "tectonic_etcd_ca_cert_path".
//
// Example: `["etcd1", "etcd2", "etcd3"]`
// tectonic_etcd_servers = ""

// (optional) If set to `true`, all etcd endpoints will be configured to use the "https" scheme.
//
// Note: If `tectonic_experimental` is set to `true` this variable has no effect, because the experimental self-hosted etcd always uses TLS.
// tectonic_etcd_tls_enabled = true

// The path to the tectonic licence file.
// You can download the Tectonic license file from your Account overview page at [1].
//
// [1] https://account.coreos.com/overview
tectonic_license_path = "/root/tectonic-installer/build/prod/tectonic-license.txt"

// The number of master nodes to be created.
// This applies only to cloud platforms.
tectonic_master_count = "1"

// (optional) Configures the network to be used in Tectonic. One of the following values can be used:
//
// - "flannel": enables overlay networking only. This is implemented by flannel using VXLAN.
//
// - "canal": [ALPHA] enables overlay networking including network policy. Overlay is implemented by flannel using VXLAN. Network policy is implemented by Calico.
//
// - "calico": [ALPHA] enables BGP based networking. Routing and network policy is implemented by Calico. Note this has been tested on baremetal installations only.
//
// - "none": disables the installation of any Pod level networking layer provided by Tectonic. By setting this value, users are expected to deploy their own solution to enable network connectivity for Pods and Services.
// tectonic_networking = "flannel"

// The path to the pull secret file in JSON format.
// This is known as a "Docker pull secret", as produced by the docker login [1] command.
// A sample JSON content is shown in [2].
// You can download the pull secret from your Account overview page at [3].
//
// [1] https://docs.docker.com/engine/reference/commandline/login/
//
// [2] https://coreos.com/os/docs/latest/registry-authentication.html#manual-registry-auth-setup
//
// [3] https://account.coreos.com/overview
tectonic_pull_secret_path = "/root/tectonic-installer/build/prod/config.json"

// (optional) This declares the IP range to assign Kubernetes service cluster IPs in CIDR notation.
// The maximum size of this IP range is /12
// tectonic_service_cidr = "10.3.0.0/16"

// Validity period of the self-signed certificates (in hours).
// Default is 3 years.
// This setting is ignored if user provided certificates are used.
tectonic_tls_validity_period = "26280"

// The number of worker nodes to be created.
// This applies only to cloud platforms.
tectonic_worker_count = "3"
grdavies commented 6 years ago

So... I removed the ExtraArgs statement, ExtraArgs={ 'ServerSideEncryption': 'AES256' }, from the file-upload part of the script to match the published DO Spaces boto3 documentation (https://www.digitalocean.com/community/questions/how-to-use-digitalocean-spaces-with-the-aws-s3-sdks), and using my local copy of the container I'm now able to upload to Spaces...
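
In code terms the change is just dropping ExtraArgs from the upload call - a minimal before/after sketch:

# Before - Spaces rejects this with the InvalidArgument/KMS error above:
client.upload_file(file_path, bucket, key,
                   ExtraArgs={'ServerSideEncryption': 'AES256'})

# After - a plain upload, with no ServerSideEncryption parameter:
client.upload_file(file_path, bucket, key)

Here's the full modified script: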

#!/usr/bin/env python3
import argparse
import os.path
import boto3.session

REGION = os.environ.get('REGION', 'ams3')  # uppercase, matching -e REGION; defaults to ams3
ACCESS_KEY_ID = os.environ['ACCESS_KEY_ID']
SECRET_ACCESS_KEY = os.environ['SECRET_ACCESS_KEY']

def cmd_upload(args):
    file_path = os.path.abspath(args.file)
    bucket, key = args.location.split('/', 1)
    assert bucket
    assert key

    session = boto3.session.Session()
    # Spaces is S3-compatible; point boto3 at the regional Spaces endpoint
    client = session.client(
        's3', region_name=REGION,
        endpoint_url='https://{}.digitaloceanspaces.com'.format(REGION),
        aws_access_key_id=ACCESS_KEY_ID,
        aws_secret_access_key=SECRET_ACCESS_KEY,
    )
    resp = client.list_buckets()
    if bucket not in [x['Name'] for x in resp['Buckets']]:
        print('Creating bucket \'{}\''.format(bucket))
        client.create_bucket(Bucket=bucket)

    print('Uploading file {}...'.format(file_path))
    # No ExtraArgs here: Spaces rejects the ServerSideEncryption parameter
    client.upload_file(file_path, bucket, key)

def cmd_download(args):
    bucket, key = args.location.split('/', 1)

    session = boto3.session.Session()
    client = session.client(
        's3', region_name=REGION,
        endpoint_url='https://{}.digitaloceanspaces.com'.format(REGION),
        aws_access_key_id=ACCESS_KEY_ID,
        aws_secret_access_key=SECRET_ACCESS_KEY,
    )
    print('Downloading file {}...'.format(args.destination))
    client.download_file(bucket, key, args.destination)

cl_parser = argparse.ArgumentParser()
sub_cl_parsers = cl_parser.add_subparsers()

upload_cl_parser = sub_cl_parsers.add_parser('upload')
upload_cl_parser.add_argument('file')
upload_cl_parser.add_argument('location')
upload_cl_parser.set_defaults(func=cmd_upload)

download_cl_parser = sub_cl_parsers.add_parser('download')
download_cl_parser.add_argument('location')
download_cl_parser.add_argument('destination')
download_cl_parser.set_defaults(func=cmd_download)

args = cl_parser.parse_args()
args.func(args)
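
For reference, usage is unchanged: with ACCESS_KEY_ID, SECRET_ACCESS_KEY and REGION exported in the environment, the script is driven via the upload and download subcommands, e.g. (bucket name here is a placeholder):

python3 do-spaces-tool.py upload /spaces/test.txt my-bucket/test.txt
python3 do-spaces-tool.py download my-bucket/test.txt /tmp/test.txt
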
aknuds1 commented 6 years ago

@grdavies Interesting find! I'm gonna give it a try, maybe it's just extraneous on DO!

aknuds1 commented 6 years ago

I can confirm it works with your fix! Thanks!

aknuds1 commented 6 years ago

Fixed by 2dc768e396a4013d81224e7f8b85ff10d03b0807.