GoogleCloudPlatform / berglas

A tool for managing secrets on Google Cloud
https://cloud.google.com/secret-manager
Apache License 2.0

Using berglas on Container-Optimized OS #24

Closed: caquino closed this issue 5 years ago

caquino commented 5 years ago

Hi,

I've been trying to use berglas on Container-Optimized OS, but it fails to detect the runtime. Is that by design, or is this unexpected?

When I examine the Docker logs for the container, I see the following error message:

failed to detect runtime environment: unknown runtime

It's related to this code, which made me wonder whether it's by design or not.

Or should the --local flag be used for Container-Optimized OS? I could not spot a reliable way to identify the runtime for it other than talking to the metadata service.
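
For reference, the closest thing to a signal I found is the metadata server itself, e.g. checking the instance image (just a sketch of the kind of query I mean, not anything berglas does today):

# Requires the Metadata-Flavor header; returns something like
# projects/cos-cloud/global/images/cos-stable-..., which identifies COS
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/image"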

Thanks!

sethvargo commented 5 years ago

Hi @caquino

Thanks for opening an issue. From where are you running Container-Optimized OS? Is this inside GKE or GCP?

caquino commented 5 years ago

Hi @sethvargo

I'm running it directly on GCP, and I've been running into a few walls while trying to make it work.

I'm provisioning a GCP instance running the Container-Optimized OS and starting a container running berglas in exec mode for Atlantis.

I managed to run it with the --local flag, but then I started hitting other issues, for example that containers running on Container-Optimized OS can't easily use the service account linked to the instance.

Any pointers in the right direction would be more than welcome, thanks!

sethvargo commented 5 years ago

Berglas only auto-detects Cloud Functions and Cloud Run because, to the best of my knowledge, you can't set environment variables on a plain instance. I was investigating using instance metadata as an alternative, but then it's unclear where the resolved values should go.
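
Roughly, the auto-detection amounts to checking environment variables those platforms set; a sketch (variable names are the documented platform ones, not necessarily the exact set berglas inspects):

# FUNCTION_NAME is set by Cloud Functions, K_SERVICE by Cloud Run
if [ -n "${FUNCTION_NAME:-}" ]; then
  echo "cloud functions"
elif [ -n "${K_SERVICE:-}" ]; then
  echo "cloud run"
else
  echo "unknown runtime" # the branch a plain GCE/COS instance lands in
fi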

Can you share some of the errors you are getting? You shouldn't need the service account directly, everything should work provided it has the right permissions.
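
For reference, "the right permissions" usually means read access to the objects in the bucket and decrypt access on the KMS key, along these lines (bucket, key, keyring, and service account names are placeholders):

# Grant read access to the secrets bucket
gsutil iam ch serviceAccount:<sa-email>:roles/storage.objectViewer gs://<bucket>

# Grant decrypt access on the berglas KMS key
gcloud kms keys add-iam-policy-binding <key> \
    --keyring <keyring> --location global \
    --member serviceAccount:<sa-email> \
    --role roles/cloudkms.cryptoKeyDecrypter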

caquino commented 5 years ago

Actually, you can set environment variables for the containers running on the instance; setting them on the instance itself would, as you said, require metadata/cloud-config. This is the gcloud command I used. I've redacted some of the data, but I can share all the info if necessary.

gcloud beta compute --project=$(PROJECT_ID) instances create-with-container atlantis \
    --zone=us-central1-c --machine-type=f1-micro \
    --metadata=google-logging-enabled=true --maintenance-policy=MIGRATE \
    --service-account=<service account> \
    --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
    --container-image=gcr.io/<repo>/atlantis:latest \
    --container-restart-policy=always \
    --container-env="ATLANTIS_GH_TOKEN=berglas://<bucket>/gh-token,ATLANTIS_GH_USER=berglas://<bucket>/gh-user,ATLANTIS_GH_WEBHOOK_SECRET=berglas://<bucket>/gh-webhook-secret,ATLANTIS_REPO_WHITELIST='github.com/<org>/*'"

This is the error I'm receiving:

failed to access secret <bucket>/gh-webhook-secret: failed to decrypt dek: rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.

I've checked the permissions on the bucket and KMS, and this berglas command line worked fine on Cloud Run. I'm not running this on Cloud Run because sadly Atlantis needs persistence.
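
One way to double-check which scopes the instance token actually carries is a standard metadata-server query from inside the VM:

curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"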

I'm aware that I could be running this on GKE (using your repo), but I'm looking for a solution with a smaller footprint that will let me run most, if not all, Terraform from Atlantis itself, to solve a chicken-and-egg problem.

And this is how I'm using berglas on my Dockerfile:

FROM runatlantis/atlantis:latest

COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas

ENTRYPOINT exec /bin/berglas exec --local -- /usr/local/bin/docker-entrypoint.sh server

Thanks!

sethvargo commented 5 years ago

Hi @caquino

Thanks for sending that information over. As the error says, the request had insufficient authentication scopes.

There are two "levels" of auth: OAuth scopes and service account permissions. You provided the following scopes to the VM:

https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write
https://www.googleapis.com/auth/servicecontrol
https://www.googleapis.com/auth/service.management.readonly
https://www.googleapis.com/auth/trace.append

None of those scopes provide access to the KMS API. You need to add the following scope to the list:

https://www.googleapis.com/auth/cloudkms
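
Concretely, the --scopes flag from your command above becomes:

--scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/cloudkms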

Since GCP already has fine-grained IAM permissions and you are using a dedicated service account, you may want to drop all scopes and use the generic cloud-platform scope instead. Either way, adding the cloudkms scope will solve this issue. Thanks and let me know if you have any questions!
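
PS: if you go the cloud-platform route, the flag collapses to just:

--scopes=https://www.googleapis.com/auth/cloud-platform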