BanzaiMan closed this issue 5 years ago.
Looking at google-cloud-ruby docs, it appears that it employs a completely different mechanism to authenticate, namely, it uses a key file. It is not impossible, but probably hard and unreliable long-term, to reconstruct this file from bits of information that the user provides. So it is probably going to require a credential file (encrypted, for sure).
So I'm the user. We can provide a key file, no problem; we already have one encrypted in the project. So how do I attach said credential file to the deploy in the .travis.yml file?
@kedaly The current code is badly out of date, and it'll have to be rewritten to work with the new authentication scheme.
@BanzaiMan I've worked around it with docker containers, so no rush :)
+#!/usr/bin/env bash
+
+# Deploy files to GCE
+# usage: GCEDeploy.sh <bucket-name> <key-file> versions...
+
+#set -e -u -x
+
+function runDockerCopy {
+ #let's push to gcbuckets for latest
+ echo $1
+ docker run --rm -i --volumes-from gcloud-config \
+ google/cloud-sdk \
+ $1
+}
+
+function usage {
+ echo "usage: GCEDeploy.sh <bucket-name> <key-file> versions..."
+}
+
+SCRIPT_DIR=$( cd -P "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
+
+#validate we have a bucket name
+if [ -z "${1+z}" ];then
+ echo "You must specify a bucket"
+ usage
+ exit 1
+fi
+
+#validate we have a key file
+if [ -z "${2+z}" ];then
+ echo "You must specify a key file"
+ usage
+ exit 1
+fi
+
+BUCKET=$1
+KEYFILE=$2
+
+echo "Uploading to bucket $BUCKET with keyfile $KEYFILE"
+
+
+#check to see if we have a gcloud-config container running
+IS_CONTAINER_RUNNING="$(docker ps -a)"
+#echo $IS_CONTAINER_RUNNING
+if [[ $IS_CONTAINER_RUNNING == *"gcloud-config"* ]]; then
+ echo "Deleting old gcloud config"
+ docker rm gcloud-config
+fi
+
+#Authenticate with GCE
+docker run -i -v "$KEYFILE":/gce-keyfile.key -v /tmp:/tmp --name gcloud-config google/cloud-sdk gcloud auth \
+activate-service-account --key-file /gce-keyfile.key --project <your project here>
+
+
+if (($? > 0));then
+ echo "GCLOUD Authorization failure"
+ exit 1
+fi
+
+#let's build the installer
+source $SCRIPT_DIR/buildinstaller.sh
+cp $SCRIPT_DIR/../etc/gce-install/installer.sh /tmp/installer.sh
+
+#read the input parameters
+FIRSTVER=""
+POSITION=0
+for VER in "$@"
+do
+ POSITION=$((POSITION+1))
+
+ #DEPLOY ONLY IF WE HAVE PASSED THE FIRST 2 PARAMETERS
+ if ((POSITION > 2));then
+ echo "Deploying $VER"
+ if [ -z "$FIRSTVER" ];then
+ FIRSTVER="$VER";
+ runDockerCopy "gsutil cp /tmp/install.sh gs://$BUCKET/install-$FIRSTVER.sh"
+ runDockerCopy "gsutil cp /tmp/install.sh.md5 gs://$BUCKET/install-$FIRSTVER.sh.md5"
+ runDockerCopy "gsutil cp /tmp/installer.sh gs://$BUCKET/installer-$FIRSTVER.sh"
+ else
+ runDockerCopy "gsutil cp gs://$BUCKET/install-$FIRSTVER.sh gs://$BUCKET/install-$VER.sh"
+ runDockerCopy "gsutil cp gs://$BUCKET/install-$FIRSTVER.sh.md5 gs://$BUCKET/install-$VER.sh.md5"
+ runDockerCopy "gsutil cp gs://$BUCKET/installer-$FIRSTVER.sh gs://$BUCKET/installer-$VER.sh"
+ fi
+ fi
+done
+
+#destroy the docker login container
+docker rm gcloud-config
I've started working on the new google-cloud-storage-based DPL code in the gcs-ng branch. There are some significant API changes, however, so it will need some additional testing and user input.
Based on https://github.com/GoogleCloudPlatform/google-cloud-ruby#cloud-storage-ga, I believe that the credentials will have to be stored in a JSON file, not as an API key (as was the case before). This most likely will require encrypting the file. And, if you are not careful, the decrypted file may end up in the bucket.
Thanks for contributing to this issue. As it has been 90 days since the last activity, we are automatically closing the issue. This is often because the request was already solved in some way and it just wasn't updated or it's no longer applicable. If that's not the case, please do feel free to either reopen this issue or open a new one. We'll gladly take a look again! You can read more here: https://blog.travis-ci.com/2018-03-09-closing-old-issues
@BanzaiMan - you mentioned above a new branch/testing, is that ready to be used in a beta env?
@JamieSinn No, it is not. Sorry.
Let me know if/when it is. Until then we'll be unfortunately moving our CI/CD off of Travis :(
the Service Account credentials can be specified by providing the path to the JSON file, or the JSON itself, in environment variables.
In my experiments, "the JSON itself" does not work. What does work is the data structure that is the result of JSON.load(json_data).
At any rate, I looked at the content of the JSON file, and it is rather complex (and long). Specifying it on the command line as a JSON string is going to be unrealistic, and the "environment variable" is just the path to a JSON file, so the JSON file remains our only reasonable option for authentication.
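A minimal stdlib-only Ruby sketch of that distinction (the service-account fields and values below are made-up stand-ins; a real credentials file carries many more entries, such as private_key and token_uri):

```ruby
require "json"

# Made-up, abbreviated service-account JSON; a real file has more
# fields (private_key, token_uri, ...) and real values.
json_data = <<~JSON
  {
    "type": "service_account",
    "project_id": "example-project",
    "client_email": "deployer@example-project.iam.gserviceaccount.com"
  }
JSON

# Passing the raw string ("the JSON itself") reportedly fails;
# what works is the parsed Hash, i.e. the result of JSON.load:
creds = JSON.load(json_data)
puts creds["type"]        # => "service_account"
puts creds["project_id"]  # => "example-project"
```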
It's ready for testing.
deploy:
provider: gcs
edge:
branch: gcs-ng
project_id: YOUR_GCS_PROJECT_NAME
credentials: PATH_TO_YOUR_JSON_CREDS_FILE
bucket: BUCKET_NAME
local_dir: DIRECTORY
skip_cleanup: true
As I indicated before, it is recommended that the credentials file be encrypted:
$ travis encrypt-file gcs-credentials.json
If your repository already has another file encrypted, it is important to read about encrypting multiple files.
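For background, travis encrypt-file encrypts the file with AES-256-CBC and emits a matching openssl decrypt command for the build. The round trip can be sketched with Ruby's stdlib OpenSSL; the key and IV here are generated locally as stand-ins for the $encrypted_*_key / $encrypted_*_iv values Travis stores for the repository:

```ruby
require "openssl"

plaintext = %({"type":"service_account","project_id":"example-project"})

# Encrypt, roughly what `travis encrypt-file gcs-credentials.json` does
cipher = OpenSSL::Cipher.new("aes-256-cbc")
cipher.encrypt
key = cipher.random_key  # stand-in for the repository's $encrypted_*_key
iv  = cipher.random_iv   # stand-in for the repository's $encrypted_*_iv
encrypted = cipher.update(plaintext) + cipher.final

# Decrypt, roughly what the generated `openssl ... -d` build step does
decipher = OpenSSL::Cipher.new("aes-256-cbc")
decipher.decrypt
decipher.key = key
decipher.iv  = iv
restored = decipher.update(encrypted) + decipher.final

puts restored == plaintext  # => true
```

Note the caveat above: the decrypted gcs-credentials.json exists in the build directory during deploy, so make sure it is excluded from what you upload, or it may end up in the bucket.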
@JamieSinn Paging you, in case you are still interested.
Let me know what you need tested and I'll run it
Anything you have GCS for. The config for it is shown above.
Will be testing in semi-prod this week.
@BanzaiMan - trying this, I got this error:
/home/travis/.rvm/gems/ruby-2.4.1/gems/google-api-client-0.25.0/lib/google/apis/core/http_command.rb:228:in `check_status': forbidden: [SERVICE-ACCOUNT-NAME]@[REDACTED].iam.gserviceaccount.com does not have resourcemanager.projects.get access to project [REDACTED]. (Google::Apis::ClientError)
@aviadatsnyk The error message suggests your credentials are wrong.
Thanks @BanzaiMan. This is a service account with permission to create objects in my target bucket. Do you have a definition of the needed privileges, or a way for me to find these?
The current gstore-backed provider is still functional, so replacing it with this new implementation will be problematic.
What do you mean?
@aviadatsnyk My comment was not in response to your question.
I meant that the current gcs provider, as documented in https://docs.travis-ci.com/user/deployment/gcs/, is reported to still function. So replacing it with this is going to break every existing deployment. Therefore, we will need either a deprecation strategy and/or a new name for this provider.
@aviadatsnyk As for the privileges, my understanding is that the service account needs "Storage Admin" role assigned in order to find the bucket. https://github.com/travis-ci/dpl/pull/916/files#diff-04c6e90faac2675aa89e2176d2eec7d8R604
After some deliberation, I think we will:
1. keep gcs as is, but add a deprecation warning
2. add a new googlecloudstorage provider using Google's API client
I'll add a comment when this happens in the PR.
Thanks for your work on this!
One thing I noticed is a huge time difference between linux and osx during this step:
rvm $(travis_internal_ruby) --fuzzy do ruby -S gem install $TRAVIS_BUILD_DIR/dpl-*.gem --pre
It seems to take ~4s on osx, but ~230s on linux. Linux also prints out
invalid options: -SHN
(invalid options are ignored)
but I don't know if it's related in any way to the runtime difference.
Oh, and as an addendum, trying to use this improved provider on Windows hangs so long on
ruby -S gem install $TRAVIS_BUILD_DIR/dpl-*.gem --pre
that Travis kills the build due to receiving no output for 10m.
For now I have a fairly trivial workaround to this issue by just manually installing the GCS gem
travis_wait gem install google-cloud-storage --no-rdoc --no-ri
then using the script deployment provider, which just invokes a Ruby script to upload instead and doesn't hit any timeout issues on Windows:
require "google/cloud/storage"
storage = Google::Cloud::Storage.new(
project_id: "...",
credentials: "blah.json"
)
bucket = storage.bucket "bucket-name"
bucket.create_file "deploy/#{ENV['TARGET_OS_NAME']}/#{ENV['TRAVIS_COMMIT']}.tar.gz",
"#{ENV['TARGET_OS_NAME']}/#{ENV['TRAVIS_COMMIT']}.tar.gz"
What is the current protocol for deploying to GCS? Do I need a credential file? My application says its being deployed but is not in GCS, and I would like to be able to narrow down why.
I'm going to close this issue because master (dpl v2) has an implementation that now uses the google-cloud-sdk Python CLI, addressing the original issue about logging activity.
This can be tested using:
deploy:
provider: gcs
edge:
branch: master
⋮
If any of the other issues mentioned on this ticket persist then please open new tickets for those specific issues.
The gcs provider does not log any meaningful activity: what file is being uploaded, or what errors, if any, are encountered while uploading.
See, for example, https://travis-ci.org/cotsog/travis_ci_prod_regular/builds/356889170 (which uses bogus credentials to upload to a nonexistent bucket).
Furthermore, it uses a deprecated gem, so it is best to rewrite it with https://github.com/GoogleCloudPlatform/google-cloud-ruby.