As an organisation we are aiming to eliminate every service account key and use purely OIDC (Workload Identity Federation) between GCP (in our case) and GitHub Actions.
There is just one thing that still seems to require a service account key: authenticating service containers (unless, hopefully, you can tell me differently :) )
In this specific case we have a test container (Elasticsearch) in a private registry (it contains business data), populated with test data, which we want to use in test workflows across several repos.
So the most efficient way to do this would be to fire it up as a service container in the appropriate jobs.
E.g.
name: Tests
on: push

permissions:
  contents: 'read'
  id-token: 'write'

jobs:
  run-some-tests:
    runs-on: ubuntu-latest
    services:
      elastic:
        image: region-docker.pkg.dev/our-project/some-private-registry/test-data-image:X.Y.Z
        env:
          xpack.security.enabled: false
          discovery.type: single-node
        ports:
          - 9200:9200
          - 9300:9300
    steps:
      - name: Checkout the code
        uses: actions/checkout@v3
      - name: Tests
        run: go test ./...  # or whatever
According to the service container documentation, essentially the only option we have is to provide a username/password, whereas under normal circumstances we would obtain an access token via google-github-actions/auth.
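For reference, the only documented route I can see is the job-level `credentials` field, which takes a static username/password. For Artifact Registry that means the `_json_key` username plus a full service account key JSON as the password, i.e. exactly the kind of long-lived key we want to get rid of (the `GCP_SA_KEY` secret name here is just illustrative):

```yaml
services:
  elastic:
    image: region-docker.pkg.dev/our-project/some-private-registry/test-data-image:X.Y.Z
    credentials:
      # _json_key is Artifact Registry's username for key-file auth;
      # the secret would hold the entire service account key JSON
      username: _json_key
      password: ${{ secrets.GCP_SA_KEY }}
```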
I can't really see a way currently to feed credentials back from a step that generated them, because the service container starts before the job's steps run, right?
We can of course solve the problem in a more verbose way, but it makes the job longer and skips all the nice functionality of service containers.
To do it the verbose way without service containers would be:
name: Test Data
on: push

permissions:
  contents: 'read'
  id-token: 'write'

jobs:
  run-some-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the code
        uses: actions/checkout@v3
      - id: auth
        name: 'Authenticate to Google Cloud'
        uses: 'google-github-actions/auth@v0'
        with:
          token_format: access_token
          workload_identity_provider: projects/1234567890/locations/global/workloadIdentityPools/github/providers/github
          service_account: some-account@some-project.iam.gserviceaccount.com
      - uses: docker/login-action@v1
        with:
          registry: region-docker.pkg.dev
          username: oauth2accesstoken
          password: ${{ steps.auth.outputs.access_token }}
      - name: Start elastic
        run: |
          docker run -d -p 9200:9200 \
            -e xpack.security.enabled=false \
            -e discovery.type=single-node \
            region-docker.pkg.dev/our-project/some-private-registry/test-data-image:X.Y.Z
      - name: Wait for elastic to be ready
        run: |
          # some while loop or whatever
      - name: Tests
        run: go test ./...  # or whatever
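The "wait for elastic ready" step would be something like this, a minimal sketch assuming curl is on the runner and Elasticsearch is published on localhost:9200 (the `wait_for_http` function name is just mine):

```shell
# Poll an HTTP endpoint until it answers, or give up after a number of tries.
wait_for_http() {
  url="$1"
  tries="${2:-30}"   # default: 30 attempts, 2s apart (~60s)
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "timed out waiting for $url" >&2
  return 1
}
```

The wait step then just calls `wait_for_http http://localhost:9200/_cluster/health` before the tests run.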