The PR includes cloudbuild files to build and push the image via a manual trigger (example), and also includes a shell script that should be called from the autopush build (cc @hqpho).
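For anyone who wants to kick off a build by hand, something like this should work (the config path and project below are my assumptions for illustration, not necessarily the actual file names in this PR):

```sh
# Hypothetical manual invocation; the cloudbuild config path and project
# are assumptions, not the actual names used in this PR.
gcloud builds submit \
  --config=build/cloudbuild.yaml \
  --project=datcom-ci \
  .
```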
The Dockerfile does a multi-stage build for faster builds and smaller final images.
We should adopt this pattern for the services build as well.
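For illustration, the pattern looks roughly like this; the base images, paths, and entrypoint below are made up, not the actual Dockerfile contents:

```Dockerfile
# Illustrative multi-stage pattern only; base images, paths, and
# entrypoint are assumptions, not the actual Dockerfile in this PR.

# Stage 1: build tools and dependencies live only in this stage,
# so they never ship in the final image.
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Stage 2: the runtime image copies only the built artifacts, which
# keeps the final image small and lets build layers cache independently.
FROM python:3.11-slim
COPY --from=builder /install /usr/local
COPY . /app
WORKDIR /app
ENTRYPOINT ["python", "run.py"]
```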
Verified by building and running both locally and in the cloud.
Here's one I set up as a Cloud Run job for the dc-dev data.
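Setting that up looks roughly like the following sketch (the job name, region, and project are my assumptions, not the actual dc-dev config):

```sh
# Hypothetical Cloud Run job setup; job name, region, and project
# are assumptions.
gcloud run jobs create dc-dev-data-load \
  --image=gcr.io/datcom-ci/datacommons-data:latest \
  --region=us-central1 \
  --project=datcom-ci

# Then execute it on demand:
gcloud run jobs execute dc-dev-data-load --region=us-central1
```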
To run it locally, use one of the following commands, similar to the services docker, depending on whether the data lives in GCS or on the local filesystem (the same env.list file will work for both the services and data dockers):
```sh
# For GCS data:
docker run \
  --env-file env.list \
  -e GOOGLE_APPLICATION_CREDENTIALS=/gcp/creds.json \
  -v $HOME/.config/gcloud/application_default_credentials.json:/gcp/creds.json:ro \
  gcr.io/datcom-ci/datacommons-data:latest

# For local data:
docker run \
  --env-file env.list \
  -v /path/to/data:/path/to/data \
  gcr.io/datcom-ci/datacommons-data:latest
```
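For reference, env.list is a plain KEY=VALUE file (one pair per line, no quoting or `export`), which is the format `--env-file` expects; the variable names below are placeholders, not necessarily what the images actually read:

```sh
# Illustrative env.list only; these variable names are placeholders,
# not the actual variables the services/data images consume.
DC_API_KEY=your-api-key
INPUT_DIR=/path/to/data
OUTPUT_DIR=/path/to/output
```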
@kmoscoe - we can discuss documentation for this when we meet next.