kbase / relation_engine

Repository containing the KBase Relation Engine API, specs, and other related files.
MIT License

Modify triggers and remove dockerdeploy in github workflow file #69

Open · jayrbolton opened this issue 3 years ago

jayrbolton commented 3 years ago

Most of our PRs are not updates to the API, so I don't think we need to automatically run this docker deploy code on every push to develop.

I like to explicitly run the scripts/docker_deploy command to update the image on Docker Hub when needed, and I find the control this provides very useful.

Also, if we run on the pull_request_target event, PRs opened from forks would be able to use the repository's secret tokens. We were also running the workflow twice on every PR, since it triggered on both push and pull_request. So I made it trigger on push only, for develop and master, the same as what we're doing in index_runner.
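Roughly, the trigger block I'm describing looks like this (a sketch of the on: section only, not necessarily the exact contents of the workflow file):

```yaml
# Sketch: run the workflow only on pushes to develop and master,
# instead of on every push plus every pull_request.
on:
  push:
    branches:
      - develop
      - master
```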

ialarmedalien commented 3 years ago

I disagree on dockerhub pushes -- whenever I use the docker image, I want the most recent version with the most up-to-date specs. I don't want to have to pull the latest specs from GitHub -- if I'm doing that, I may as well just build the image locally and not bother with the dockerhub version.

Having manual deploys means that there is no guarantee that the version on Dockerhub corresponds with the code at that version in the repo. This opens the door to accidental or deliberate sabotage of the image because it's no longer tied to the code in the repository.

It's also annoying to have to ask someone with the appropriate permissions to do the push to dockerhub, rather than having it happen automatically when the specs or API change.

jayrbolton commented 3 years ago

As the person who does most of the updating to Docker Hub and Rancher, I have not yet found the GitHub action useful and prefer running the script, which is a workflow similar to publishing a package in other systems. At a later point, if we find that we want it in the CI, we can add this workflow back in (which is why I left it in as comments rather than deleting it). I would prefer that we not have to remember to add an extra keyword to the commit message for the majority of PRs. It is also less code to maintain, as that action does have some complexity.

ialarmedalien commented 3 years ago

Rather than commenting out the workflow, a better approach would be to change the logic so that you have to include a keyword in the commit message if you want the build automatically deployed to Docker Hub.
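Something along these lines, where "[deploy]" is just an illustrative keyword and the job layout is a sketch rather than the actual workflow (it also assumes scripts/docker_deploy is a shell script):

```yaml
jobs:
  docker_deploy:
    runs-on: ubuntu-latest
    # Only push to Docker Hub when the commit message opts in with the
    # (illustrative) "[deploy]" keyword; credential handling is omitted here.
    if: contains(github.event.head_commit.message, '[deploy]')
    steps:
      - uses: actions/checkout@v2
      - name: Build and push the image
        run: bash scripts/docker_deploy
```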

Question: are you creating versioned docker images corresponding to each API version? I think if you're just interested in the API, it makes sense to have tagged versions released whenever the API is updated. However, if you also want the latest specs, it is more useful to have the bleeding edge docker image. In my use case, I want the whole bundle (specifically I want to use docker-compose to create an integrated setup of Arango + API with the latest specs).

I think we need to add some sort of versioning info to the specs (we have discussed it enough times!), so that both the API and the specs can be identified by version.

jayrbolton commented 3 years ago

In the actual environment deploys, we've been keeping separate semvers for the API vs the specs. In this case it's useful to have a separate version for the API, because it determines whether we need to do a service upgrade in Rancher (we don't need an upgrade on spec updates). It's a requirement of Shane's to not have to do a service upgrade in order to do a spec upgrade.

Another perspective is that specs are really part of the interface of the API as far as the user is concerned, so that should be one semver.

This is a pretty unusual system we have. For your use case, one answer would be to include both the API and spec versions in each image tag, such as "0.0.5-0.0.19". The Rancher deploys would only pay attention to the first semver, which is the API version (all specs are pulled from the release). Tests could lock to either version.

As for the workflow, a compromise that would actually work well for me is to only enable the dockerhub deploy workflow when pushing to master (which might be better called "deploy"). That way we keep some automatic-ness of it, and I get a more controllable deploy process.

jayrbolton commented 3 years ago

On further thought, I think it would make the most sense for the docker image to be versioned only for the API itself and not be responsible for caching specs. Tests that use the API image can set SPEC_RELEASE_URL. Other codebases that pull in the specs can use either the release or the repo URLs.
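As a rough sketch of what that looks like from the consumer side -- the image name, tag, and release URL here are illustrative, not the actual published values:

```yaml
# docker-compose sketch: API image versioned only for the API, with the
# specs pulled from a release via SPEC_RELEASE_URL.
version: '3'
services:
  arangodb:
    image: arangodb:3.7
    environment:
      - ARANGO_NO_AUTH=1
  api:
    image: kbase/relation_engine_api:0.0.5   # illustrative image name/tag
    depends_on:
      - arangodb
    environment:
      # Illustrative URL; point this at whichever spec release the tests need.
      - SPEC_RELEASE_URL=https://github.com/kbase/relation_engine/releases/download/0.0.19/spec.tar.gz
```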

ialarmedalien commented 3 years ago

> As for the workflow, a compromise that would actually work well for me is to only enable the dockerhub deploy workflow when pushing to master (which might be better called "deploy"). That way we keep some automatic-ness of it, and I get a more controllable deploy process.

This sounds good (including the branch renaming).

Related Q: is there currently any automation for creating spec releases, and what are the criteria for making a new release? Is it on every new commit to develop?

jayrbolton commented 3 years ago

The criterion for a spec release would be any modification to /spec. Right now a release can happen manually through the GitHub UI, or on a git tag push, or could happen in the future via a GitHub action.
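If we do go the GitHub action route, a minimal sketch could look like this (the tag pattern and the release step are assumptions, not a settled design):

```yaml
# Sketch: create a GitHub release whenever a tag is pushed.
name: Spec release
on:
  push:
    tags:
      - '*'
permissions:
  contents: write
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create a release for the pushed tag
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: gh release create "${GITHUB_REF#refs/tags/}" --notes "Spec release"
```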

Using a git tag might be confusing, as the semver of the spec will be different from that of the API.