Closed cirocosta closed 5 years ago
after talking to @cirocosta this morning, the steps that we should take are as follows:
@scottietremendous wdyt?
Extra things to think about (not for hush-house GA but might be useful):

- automatically run `terraform apply` whenever the `terraform` scripts are updated.

@YoussB
I tend to think we should just go with the repo for incident reporting, since that's basically what we do now with Wings.
On the second note, I think we should try out as many of the new features helm can provide us as possible. I think it's important we still use this as a place to experiment.
Decide between having the workloads on top of GKE or PKS. At the moment, deploying hush-house in PKS wouldn't give the team much more data than we'd get from GKE, where we already have it running; sticking with GKE allows us to not have to learn any details of PKS and just move directly to what we already have.
Apologies for acting as some random dude interjecting my opinion here, but I'd gently suggest reconsidering.
Thoughts:
1. The current deployment appears to use IaaS/GKE-specific implementation details for `web/worker.nodeSelector` and `loadBalancerIP`; if nothing else, it would be beneficial to remove as much IaaS-specific implementation as possible.
2. It seems like there's not just an opportunity to experiment with Concourse's stability on kubernetes, but the operationalization of its deployment.
3. Dogfooding the "environment promotion" workflow is beneficial because it helps drive out optimizations for declarative configuration that can be statically defined in a helm `values.yml` and not require an operator to go through "upgrade steps". The more things are operationalized such that configuration param keys & values can be set without requiring an operator to perform "in-between" steps, the better.
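As a concrete (hypothetical) illustration of that separation, the IaaS-specific bits could be isolated to one clearly-labeled block of a helm values file, so that switching IaaS means swapping only that block — key names below are illustrative, not the chart's actual schema:

```yaml
# Hypothetical values.yml: all IaaS/GKE-specific settings in one place.
web:
  nodeSelector:
    cloud.google.com/gke-nodepool: web-pool   # GKE-specific node label
  service:
    loadBalancerIP: 203.0.113.10              # pre-allocated static IP (example)
worker:
  nodeSelector:
    cloud.google.com/gke-nodepool: worker-pool
```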
thanks for listening to my dissertation 👍
Hey @aegershman,
thanks for the thoughtful comment. It makes a lot of sense 👍.
The point of keeping the environment running against GKE for now is that we have already run some tests against it and it has been hardened enough. We consider PKS a very important use case for our helm chart, but we wanted to run some tests against it first, and also document the steps needed to use it with PKS so that, as you said, other pivots can parrot it.
If nothing else, it would be beneficial to remove as much IaaS-specific implementation as possible. The current deployment appears to use IaaS/GKE-specific implementation details for the web/worker.nodeSelector and loadBalancerIP. It seems like there's not just an opportunity to experiment with Concourse's stability on kubernetes, but the operationalization of its deployment
^^ these are hush-house-deployment-specific params. We have already created, and are still creating, tests that run concourse against different deployment params, for instance: https://github.com/concourse/concourse/blob/master/topgun/k8s/baggageclaim_drivers_test.go#L66-L94. Please feel free to give us feedback around other stability tests that might be helpful in our case.
For the third point, there are currently two ways to run an experimental concourse environment using k8s:
I am not sure I got this point correctly, please tell me if it makes sense.
Hey @YoussB ,
In case we end up going with namespaced secrets as a way of leveraging k8s cred mgmt, we'd need this one tackled first: https://github.com/concourse/docs/issues/96, so that we can give a reference to the teams who end up consuming it.
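For context on what the consuming teams would see: with k8s credential management, a var like ((my-cred)) in a team's pipeline resolves against a secret living in that team's namespace. A minimal sketch, assuming a namespace prefix of `concourse-` and a team named `main` (both illustrative):

```yaml
# Hypothetical: the secret backing ((my-cred)) for team "main" when the
# web node is configured with a k8s secret namespace prefix of "concourse-".
apiVersion: v1
kind: Secret
metadata:
  name: my-cred
  namespace: concourse-main
type: Opaque
stringData:
  value: super-secret   # the "value" key is used for scalar vars
```

Documenting exactly this kind of object is what the docs issue above would give teams a reference for.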
Thanks!
makes sense :+1:
Hey,
Below is a list of items that we will need to complete/confirm before we can start moving/adding workloads onto hush-house:
deploying hush-house in PKS wouldn't give the team much more data than we'd get from GKE, where we already have it running, allowing us to not have to learn any details of PKS and just move directly to what we already have.

Thanks!
cc @scottietremendous