Closed tomdavidson closed 6 years ago
So this is an ongoing conversation in the K8s community.
We have discussed a plugin with kops and helm ... a post-install plugin that would allow for helm installs.
@chrislovecnm thanks for the reply. I don't think kops needs to do my helm installs for me; that can be a post-kops job/task. I'm looking at kops as having a philosophy of "only" the k8s cluster and performing that scope perfectly.
What are you using for CI?
I'm a fan of GitLab CI / Deploy, but I think we can be k8s- and kops-specific while staying CI/CD-tool generic. We also use CircleCI, Travis, CodeShip, and last week a team started using AWS CodeDeploy.
What are your reqs?
As with a microservices architecture (MSA), a Kubernetes cluster has many moving parts: masters, nodes, schedulers, pods, ReplicaSets, services, labels, selectors, proxies, kubelets, cAdvisor monitoring containers, and so on. Just like MSA, separation of concerns is a key design principle, but one that comes with a complexity cost, especially compared to ECS and Swarm. Essentially, I want to treat the k8s cluster like any other MSA app/product to reap the confidence and agility that come with continuous delivery.
I'm a newb to k8s, but it seems that kops goes leaps and bounds to mitigate some of those complexity costs, and I am interested in using kops in my pipelines but am open to other options (much less open to CloudFormation-based ones). The k8s cluster will be one component of a monorepo that also includes several Helm installs and a particular VPC config via Terraform. I intended the discussion to focus on the kops context but am open to feedback everywhere.
I typically use four pipelines:
Each pipeline has:
I would like to start a change in a git branch: change the kops config, maybe the instance type, CoreOS channel, etc.:
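For example, the kind of change I mean might look like an edit to a versioned kops InstanceGroup manifest (a sketch; the cluster name, instance types, and image alias below are all illustrative):

```yaml
# Hypothetical kops InstanceGroup manifest kept in git; this branch changes
# machineType and the CoreOS image. All names/values are illustrative.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: example.k8s.local
spec:
  role: Node
  machineType: m4.large                          # changed from t2.medium in this branch
  minSize: 3
  maxSize: 5
  image: coreos.com/CoreOS-stable-1409.7.0-hvm   # stable-channel image alias
```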
The change is committed to the master/integration branch, which ends up deployed to the long-lived stage env where other staged apps are deployed.
A successful integration pipeline results in a release candidate that is deployed to a canary env and finally to production:
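In GitLab CI terms, the branch → integration → canary → production flow above might be sketched like this (a sketch only; the stage names, cluster names, and state-store bucket are assumptions, not anything kops or GitLab prescribes):

```yaml
# Hypothetical .gitlab-ci.yml sketch of the flow described above.
stages:
  - validate
  - integration
  - canary
  - production

validate:
  stage: validate
  script:
    # Dry-run the cluster change on every branch: no --yes means plan only.
    - kops replace -f cluster/ --state s3://example-kops-state --force
    - kops update cluster stage.k8s.local --state s3://example-kops-state

deploy-stage:
  stage: integration
  only: [master]
  script:
    - kops update cluster stage.k8s.local --state s3://example-kops-state --yes
    - kops rolling-update cluster stage.k8s.local --state s3://example-kops-state --yes

deploy-canary:
  stage: canary
  only: [tags]
  script:
    - kops update cluster canary.k8s.local --state s3://example-kops-state --yes

deploy-production:
  stage: production
  only: [tags]
  when: manual
  script:
    - kops update cluster prod.k8s.local --state s3://example-kops-state --yes
```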
/area addon-manager
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
I want to deliver our k8s clusters with a pipeline. Each cluster will start by forking the starter repo. Pre-kops jobs will include a Terraform plan for our VPC config. Post-kops jobs will install some k8s add-ons and Helm charts.
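One run of such a pipeline might execute something like the job below (a sketch; every path, bucket, cluster name, and chart reference is an assumption for illustration):

```yaml
# Hypothetical job sequence for one cluster repo: pre-kops Terraform,
# kops reconcile, post-kops add-ons and charts. All names are illustrative.
deliver-cluster:
  script:
    # Pre-kops: plan and apply the VPC the cluster will land in.
    - terraform init vpc/
    - terraform plan -out=vpc.plan vpc/
    - terraform apply vpc.plan
    # kops: reconcile the versioned cluster spec against the state store.
    - kops replace -f cluster/ --state s3://example-kops-state --force
    - kops update cluster example.k8s.local --state s3://example-kops-state --yes
    # Post-kops: add-ons and Helm charts.
    - kubectl apply -f addons/
    - helm upgrade --install ingress stable/nginx-ingress
```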
Can anyone share experiences with commit testing, acceptance testing, vendoring and upgrading in a pipeline context?