Remove docker requirements
There are still some hardcoded docker commands in the code (network plugins, etcd, node role, ...). One of kubespray's goals is to "Deploy a Production Ready Kubernetes Cluster", so it should NOT have a container engine capable of building new container images by default, for security purposes. Containerd would be a more secure default setting. In order to make that transition, we need to use `crictl` where `docker` is used today.
I'm all in for that. A PR was raised a long time ago to set containerd as the default runtime (but was dropped as too much work and too much breaking change). That would allow us to get rid of a lot of docker default commands and at the same time move toward something more CRI-oriented.
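For illustration, the per-inventory switch already exists today; a minimal sketch, assuming the current `container_manager` and `etcd_deployment_type` variables keep their meaning:

```yaml
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml (sketch)
# Use containerd instead of the Docker default:
container_manager: containerd
# etcd can then no longer run inside Docker containers:
etcd_deployment_type: host
```

Flipping these defaults is the easy part; replacing the remaining hardcoded `docker` invocations with `crictl` equivalents is where the work is.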
RELEASE.md says:
Kubespray doesn't follow semver. [...] Breaking changes, if any introduced by changed defaults or non-contrib ansible roles' playbooks, shall be described in the release notes.
AFAIK we already did non-backwards-compatible changes in the v2.x of Kubespray (when moving to kubeadm for instance). The "production ready" part is a lot about providing a path for people to move from v2.X to v2.(X+1).
What I'm saying is that we can do breaking changes (like changing default container engine) as long as they are accepted by the community and well documented.
@EppO I thought non-kubeadm was removed in #3811; are there some other things that need clean-up? kubeadm is the only supported deployment method since v2.9.
For the GitLab CI `rules:` and `only:changes`, last I checked GitLab CI (via Failfast) is unaware of the target branch, and therefore doesn't know what to compare against; the fallback mechanism explained here is problematic for PRs with multiple commits.
Another area to consider is that Prow has support for such features (see `run_if_changed` in https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md)
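For context, this is the kind of change-based filtering being discussed; a hypothetical job (name and paths invented) using GitLab's `rules:`/`changes:` syntax. The caveat mentioned above: outside a merge-request pipeline, GitLab compares against the previous push, so `changes` can match too much or too little for multi-commit PRs.

```yaml
# .gitlab-ci.yml (sketch); job name and paths are illustrative only
network-plugin-tests:
  stage: deploy-part2
  script:
    - tests/scripts/testcases_run.sh
  rules:
    - changes:
        - roles/network_plugin/**/*
        - roles/kubernetes-apps/network_plugin/**/*
```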
For conformance tests, there is `sonobuoy_enabled: true` available and I think it's enabled on 2 CI jobs currently: config and output
@MarkusTeufelberger has some very valuable input on role design and molecule, raised a couple of issues around it. Examples: #4622 #3961
RELEASE.md says:
Kubespray doesn't follow semver. [...] Breaking changes, if any introduced by changed defaults or non-contrib ansible roles' playbooks, shall be described in the release notes.
AFAIK we already did non-backwards-compatible changes in the v2.x of Kubespray (when moving to kubeadm for instance). The "production ready" part is a lot about providing a path for people to move from v2.X to v2.(X+1).
Good to know. I was more worried about end-users that may not know this and end up breaking some production clusters while trying to upgrade, hence a 3.0 proposal that is more explicit on that kind of breaking change.
@EppO I thought non-kubeadm was removed in #3811; are there some other things that need clean-up? kubeadm is the only supported deployment method since v2.9.
I missed it because I didn't change my inventory for a while and some deprecated options are still there. I think it would be beneficial for end-users as well to list deprecated inventory options for each release. I guess I'm not the only one with some old settings :)
For the GitLab CI `rules:` and `only:changes`, last I checked GitLab CI (via Failfast) is unaware of the target branch, and therefore doesn't know what to compare against; the fallback mechanism explained here is problematic for PRs with multiple commits. Another area to consider is that Prow has support for such features (see `run_if_changed` in https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md)
I hear you. We can't use pipelines for merge requests because we don't create the merge request in GitLab, so that's a dead end. But I'm convinced we should architect the CI around better change detection to get a quicker feedback loop; if Prow is an option, we should look at it.
For conformance tests, there is `sonobuoy_enabled: true` available and I think it's enabled on 2 CI jobs currently: config and output
I guess we have some work to do in that area then :)
The maximum supported Kubernetes version is 1.16.99, but the server version is v1.18.5. Sonobuoy will continue but unexpected results may occur.
Ideally we should run conformance tests regularly to test various setup combinations and not wait until release time to run the full conformance suite. That's why I was suggesting separating them from the install/upgrade use cases.
`etcd_kubeadm_enabled: false`
What about etcd? Should we change that default to true? It makes etcd upgrades impossible outside of Kubernetes upgrades; kubeadm still doesn't support upgrading etcd without the Kubernetes components AFAIK.
- Add CI job to test scale playbook
I also thought about that; scale and remove need some love from CI
Flip default of var kubeadm_control_plane to true and remove "experimental" from code?
`etcd_kubeadm_enabled: true` makes all etcdctl-related use cases stop working. There is also no backup procedure with kubeadm-managed etcd; I started looking at it
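To illustrate the gap, a backup would boil down to something like the task below. The certificate paths are kubeadm's defaults; the task itself is a sketch, not an existing kubespray role, and assumes `etcdctl` is available on the node:

```yaml
# Sketch: snapshot a kubeadm-managed etcd from the first control plane node
- name: Snapshot kubeadm-managed etcd
  command: >-
    etcdctl snapshot save /var/backups/etcd-snapshot.db
    --endpoints=https://127.0.0.1:2379
    --cacert=/etc/kubernetes/pki/etcd/ca.crt
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  environment:
    ETCDCTL_API: "3"
  become: true
  delegate_to: "{{ groups['kube-master'][0] }}"
```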
Flip default of var kubeadm_control_plane to true and remove "experimental" from code?
That's actually what I was referring to with "Drop non-kubeadm deployment", but I mixed two different use cases: since 2.9 kubespray always uses kubeadm to provision the cluster, but it doesn't use kubeadm join on the non-first control plane nodes by default (just another run of kubeadm init). I think the join model is the right way forward.
Personally I'd like to drop a few features that are relatively exotic or easy to work around/implement yourself such as downloading binaries and rsync'ing them around instead of just fetching them on each node. This could really simplify the `download` role.
Another bigger architectural change could be to change kubespray into a collection (maybe even adding some roles to https://github.com/ansible-collections/community.kubernetes eventually and/or using them here?) and in general switching to Ansible 2.10.
Personally I'd like to drop a few features that are relatively exotic or easy to work around/implement yourself such as downloading binaries and rsync'ing them around instead of just fetching them on each node. This could really simplify the `download` role.
I'd prefer to rely on the distro package manager when applicable instead of downloading all the stuff, but if you have a better design for the download role, feel free to submit a PR.
Another bigger architectural change could be to change kubespray into a collection (maybe even adding some roles to https://github.com/ansible-collections/community.kubernetes eventually and/or using them here?) and in general switching to Ansible 2.10.
Ansible 2.10 is not released yet and we need to be careful about what Ansible version is available on each supported distro. Regarding the usage of kubespray, I know @Miouge1 wanted to promote the container image use case, where you build your own custom image with your inventory and custom playbooks. That definitely makes sense in a CI pipeline.
Reducing scope and configurability of Kubespray would be nice. List of features that could be removed:
The more I think about it, the more I'm convinced kubespray should only provision kubernetes clusters on top of kubeadm, so we should only support the following 2 use cases on the etcd front:
- BYO etcd (either by using etcdadm or other means, out of scope of kubespray)
- etcd managed by kubeadm
That means removing the `etcd_deployment_type` mode kubespray supports today. We would still test the BYO etcd use case in the CI though.
The more I think about it, the more I'm convinced kubespray should only provision kubernetes clusters on top of kubeadm, so we should only support the following 2 use cases on the etcd front:
- BYO etcd (either by using etcdadm or other means, out of scope of kubespray)
- etcd managed by kubeadm
That means removing the `etcd_deployment_type` mode kubespray supports today. We would still test the BYO etcd use case in the CI though.
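Concretely, the remaining etcd configuration surface could be as small as this sketch; the first variable exists today, the second one is an assumption about how BYO endpoints might be passed in:

```yaml
# Option 1: etcd managed by kubeadm (static pods on the control plane)
etcd_kubeadm_enabled: true

# Option 2: BYO etcd provisioned outside kubespray, e.g. with etcdadm
# (variable name hypothetical):
# etcd_kubeadm_enabled: false
# external_etcd_endpoints: "https://etcd-1.example.com:2379,https://etcd-2.example.com:2379"
```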
We could formulate the same in some kind of design statement about how Kubespray embraces, uses and extends kubeadm, rather than working around it.
We need to address technical debt. The code-base is wide, and some areas are old and not maintained. I'd like to take the opportunity of the next major release to trim the code-base as much as possible and make the CI more agile to get quicker feedback.
Helm 3.x was released since Kubespray 2.x. It no longer requires a tiller pod and is integrated into k8s RBAC. I think it would be better for Kubespray to refocus on its core competency: deploying production Kubernetes. We can include the most widely used plugins (CNI/CSI) in this. But apps that have a decent helm chart should now be deployed using that. Helm vs Ansible for deploying apps to Kubernetes is a no-brainer. Thanks to its state, Helm is truly declarative; Ansible is not. For example, uninstall a helm release and your app is removed from k8s; undefine an addon in Kubespray (e.g. cert_manager_enabled=false), and it remains. Most helm charts are better maintained than the addons in this project. I get the desire for Kubespray to be a one-stop-shop, so we could either replace the addons with simple readme guidance explaining how to install the former addons using helm, or if workable could install the helm client and version-pinned helm charts using Kubespray.
Would significantly simplify this project and the maintenance burden.
I think we are very close to being able to use kubeadm-managed etcd as the default. What do you think about that?
Maybe we could deal with Helm apps in a separate GitHub project?
This project would only focus on:
Some attached CI would not require a kubespray deployment: only an inventory plus any Kubernetes cluster should be enough. This would save people from rewriting their own helm addons playbooks and roles.
EDIT: first mentioned dashboard as a helm chart; bad example, this is plain yaml, I removed it. Btw we may think about setting the dashboard out of kubespray scope in favor of Helm :) EDIT 2: after searching a bit it seems there is no helm chart for dashboard
Ansible 2.10 is not released yet and we need to be careful on what ansible version is available on each supported distros.
it's been released for a few months and it'd be wonderful if we could have kubespray as an ansible collection
I think it would be better for Kubespray to refocus on its core competency: deploying production Kubernetes.
I'm wondering if everyone has the same view on what is a "production Kubernetes". The way I see it, people expect to get the following (ordered by "minimum" to "maximum" expectations):
1. "Post-kubeadm", basically just containerd, kubelet, etcd, apiserver and scheduler.
2. CNIs.
3. "Load-balancer" providers. Notice that Kubernetes is slowly moving towards out-of-tree cloud providers, so these can also be deployed from `kubectl` or via Helm Charts. I perceive MetalLB a "bare-metal cloud provider".
4. Storage providers, e.g., native, Ceph, Rook, local storage, etc.
5. Scripts to deploy secure infrastructure with right firewall rules and right cloud provider roles for the things above.
6. Log forwarders, e.g., fluentd.
7. Ingress controllers, e.g., nginx, Ambassador, Traefik
8. Cert-manager
9. External DNS
10. Security hardening: Falco. Note that the preferred way to deploy Falco is directly on the host […] the `kube-system` namespace to be labeled. It feels more "natural" to me to have this labeling done in the lower layers.

I am split on where to put the demarcation of "production Kubernetes". On one hand, it would be nice to have kubespray be the one-stop-shop for all of the above. Maintainability can be sustained either by regularly rendering Helm Charts inside an Ansible `files` folder or delegating to Helm directly. In the latter case, Helm could either be installed on the controlling machine, the Kubernetes master or launched as a Pod via `kubectl`.
On the other hand, I do agree that kubespray needs to stay focused, reduce CI/CD pipeline response time and maintenance burden.
Is there a shared view within the kubespray community on what a "production Kubernetes" is?
Hopefully a "helper" question: what is a Kubernetes addon that should be managed by the Kubernetes control plane, vs. what is an app on top of Kubernetes?
I would also add monitoring (Prometheus) and tracing (Jæger) to number 6 in your list by the way as well as some log viewing/analyzing stack (Loki or Kibana). Probably also some CD mechanism like flux (https://toolkit.fluxcd.io/) to not mess with deploying Kubernetes state via Ansible.
Another feature of "production" is likely updating/upgrading/adding/removing each of these components in a way that keeps the actual workload of the cluster as unaffected as possible. A lot of these things are a mix of programs that run in the cluster itself and stuff that wants to be installed on the host (often even without providing proper packages or repositories upstream, only statically compiled golang binaries). This might also need some design/solution on Kubespray side.
You could make the case to increase scope infinitely. Perhaps in the v3 timeframe, the goal should be to support the status quo using helm3. Convert the existing addons to helm releases. In terms of code, yes, install helm on the controller machine like @cristiklein suggests, and then for each addon either store a values.yaml or create one at runtime from a template, and run "helm upgrade --install" in an ansible task to deploy it. We can't currently remove an addon @MarkusTeufelberger, so again I'd suggest this should be out-of-scope for v3 (although we can document how to uninstall helm charts).
That in itself would be a massive reduction of complexity. The extra apps would become declarative: our content for each one would be little more than a values file. The helm chart maintainers would do the heavy lifting.
I think the scope discussion can be had at a later stage. In any case, it's irrelevant for companies like us who will continue to use helm, not Kubespray, for anything that has a helm chart. For us, Kubespray is one stage in a pipeline, so we'd prefer if all the "extras", including installing the helm binary on the controller machine, were kept optional.
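As a sketch of the "helm upgrade --install in an Ansible task" pattern described above (chart, repo and paths are only examples):

```yaml
# Render a values file from inventory variables, then let Helm do the rest.
- name: Template values for the cert-manager addon
  template:
    src: cert-manager-values.yml.j2
    dest: "{{ kube_config_dir }}/addons/cert-manager-values.yml"

- name: Install or upgrade cert-manager from its upstream chart
  command: >-
    helm upgrade --install cert-manager jetstack/cert-manager
    --namespace cert-manager --create-namespace
    --values {{ kube_config_dir }}/addons/cert-manager-values.yml
```

Uninstalling would then be a documented `helm uninstall` away rather than kubespray's responsibility.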
Anyone that has a bit of time, kubespray helm integration can start today ;)
@EppO my preferred way to consume Kubespray is to use its Docker image. It makes it easy to reproduce and manage dependencies. I simply mount the inventory with `docker run -v $PWD/inventory:/inventory.ini` rather than baking it into the image. Wayyyy back in the days Kubespray had a python wrapper; I think that was interesting to hide a bit of the Ansible details from users. But on the other hand it looks like Kubespray is used by lots of Ansible enthusiasts.
@cristiklein the thing is different orgs or teams will have different requirements for each component (CNI, CRI, ingress, logs, ...). To take ingress as an example, you could use nginx or traefik or ambassador or all of them at once.
I think the approach taken so far has been to give a starting point for each component and allow users to take over when their needs go beyond the defaults. The `nginx_ingress: enabled` option in Kubespray is rarely used in production deployments. Usually you want more control over the management of your ingress (even if you use nginx).
Finally "production" means different things for different orgs.
My opinion is that Kubespray should focus on the core Kubernetes components, things that are used by most people, and drop the settings that drive complexity but are used only by a small portion of the community. If those less popular settings are critical to some people, then they should probably get involved (either themselves or by sponsoring somebody to represent their interests).
@jseguillon yeah, there is definitely an opportunity for a "install all the addons in k8s" type project separate from Kubespray. Regardless of how you install Kubernetes (Kubespray, kops, AKS, EKS, GKE), you will want to install stuff afterwards (log management, monitoring, security hardening, operators, ...). Like @MarkusTeufelberger mentioned, Flux is a popular option, but I've also seen simple shell scripts work well (think `kubectl -f mydir`).
@MarkusTeufelberger I agree that Prometheus and Flux are nice addons. However, I feel that log and metrics viewing/analyzing (e.g., Loki, Kibana, Elasticsearch, OpenDistro, Thanos and Grafana) should be treated as applications, since they are often stateful, require careful / tedious maintenance, and need to be scaled carefully with the incoming log/metrics workload.
@Miouge1 @holmesb @champtar To steer discussions, I created an initial draft of a Helm-based addons deployment for kubespray. PTAL: https://github.com/kubernetes-sigs/kubespray/compare/master...cristiklein:helm-addons
The umbrella Chart could become a separate sub-project that is consumed by kubespray (via git submodule). Either way, the user is free to use only that part of kubespray. I think it achieves "batteries included but removable and feel free to choose between NiMH, Li-ion or AC adapter".
Let me know what you think.
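For reference, the umbrella chart idea is little more than a `Chart.yaml` with version-pinned, individually toggleable dependencies; a sketch with illustrative charts and versions:

```yaml
# Chart.yaml (sketch) of an umbrella addons chart
apiVersion: v2
name: kubespray-addons
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: 3.15.2
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
  - name: cert-manager
    version: v1.1.0
    repository: https://charts.jetstack.io
    condition: cert-manager.enabled
```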
Production ready to me (among other things) means that things have been tested in CI. Therefore I find it very hard to understand, e.g., why multiple k8s versions are supported by a specific version of kubespray. This in turn adds complexity to the ansible roles. As said before, if you need an older version of k8s, use the corresponding release branch!
supporting 1 version behind allows us to not waste too much time on backports
supporting 1 version behind allows us to not waste too much time on backports
Supported without any testing?
I usually redo my own testing on a fully up-to-date OS; CI often uses old base images
I'm wondering if everyone has the same view on what is a "production Kubernetes". The way I see it, people expect to get the following (ordered by "minimum" to "maximum" expectations):
1. "Post-kubeadm", basically just containerd, kubelet, etcd, apiserver and scheduler.
2. CNIs.
3. "Load-balancer" providers. Notice that Kubernetes is slowly moving towards out-of-tree cloud providers, so these can also be deployed from `kubectl` or via Helm Charts. I perceive MetalLB a "bare-metal cloud provider".
4. Storage providers, e.g., native, Ceph, Rook, local storage, etc.
5. Scripts to deploy secure infrastructure with right firewall rules and right cloud provider roles for the things above.
6. Log forwarders, e.g., fluentd.
7. Ingress controllers, e.g., nginx, Ambassador, Traefik
8. Cert-manager
9. External DNS
10. Security hardening: Falco. Note that the preferred way to deploy Falco is [directly on the host]
I only use kubespray as the tool for bootstrapping and maintaining my kubernetes cluster (so just the first two items) and then use helm charts to deploy other tools on top of the bootstrapped cluster, because I want to have control over the `values.yaml` of each chart I install.
This is why I think the core of kubespray should be the process of setting up a production grade kubernetes cluster, leaving application installation to helm and giving the user total control over the configuration of each helm chart installation. If kubespray wants to provide production grade support for the charts it installs, it technically couples itself with all those helm charts, which adds the maintenance overhead of each chart and the requirement to release new versions whenever updating a chart's values becomes necessary.
Ansible 2.10 is not released yet and we need to be careful on what ansible version is available on each supported distros.
it's been released for a few months and it'd be wonderful if we could have kubespray as an ansible collection
Playbooks in collections are in a `TBD` state and it seems that Ansible is still struggling with the design of collections. So having kubespray entirely as an Ansible collection is not possible yet, but it's still possible to move some generic roles to new or existing collections
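For the record, the packaging side is mostly a `galaxy.yml` at the repository root; a sketch (namespace, name and dependency pin are assumptions):

```yaml
# galaxy.yml (sketch)
namespace: kubernetes_sigs
name: kubespray
version: 3.0.0
readme: README.md
authors:
  - Kubespray maintainers
dependencies:
  community.kubernetes: ">=1.0.0"
```

Building and installing would then be `ansible-galaxy collection build` and `ansible-galaxy collection install`; the open question is the playbooks, as noted above.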
I have actually restarted work on a fork of Kubespray for my own production use. I've focused much more on day-2 operations than the initial bootstrapping of the cluster. Nearly all installers have a one-liner; the difficulty comes after.
I have a separation of steps that are not all bundled in one play:
On a more regular basis I would do the last two actions (kubeadm upgrade). I would upgrade etcd only if really needed. Upgrading the operating system packages is done without changing anything in the cluster.
About apps/ingress etc.:
I'm using kustomize in a separate repo and it comes after the cluster is ready and up; their lifecycle isn't tied to the cluster lifecycle management.
Since almost all projects support helm charts nowadays, we can move from our own implementation to helm-chart-based deployment using https://docs.ansible.com/ansible/latest/collections/community/kubernetes/helm_module.html
Then the whole values.yml can be exposed by something like `chart-name(component_name).values`
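A sketch of what that could look like per addon; the `ingress_nginx_values` and `ingress_nginx_chart_version` variables are the suggested new knobs (names invented), the rest is the module's documented interface:

```yaml
- name: Deploy ingress-nginx from its upstream chart
  community.kubernetes.helm:
    name: ingress-nginx
    chart_ref: ingress-nginx/ingress-nginx
    chart_version: "{{ ingress_nginx_chart_version | default(omit) }}"
    release_namespace: ingress-nginx
    create_namespace: true
    values: "{{ ingress_nginx_values | default({}) }}"
```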
Does it support running on one of the control plane nodes instead of the Ansible runner? If yes, then that could be a good starting point
yes it does. We can run helm installation from control plane nodes.
yes it does. We can run helm installation from control plane nodes.
I made a similar proposal, but it was rejected due to divergent views on where Kubespray should head. Some people think Kubespray should be kubeadm+CNI, others want to see Kubespray set up a more feature-full cluster. What I am saying is that we should first agree on the vision before getting excited about the technical implementation. :smile:
The different visions come from the maintenance cost. By using helm to deploy "classic" add-ons, the maintenance burden can be kept low and everyone can contribute a version bump.
I think kubespray should provide add-ons and a way to change the chart version so anyone can customise the deployment according to their needs.
I think it would be better for Kubespray to refocus on its core competency: deploying production Kubernetes.
I'm wondering if everyone has the same view on what is a "production Kubernetes". The way I see it, people expect to get the following (ordered by "minimum" to "maximum" expectations):
1. "Post-kubeadm", basically just containerd, kubelet, etcd, apiserver and scheduler. 2. CNIs. 3. "Load-balancer" providers. Notice that Kubernetes is slowly moving towards out-of-tree cloud providers, so these can also be deployed from `kubectl` or via Helm Charts. I perceive MetalLB a "bare-metal cloud provider".
I think Kubespray should handle 1-2 and probably 3 (MetalLB): anything that has to be installed on the nodes and/or requires some infrastructure awareness (e.g. integration with Terraform). Everything else that can be installed via charts/YAML is a different scope which can be done better using different tools.
For example with BKPR you can install a number of components (Elasticsearch, Fluentd, Kibana, Prometheus, AlertManager, Grafana, nginx ingress, cert manager, external DNS, etc.) in one well-tested and -packaged bundle, on top of a bare cluster to provide the essentials of production-readiness: https://github.com/bitnami/kube-prod-runtime#quickstart The main requirement for BKPR is to have a k8s cluster with LoadBalancer support (which Kubespray may provide with Metallb though I haven't tried that yet). I think that is an example which can illustrate where to draw the dividing line around Kubespray's role (build the infrastructure and cluster).
That being said Kubespray could still leverage external tools or provide values/documentation/recommendations for installing them.
In fact, for various reasons I prefer to deploy Helm charts using Ansible: https://docs.ansible.com/ansible/latest/collections/community/kubernetes/helm_module.html (E.g. following infrastructure-as-code, everything is defined in files in git instead of helm install commands that you manually copy and paste. Also the chart values can be templated using Ansible variables and inventory which is very flexible.) So that could be one way to incorporate external helm charts as addons in Kubespray.
See this proposal on integrating Helm chart installations to Kubespray: https://github.com/kubernetes-sigs/kubespray/issues/7741
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Hi, (aside from my private Raspberry Pi installations) we rely heavily on Kubespray for our internal private cloud installation, so I'd be happy to join in the Kubespray 3.0 discussions and also happy to support where I can.
The discussions in this issue seem to have fallen asleep a bit, or at least moved to another channel. So I would appreciate an update sometime on the latest viewpoint on the Kubespray vision.
Apart from that:
In my case we chose Kubespray mainly for the installation of the Kubernetes components themselves. What we also like to use is the rollout of the "core setup" like etcd, CNI and cloud controller manager. But ingress, logging and monitoring we do all on top with helm charts, fluxcd and kustomization.
I believe the biggest added value will come when Kubespray focuses more on Kubernetes maintenance and installation, where I also see CNI, etcd and the cloud controller manager. I personally like @ant31's input as well, as it feels like day-2 operations don't get as much attention as they should.
I personally also like to use the installation method mentioned by @Miouge1 and mount my inventory in a docker container. This way I have all my dependencies, I am more independent from my execution host, and I feel I get better traceability. I must confess though that we currently use a dedicated host for orchestrating our Kubespray clusters, where the dependencies are managed.
Nevertheless, we should offer a generic solution for the integration of Helm charts, since we should not forget that there is more to a "production cluster" than just the Kubernetes components themselves. In my view, this would provide a good basis for supporting kubespray users.
I would recommend increasing the speed and decreasing the number of steps. Sometimes it feels like the tasks are repeated; for instance, I can see the etcd certificate generation steps at least 3-4 times.
It would be nice to have an option to partially upgrade the cluster, i.e., more separable roles for upgrade pre-tasks, upgrading masters, upgrading workers, and post-tasks. If you have a bigger cluster it takes time to upgrade all nodes, and maybe the change window is shorter, so you want to upgrade only 20 workers and upgrade the other 20 in the next window. Or maybe the playbook run stops because of a network or other issue and you want to re-run from that worker. I know there is ansible limit, but you need to add at least the first master to the limit list to be sure facts collection works, which takes a lot of extra time. It would also be easier to limit which dedicated nodes to upgrade: at the moment, if you want to upgrade 4 nodes simultaneously, there is no option to make sure it doesn't try to upgrade all 4 from the same dedicated (tainted) node group (which has only 5 nodes) at the same time.
As I understand it, `roles` are about ensuring state, while `playbooks` dictate process. So if you want better processes to upgrade a cluster in stages etc., this should be done in a more complex playbook, not within roles imho.
In my experience roles (and tags) depend highly on each other; for example, you can't do only a worker upgrade without also adding a master to the limit list. Too many facts, generated vars, created certs etc. could be required by later roles. I'm not saying every role needs to be independent, and of course there will be dependencies (a few roles already have meta dependencies). I just can't see through these dependencies well enough to create a playbook that contains everything required without containing the whole upgrade playbook, so that separable steps would make running an upgrade-worker playbook faster than running the full upgrade again.
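To make the idea of separable upgrade steps concrete, here is a sketch of how the upgrade playbook could be split into independently targetable plays; group and role names follow today's layout, the split itself is hypothetical:

```yaml
- name: Gather facts for the whole cluster once
  hosts: k8s-cluster:etcd
  gather_facts: true

- name: Upgrade control plane nodes one by one
  hosts: kube-master
  serial: 1
  roles:
    - { role: upgrade/pre-upgrade }
    - { role: kubernetes/master }
    - { role: upgrade/post-upgrade }

- name: Upgrade workers in batches, resumable via --limit
  hosts: kube-node:!kube-master
  serial: "20%"
  roles:
    - { role: upgrade/pre-upgrade }
    - { role: kubernetes/node }
    - { role: upgrade/post-upgrade }
```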
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
/lifecycle stale
Is this still an active discussion or really stale?
Is this still an active discussion or really stale?
Kind of stale at the moment, but this should be back on track somehow
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
/lifecycle rotten
What would you like to be added:
Kubeadm control plane mode
`kubeadm join` is the recommended way for non-first control plane nodes and worker nodes. We should set `kubeadm_control_plane` to true by default. Not sure if it makes sense to keep the legacy "kubeadm init everywhere" use case around. Are there any edge cases with the control plane mode?

Use etcdadm to manage external etcd cluster
There is a valid use case to have an "external" etcd cluster not managed by kubeadm, especially when etcd is not deployed on the control plane nodes. Currently, etcd setup is fairly manual, fragile (like during upgrades), and hard to debug. https://github.com/kubernetes-sigs/etcdadm is supposed to make etcd management easier. In the long run, kubeadm will eventually use etcdadm under the hood. It would be a good idea to implement it for the "external" etcd use case as well. Moreover, adding support for a BYO etcd cluster (#6398) should be fairly easy if we go down that path.
Review CI matrix
Switch cgroup driver default to systemd
kubespray officially supports only systemd-based linux distros. We should not have two cgroup managers (see https://github.com/kubernetes/kubeadm/issues/1394#issuecomment-462878219 for technical details). This is a backward-incompatible change, so maybe default it for new installs but keep the current setting for upgrades?
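If the variable keeps its current shape, the flip itself is a one-line default; a sketch, assuming the existing `kubelet_cgroup_driver` variable (upgrades would need the extra handling mentioned above):

```yaml
# group_vars sketch, for new installs only; variable name assumed
kubelet_cgroup_driver: systemd
```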
Remove docker requirements
There are still some hardcoded docker commands in the code (network plugins, etcd, node role, ...). One of kubespray's goals is to "Deploy a Production Ready Kubernetes Cluster", so it should NOT have a container engine capable of building new container images by default, for security purposes. Containerd would be a more secure default setting. In order to make that transition, we need to use `crictl` where `docker` is used today.

Why is this needed: We need to address technical debt. The code-base is wide, and some areas are old and not maintained. I'd like to take the opportunity of the next major release to trim the code-base as much as possible and make the CI more agile to get quicker feedback.
/cc @floryut, @Miouge1, @mattymo, @LuckySB