jahirraihan22 / DEVOPS_ROADMAP


K8s 101 #20

Closed jahirraihan22 closed 1 year ago

jahirraihan22 commented 1 year ago

K8s 101 (CKA prep)

jahirraihan22 commented 1 year ago

All basic component terms

K8s Terms

  1. Kubernetes: An open-source container orchestration system for automating deployment, scaling, and management of containerized applications.
  2. Node: A physical or virtual machine that runs Kubernetes and hosts one or more pods.
  3. Pod: The smallest deployable unit in Kubernetes. It represents a single instance of a running process in a cluster.
  4. Service: An abstraction layer that defines a logical set of pods and a policy to access them.
  5. Deployment: A Kubernetes resource that manages the rollout and updates of a set of replica pods.
  6. ReplicaSet: A Kubernetes resource that ensures a specified number of pod replicas are running at any given time.
  7. Namespace: A way to create virtual clusters within a physical Kubernetes cluster. It provides a scope for naming resources and isolates them from each other.
  8. Container: A lightweight, standalone executable package that contains everything needed to run an application, including code, libraries, and dependencies.
  9. Image: A file that contains a snapshot of a container, including its code, libraries, and dependencies.
  10. Volume: A directory that contains data accessible to containers in a pod. It can be used to persist data beyond the lifetime of a container.
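
To tie a few of these terms together, here is a minimal sketch of a Pod manifest that runs a single nginx container in the default namespace (the pod name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx
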
jahirraihan22 commented 1 year ago

ETCD

ETCD in K8s
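
A common CKA exercise around ETCD is taking a backup with etcdctl. A rough sketch, assuming the default kubeadm certificate paths and a local etcd member (the snapshot path is a placeholder):

ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot-pre-boot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key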

jahirraihan22 commented 1 year ago

kube-apiserver
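
On a kubeadm cluster the kube-apiserver runs as a static pod, so its flags can be inspected from the manifest (assuming the default manifest location):

cat /etc/kubernetes/manifests/kube-apiserver.yaml
kubectl get pods -n kube-system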

jahirraihan22 commented 1 year ago

kube-controller-manager

cat /etc/kubernetes/manifests/kube-controller-manager.yaml
jahirraihan22 commented 1 year ago

Scheduler

The scheduler decides which pod goes to which node based on different criteria, such as resource requests and limits, taints and tolerations, node selectors and affinity rules, and available node capacity.
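
For example, a simple way to steer a pod to a particular set of nodes is a nodeSelector (the label key/value is a placeholder and must match a label on the node):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    size: large
  containers:
  - name: web
    image: nginx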

jahirraihan22 commented 1 year ago

Kubelet


jahirraihan22 commented 1 year ago

Kube-proxy

Kube-proxy is a network proxy and load balancer that runs on each node in a Kubernetes cluster. Its main function is to manage network communications between Kubernetes services and their associated pods.

When a Kubernetes service is created, kube-proxy creates rules in the host's iptables or IPVS firewall to forward traffic to the appropriate pods. It also monitors the health of the pods and updates the rules as needed to ensure that traffic is always directed to healthy pods.

Kube-proxy supports three different proxy modes:

  1. Userspace mode: In this mode, kube-proxy opens a port on the host machine and listens for incoming connections. When a connection is received, it proxies the traffic to the appropriate pod.

  2. iptables mode: In this mode, kube-proxy creates iptables rules on the host machine to forward traffic to the appropriate pod. This is the default mode used in Kubernetes.

  3. IPVS mode: In this mode, kube-proxy uses the IPVS (IP Virtual Server) kernel module to load balance traffic between the pods. This mode is recommended for clusters with a large number of services and endpoints.

Overall, kube-proxy plays a critical role in enabling Kubernetes services to communicate with their associated pods and ensuring that traffic is properly load balanced across the cluster.
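
On a default kubeadm cluster, the active proxy mode can usually be checked from the kube-proxy ConfigMap and DaemonSet in kube-system (resource names assumed from a standard kubeadm setup):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
kubectl -n kube-system get daemonset kube-proxy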

jahirraihan22 commented 1 year ago

POD

In Kubernetes, a pod is the smallest deployable unit that can be created and managed. A pod represents a single instance of a running process in a cluster, and it can contain one or more containers that share the same network namespace and storage volumes.

The main purpose of a pod is to provide a self-contained environment for running a single application or microservice. By grouping multiple containers within a pod, Kubernetes ensures that the containers are scheduled together on the same node, share the same network namespace, and can communicate with each other using local IP addresses and ports.

Pods are created and managed by Kubernetes controllers, such as Deployments, ReplicaSets, and StatefulSets. When a controller creates a pod, Kubernetes assigns it a unique IP address within the cluster; other pods typically reach it through a Service, which provides a stable virtual IP and DNS name, rather than addressing the pod directly.

Because pods are ephemeral by design, Kubernetes provides mechanisms for ensuring that pods are always running and available, even in the face of failures. For example, if a pod fails, Kubernetes can automatically restart it or create a new pod to replace it. Additionally, Kubernetes can perform rolling updates and rolling restarts to update the software running in a pod without disrupting its availability.

Overall, pods are a fundamental building block of Kubernetes applications, providing a lightweight, self-contained environment for running containerized workloads.

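For a quick start, a pod can also be created imperatively and inspected with kubectl:

kubectl run nginx --image=nginx
kubectl get pods -o wide
kubectl describe pod nginx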

jahirraihan22 commented 1 year ago

Scheduling


---
apiVersion: v1
kind: Pod
metadata:
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
  tolerations:
  - key: spray
    value: mortein
    effect: NoSchedule
    operator: Equal

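The toleration above only takes effect when a matching taint exists on a node; for example (node name is a placeholder):

kubectl taint nodes node01 spray=mortein:NoSchedule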

Affinity types

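The two main node affinity types are requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution. A rough sketch of the required variant (label key and values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: blue
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue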

jahirraihan22 commented 1 year ago

Daemon Sets

In Kubernetes, a DaemonSet is a type of controller that ensures that a copy of a particular pod is running on every node in the cluster. This makes DaemonSets ideal for running system-level services or daemons that need to run on every node, such as log collectors, monitoring agents, and network proxies.

When a new node is added to the cluster, or an existing node is removed, the DaemonSet controller automatically creates or deletes a pod on that node to ensure that the desired state is maintained. This allows the system-level service to run seamlessly across the cluster without the need for manual intervention.

DaemonSets also support rolling updates, which allows you to update the service running on each node in a controlled and automated manner, helping ensure that it remains available during the update process. Overall, DaemonSets provide a powerful mechanism for running system-level services in a Kubernetes cluster, ensuring that they are running on every node and are always up to date.
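
A minimal DaemonSet sketch, assuming a hypothetical monitoring-agent image, that runs one pod per node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent:latest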

jahirraihan22 commented 1 year ago

Application Life cycle


jahirraihan22 commented 1 year ago

ConfigMaps

kubectl create configmap <config-name> --from-literal=<key>=<value>
kubectl create configmap <config-name> --from-file=<path-to-file>
kubectl create configmap webapp-config-map --from-literal=APP_COLOR=darkblue --from-literal=APP_OTHER=disregard
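
Once created, the ConfigMap can be injected into a pod as environment variables, for example with envFrom (pod name and image are placeholders, referencing the webapp-config-map created above):

apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - configMapRef:
        name: webapp-config-map
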
jahirraihan22 commented 1 year ago

Secrets


kubectl create secret generic db-secret --from-literal=DB_Host=sql01 --from-literal=DB_User=root --from-literal=DB_Password=password123 
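
The secret can then be injected into a pod as environment variables, for example with envFrom (pod name and image are placeholders, referencing the db-secret created above):

apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
spec:
  containers:
  - name: webapp
    image: nginx
    envFrom:
    - secretRef:
        name: db-secret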

Securing

jahirraihan22 commented 1 year ago

A note on Secrets

Remember that Secrets encode data in base64 format. Anyone with the base64-encoded secret can easily decode it. As such, Secrets on their own cannot be considered very safe.

The concept of safety of the Secrets is a bit confusing in Kubernetes. The kubernetes documentation page and a lot of blogs out there refer to secrets as a “safer option” to store sensitive data. They are safer than storing in plain text as they reduce the risk of accidentally exposing passwords and other sensitive data. In my opinion it’s not the secret itself that is safe, it is the practices around it.

Secrets are not encrypted by default, only encoded, so they are not safer in that sense. However, some best practices around using Secrets make them safer, such as:

  - Not checking in secret object definition files to source code repositories.
  - Enabling Encryption at Rest for Secrets so they are stored encrypted in ETCD.

The way Kubernetes handles secrets also helps. For example:

  - A secret is only sent to a node if a pod on that node requires it.

Having said that, there are better ways of handling sensitive data like passwords in Kubernetes, such as using tools like Helm Secrets or HashiCorp Vault.
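
As an illustration of the Encryption at Rest point above, a minimal EncryptionConfiguration sketch that the kube-apiserver can be pointed at via the --encryption-provider-config flag (the key name and value are placeholders):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}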

jahirraihan22 commented 1 year ago

Multi Container Pods

Multi-container Pods Design Patterns

There are 3 common patterns when it comes to designing multi-container PODs. The first, which we just saw with the logging service example, is known as the sidecar pattern. The others are the adapter and the ambassador patterns.

But these fall under the CKAD curriculum and are not required for the CKA exam, so they are discussed in more detail in the CKAD course.

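A rough sketch of the sidecar pattern mentioned above: a web server plus a log-shipping agent sharing an emptyDir volume (the log-agent image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-logger
spec:
  containers:
  - name: webapp
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-agent
    image: log-agent:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}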

jahirraihan22 commented 1 year ago

Init Containers

In a multi-container pod, each container is expected to run a process that stays alive for the POD's entire life cycle. For example, in the multi-container pod that we talked about earlier that has a web application and a logging agent, both containers are expected to stay alive at all times. The process running in the log agent container is expected to stay alive as long as the web application is running. If any of them fails, the POD restarts.

But at times you may want to run a process that runs to completion in a container. For example, a process that pulls code or a binary from a repository that will be used by the main web application; that is a task that runs only once, when the pod is first created. Or a process that waits for an external service or database to be up before the actual application starts. That's where initContainers come in.

An initContainer is configured in a pod like all other containers, except that it is specified inside an initContainers section, like this:


apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'git clone <some-repository-that-will-be-used-by-application> ;']

When a POD is first created, the initContainer is run, and the process in the initContainer must run to completion before the real container hosting the application starts.

You can configure multiple such initContainers as well, just as we did for multi-container pods. In that case, each init container is run one at a time, in sequential order.

If any of the initContainers fail to complete, Kubernetes restarts the Pod repeatedly until the Init Container succeeds.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']

Read more about initContainers here. And try out the upcoming practice test.

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

jahirraihan22 commented 1 year ago

Kubernetes-CKA-0400-Application-Lifecycle-Management-1.pdf

jahirraihan22 commented 1 year ago

Self Healing Applications

Kubernetes supports self-healing applications through ReplicaSets and Replication Controllers. The replication controller helps ensure that a POD is re-created automatically when the application within it crashes, and that enough replicas of the application are running at all times.
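
To illustrate, a minimal ReplicaSet sketch that keeps three replicas of a pod running (names and image are placeholders); if a pod crashes or is deleted, the controller creates a replacement:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx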

Kubernetes provides additional support to check the health of applications running within PODs and take necessary actions through Liveness and Readiness Probes. However these are not required for the CKA exam and as such they are not covered here. These are topics for the Certified Kubernetes Application Developers (CKAD) exam and are covered in the CKAD course.

jahirraihan22 commented 1 year ago

Cluster Management

jahirraihan22 commented 1 year ago

TASK

Upgrade the controlplane components to exact version v1.26.0

Upgrade the kubeadm tool (if not already), then the controlplane components, and finally the kubelet. Practice referring to the Kubernetes documentation page.

Note: While upgrading kubelet, if you hit dependency issues while running the apt-get upgrade kubelet command, use the apt install kubelet=1.26.0-00 command instead.



Solution

On the controlplane node, run the following commands:

This will update the package lists from the software repository.

apt update

This will install kubeadm version 1.26.0.

apt-get install kubeadm=1.26.0-00
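
Optionally (not part of the task steps above), you can preview the versions available for the upgrade before applying it:

kubeadm upgrade plan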

This will upgrade the Kubernetes controlplane components.

kubeadm upgrade apply v1.26.0

Note that the above steps can take a few minutes to complete.

This will update kubelet to version 1.26.0.

apt-get install kubelet=1.26.0-00 

You may need to reload the daemon and restart the kubelet service after it has been upgraded.

systemctl daemon-reload
systemctl restart kubelet
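
To verify the upgrade, check the reported node versions afterwards:

kubectl get nodes
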
jahirraihan22 commented 1 year ago

https://kodekloud.com/topic/practice-test-cluster-upgrade-process/