solomem / DevOps


Dockerfile Reference and EKS #13

Open solomem opened 1 year ago

solomem commented 1 year ago

Official docs: https://docs.docker.com/engine/reference/builder/

parser directives

FROM microsoft/nanoserver
COPY testfile.txt c:\
RUN dir c:\
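
This is the Windows example from the parser-directives section of the reference: with the default escape character `\`, the trailing `c:\` is treated as a line continuation and the build fails in a non-obvious way. The `escape` parser directive (which must appear before the first instruction) switches the escape character to a backtick so Windows paths work:

# escape=`
FROM microsoft/nanoserver
COPY testfile.txt c:\
RUN dir c:\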


# Instructions that support environment variable replacement:

ADD, COPY, ENV, EXPOSE, FROM, LABEL, STOPSIGNAL, USER, VOLUME, WORKDIR, and ONBUILD (when combined with one of the supported instructions above)



# `.dockerignore` file
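
The `.dockerignore` file excludes files and directories from the build context before it is sent to the daemon. A minimal illustrative example (entries are placeholders for a typical project):

# .dockerignore
.git
__pycache__/
*.pyc
.env
node_modules/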
solomem commented 1 year ago

AWS lambda dockerfile example:

# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.9"
ARG DISTRO_VERSION="3.12"

# Stage 1 - bundle base image + runtime
# Grab a fresh copy of the image and install GCC
FROM python:${RUNTIME_VERSION}-alpine${DISTRO_VERSION} AS python-alpine
# Install GCC (Alpine uses musl but we compile and link dependencies with GCC)
RUN apk add --no-cache \
    libstdc++

# Stage 2 - build function and dependencies
FROM python-alpine AS build-image
# Install aws-lambda-cpp build dependencies
RUN apk add --no-cache \
    build-base \
    libtool \
    autoconf \
    automake \
    libexecinfo-dev \
    make \
    cmake \
    libcurl
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function
COPY app/* ${FUNCTION_DIR}
# Optional – Install the function's dependencies
# RUN python${RUNTIME_VERSION} -m pip install -r requirements.txt --target ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}

# Stage 3 - final runtime image
# Grab a fresh copy of the Python image
FROM python-alpine
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the built dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
COPY entry.sh /
RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/entry.sh" ]
CMD [ "app.handler" ]
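
To test this image locally (assuming entry.sh launches aws-lambda-rie when running outside Lambda, as in the AWS example this is based on; the image tag is illustrative):

docker build -t my-lambda .
docker run --rm -p 9000:8080 my-lambda
# invoke the function through the Runtime Interface Emulator:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'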
solomem commented 1 year ago
# FROM public.ecr.aws/lambda/python:3.9
FROM dockerhub.artifactory.riotinto.com/python:3.9
LABEL author="ke.shi@riotinto.com"

ARG PIP_TIMEOUT=90
ARG pip_index_url=https://artifactory.riotinto.org/artifactory/api/pypi/rio-pypi/simple
ARG pip_extra_index_url=https://pypi.org/simple

ENV PIP_INDEX_URL=$pip_index_url
ENV PIP_EXTRA_INDEX_URL=$pip_extra_index_url

# Setup linux dependencies for pyodbc
RUN yum update -y \
    && yum install -y gcc curl libkrb5-dev \
    && yum clean all

COPY RioCertChain.crt /etc/ssl/certs/.

# Create the required directories and copy the source code
# (mkdir -p with explicit paths: brace expansion is not available in /bin/sh)
RUN mkdir -p ${LAMBDA_TASK_ROOT}/packages ${LAMBDA_TASK_ROOT}/src ${LAMBDA_TASK_ROOT}/CCG_exe_linux
ENV PYTHONPATH ${LAMBDA_TASK_ROOT}/packages
COPY * ${LAMBDA_TASK_ROOT}/
COPY src/* ${LAMBDA_TASK_ROOT}/src/
COPY CCG_exe_linux/* ${LAMBDA_TASK_ROOT}/CCG_exe_linux/

# Install required packages and delete files/folders that are not required, to save space.
# Note: a single pip invocation cannot chain two `install` commands, so the pip/setuptools
# upgrade and the requirements install run as separate commands.
RUN pip install -v \
    --trusted-host pypi.org \
    --trusted-host files.pythonhosted.org \
    --trusted-host pypi.python.org \
    # --trusted-host ode-artifactory.com \
    --trusted-host artifactory.riotinto.org \
    --no-cache-dir --upgrade pip setuptools --target "${PYTHONPATH}" --timeout $PIP_TIMEOUT \
    && pip install -v \
    --trusted-host pypi.org \
    --trusted-host files.pythonhosted.org \
    --trusted-host pypi.python.org \
    --trusted-host artifactory.riotinto.org \
    --no-cache-dir -r requirements.txt --target "${PYTHONPATH}" --timeout $PIP_TIMEOUT \
    && find $PYTHONPATH \( -name "*.dist-info" -o -name "__pycache__" \) -exec rm -rv {} + \
    && rm -rf /var/cache/yum

CMD ["amt_main_kriging.handler"]
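
Since PIP_TIMEOUT and the index URLs are declared as ARGs, they can be overridden at build time; a sketch (the image tag is illustrative):

docker build -t amt-kriging \
    --build-arg PIP_TIMEOUT=120 \
    --build-arg pip_index_url=https://pypi.org/simple \
    .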
solomem commented 1 year ago

Docker Cmds

Build

# Note: if you are running on Windows you may need to fix line-endings using:
$ git clone --config core.autocrlf=input https://github.com/kubernetes-up-and-running/kuard
$ cd kuard
$ docker build -t kuard .
$ docker run --rm -p 8080:8080 kuard

Tag

docker tag kuard gcr.io/kuar-demo/kuard-amd64:blue

Push

When we pushed the image to GCR, it was marked as public, so it will be available everywhere without authentication.

docker push gcr.io/kuar-demo/kuard-amd64:blue

The Container Runtime Interface

Running Containers with Docker

docker run -d --name kuard \
  --publish 8080:8080 \
  gcr.io/kuar-demo/kuard-amd64:blue

Exploring the kuard Application

kuard exposes a simple web interface, which you can load by pointing your browser at http://localhost:8080 or via the command line:

curl http://localhost:8080

Limiting Resource Usage

One of the key benefits to running applications within a container is the ability to restrict resource utilization. This allows multiple applications to coexist on the same hardware and ensures fair usage.

To limit kuard to 200 MB of memory and 1 GB of swap space, use the --memory and --memory-swap flags with the docker run command. Stop and remove the current kuard container:

$ docker stop kuard
$ docker rm kuard

Then start another kuard container using the appropriate flags to limit memory usage:

$ docker run -d --name kuard \
  --publish 8080:8080 \
  --memory 200m \
  --memory-swap 1G \
  gcr.io/kuar-demo/kuard-amd64:blue

Limiting CPU resources

Another critical resource on a machine is the CPU. Restrict CPU utilization using the --cpu-shares flag with the docker run command (a relative weight; the default is 1024):

$ docker run -d --name kuard \
  --publish 8080:8080 \
  --memory 200m \
  --memory-swap 1G \
  --cpu-shares 1024 \
  gcr.io/kuar-demo/kuard-amd64:blue

Removing images

docker rmi <image-id>

or 
docker rmi --force <image-id> 

Garbage collection:

docker system prune

or, to remove only dangling images:

docker image prune

solomem commented 1 year ago

Create an EKS cluster

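The screenshot presumably showed eksctl creating the cluster; a minimal sketch (flags are illustrative — with no flags, eksctl generates a random cluster name like the one in the next section):

eksctl create cluster
# or with explicit settings:
eksctl create cluster --name my-cluster --region us-west-2 --nodes 2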

Delete an EKS cluster

Run aws eks list-clusters, which returns:
{
    "clusters": [
        "ridiculous-sheepdog-1688774172"
    ]
}

aws eks list-clusters | jq -r '.clusters[0]'

eksctl delete cluster --name="ridiculous-sheepdog-1688774172"

or:

eksctl delete cluster --name=$(aws eks list-clusters | jq -r '.clusters[0]')

https://eksctl.io

The kubeconfig file is saved to ~/.kube/config.


Checking Cluster Status

(base) penpen@192-168-1-124 eks % kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
etcd-1               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok 


Listing Kubernetes Nodes

(base) penpen@192-168-1-124 eks % kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-17-14.us-west-2.compute.internal   Ready    <none>   11m   v1.25.9-eks-0a21954
ip-192-168-47-80.us-west-2.compute.internal   Ready    <none>   11m   v1.25.9-eks-0a21954


Describe nodes

(base) penpen@192-168-1-124 eks % kubectl describe nodes ip-192-168-17-14.us-west-2.compute.internal 
Name:               ip-192-168-17-14.us-west-2.compute.internal
Roles:              <none>
Labels:             alpha.eksctl.io/cluster-name=attractive-unicorn-1688726313
                    alpha.eksctl.io/nodegroup-name=ng-2f0beeb8
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=m5.large
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/capacityType=ON_DEMAND
                    eks.amazonaws.com/nodegroup=ng-2f0beeb8
                    eks.amazonaws.com/nodegroup-image=ami-0d9a59c80a3f0d5a3
                    eks.amazonaws.com/sourceLaunchTemplateId=lt-03b3910cec7a9167d
                    eks.amazonaws.com/sourceLaunchTemplateVersion=1
                    failure-domain.beta.kubernetes.io/region=us-west-2
                    failure-domain.beta.kubernetes.io/zone=us-west-2c
                    k8s.io/cloud-provider-aws=c9f51c7f0f59038e3cb458860e7ff3c7
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-192-168-17-14.us-west-2.compute.internal
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=m5.large
                    topology.kubernetes.io/region=us-west-2
                    topology.kubernetes.io/zone=us-west-2c
Annotations:        alpha.kubernetes.io/provided-node-ip: 192.168.17.14
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 07 Jul 2023 20:51:37 +1000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ip-192-168-17-14.us-west-2.compute.internal
  AcquireTime:     <unset>
  RenewTime:       Fri, 07 Jul 2023 21:16:57 +1000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 07 Jul 2023 21:12:31 +1000   Fri, 07 Jul 2023 20:51:35 +1000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 07 Jul 2023 21:12:31 +1000   Fri, 07 Jul 2023 20:51:35 +1000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 07 Jul 2023 21:12:31 +1000   Fri, 07 Jul 2023 20:51:35 +1000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 07 Jul 2023 21:12:31 +1000   Fri, 07 Jul 2023 20:51:49 +1000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   192.168.17.14
  ExternalIP:   34.211.17.94
  Hostname:     ip-192-168-17-14.us-west-2.compute.internal
  InternalDNS:  ip-192-168-17-14.us-west-2.compute.internal
  ExternalDNS:  ec2-34-211-17-94.us-west-2.compute.amazonaws.com
Capacity:
  attachable-volumes-aws-ebs:  25
  cpu:                         2
  ephemeral-storage:           83873772Ki
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      7910360Ki
  pods:                        29
Allocatable:
  attachable-volumes-aws-ebs:  25
  cpu:                         1930m
  ephemeral-storage:           76224326324
  hugepages-1Gi:               0
  hugepages-2Mi:               0
  memory:                      7220184Ki
  pods:                        29

System Info:
  Machine ID:                 ec2937f968c3b62f1528b0c9ed5f96bc
  System UUID:                ec2937f9-68c3-b62f-1528-b0c9ed5f96bc
  Boot ID:                    6e5e0b88-817e-48a9-a9b0-24dcbe631502
  Kernel Version:             5.10.184-175.731.amzn2.x86_64
  OS Image:                   Amazon Linux 2
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.19
  Kubelet Version:            v1.25.9-eks-0a21954
  Kube-Proxy Version:         v1.25.9-eks-0a21954
ProviderID:                   aws:///us-west-2c/i-04a0f9804a5e7efa8
Non-terminated Pods:          (4 in total)
  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                 aws-node-gnh6j              25m (1%)      0 (0%)      0 (0%)           0 (0%)         25m
  kube-system                 coredns-67f8f59c6c-dkv6f    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32m
  kube-system                 coredns-67f8f59c6c-dp9zg    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32m
  kube-system                 kube-proxy-w8f9b            100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                    Requests    Limits
  --------                    --------    ------
  cpu                         325m (16%)  0 (0%)
  memory                      140Mi (1%)  340Mi (4%)
  ephemeral-storage           0 (0%)      0 (0%)
  hugepages-1Gi               0 (0%)      0 (0%)
  hugepages-2Mi               0 (0%)      0 (0%)
  attachable-volumes-aws-ebs  0           0
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 25m                kube-proxy       
  Normal   Starting                 25m                kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      25m                kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ip-192-168-17-14.us-west-2.compute.internal status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ip-192-168-17-14.us-west-2.compute.internal status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ip-192-168-17-14.us-west-2.compute.internal status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
  Normal   RegisteredNode           25m                node-controller  Node ip-192-168-17-14.us-west-2.compute.internal event: Registered Node ip-192-168-17-14.us-west-2.compute.internal in Controller
  Normal   NodeReady                25m                kubelet          Node ip-192-168-17-14.us-west-2.compute.internal status is now: NodeReady

Cluster Components:

Depending on how your cluster is set up, the DaemonSet for the kube-proxy may be named something else, or it’s possible that it won’t use a DaemonSet at all. Regardless, the kube-proxy container should be running on all nodes in a cluster.

(base) penpen@192-168-1-124 eks % kubectl get daemonSets --namespace=kube-system kube-proxy
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-proxy   2         2         2       2            2           <none>          48m

Kubernetes DNS

Kubernetes also runs a DNS server, which provides naming and discovery for the services that are defined in the cluster. This DNS server also runs as a replicated service on the cluster. Depending on the size of your cluster, you may see one or more DNS servers running in your cluster. The DNS service is run as a Kubernetes deployment, which manages these replicas (this may also be named coredns or some other variant):

Deployment

(base) penpen@192-168-1-124 eks % kubectl get deployments --namespace=kube-system         
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           59m

Service

(base) penpen@192-168-1-124 eks % kubectl get services --namespace=kube-system        
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   61m
solomem commented 1 year ago

Kubelet

In Kubernetes, containers are usually launched by a daemon that runs on each node, called the kubelet.

solomem commented 1 year ago

Namespaces

Kubernetes uses namespaces to organize objects in the cluster. You can think of each namespace as a folder that holds a set of objects. By default, the kubectl command-line tool interacts with the default namespace. If you want to use a different namespace, you can pass kubectl the --namespace flag. For example, kubectl --namespace=mystuff references objects in the mystuff namespace. You can also use the shorthand -n flag if you’re feeling concise. If you want to interact with all namespaces—for example, to list all Pods in your cluster—you can pass the --all-namespaces flag.
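
For example:

kubectl --namespace=mystuff get pods
kubectl -n mystuff get pods          # shorthand
kubectl get pods --all-namespaces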

Context

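A context records which cluster, user, and default namespace kubectl should use; contexts are managed with kubectl config. For example (the context name is illustrative):

kubectl config set-context my-context --namespace=mystuff
kubectl config use-context my-context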

solomem commented 1 year ago

kubectl

config

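A few commands for inspecting the current kubectl configuration:

kubectl config view              # print the merged kubeconfig
kubectl config current-context   # show which context is active
kubectl config get-contexts      # list all available contexts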


Viewing Kubernetes API Objects

Everything contained in Kubernetes is represented by a RESTful resource. Throughout this book, we refer to these resources as Kubernetes objects. Each Kubernetes object exists at a unique HTTP path; for example, https://your-k8s.com/api/v1/namespaces/default/pods/my-pod leads to the representation of a Pod in the default namespace named my-pod. The kubectl command makes HTTP requests to these URLs to access the Kubernetes objects that reside at these paths.

kubectl get <resource-name> <obj-name>

(base) penpen@192-168-1-124 eks % kubectl get nodes ip-192-168-1-189.us-west-2.compute.internal
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-1-189.us-west-2.compute.internal   Ready    <none>   70m   v1.25.9-eks-0a21954

output format

-o wide, -o json, -o yaml

pipe

A common option for manipulating the output of kubectl is to remove the headers, which is often useful when combining kubectl with Unix pipes (e.g., kubectl ... | awk ...). If you specify the --no-headers flag, kubectl will skip the headers at the top of the human-readable table.

extracting specific fields (jsonpath)

kubectl get pods my-pod -o jsonpath --template={.status.podIP}

view multiple objects

kubectl get pods,services

describe

kubectl describe <resource-name> <obj-name>

(base) penpen@192-168-1-124 eks % kubectl describe services kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.0.1
IPs:               10.100.0.1
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         192.168.57.109:443,192.168.67.134:443
Session Affinity:  None
Events:            <none>

explain

If you would like to see a list of supported fields for each supported type of Kubernetes object, you can use the explain command:

(base) penpen@192-168-1-124 eks % kubectl explain pods
KIND:       Pod
VERSION:    v1

DESCRIPTION:
    Pod is a collection of containers that can run on a host. This resource is
    created by clients and scheduled onto hosts.

FIELDS:
  apiVersion    <string>
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata  <ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  <PodSpec>
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

  status    <PodStatus>
    Most recently observed status of the pod. This data may not be up to date.
    Populated by the system. Read-only. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

watch

Sometimes you want to continually observe the state of a particular Kubernetes resource to see changes to the resource when they occur. For example, you might be waiting for your application to restart. The --watch flag enables this. You can add this flag to any kubectl get command to continuously monitor the state of a particular resource.
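
For example, to watch Pods in the current namespace:

kubectl get pods --watch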

solomem commented 1 year ago

kubectl api-resources

(base) penpen@192-168-1-124 eks % kubectl api-resources        
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
events                            ev           v1                                     true         Event
limitranges                       limits       v1                                     true         LimitRange
namespaces                        ns           v1                                     false        Namespace
nodes                             no           v1                                     false        Node
persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                     false        PersistentVolume
pods                              po           v1                                     true         Pod
podtemplates                                   v1                                     true         PodTemplate
replicationcontrollers            rc           v1                                     true         ReplicationController
resourcequotas                    quota        v1                                     true         ResourceQuota
secrets                                        v1                                     true         Secret
serviceaccounts                   sa           v1                                     true         ServiceAccount
services                          svc          v1                                     true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1              false        APIService
controllerrevisions                            apps/v1                                true         ControllerRevision
daemonsets                        ds           apps/v1                                true         DaemonSet
deployments                       deploy       apps/v1                                true         Deployment
replicasets                       rs           apps/v1                                true         ReplicaSet
statefulsets                      sts          apps/v1                                true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1                               true         CronJob
jobs                                           batch/v1                               true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1                 true         Lease
eniconfigs                                     crd.k8s.amazonaws.com/v1alpha1         false        ENIConfig
endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
events                            ev           events.k8s.io/v1                       true         Event
flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta2   false        FlowSchema
prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta2   false        PriorityLevelConfiguration
ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
ingresses                         ing          networking.k8s.io/v1                   true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1           true         Role
priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1                      true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
securitygrouppolicies             sgp          vpcresources.k8s.aws/v1beta1           true         SecurityGroupPolicy
solomem commented 1 year ago

Creating, Updating, and Destroying Kubernetes Objects

Objects in the Kubernetes API are represented as JSON or YAML files. These files are either returned by the server in response to a query or posted to the server as part of an API request. You can use these YAML or JSON files to create, update, or delete objects on the Kubernetes server.

Let’s assume that you have a simple object stored in obj.yaml. You can use kubectl to create this object in Kubernetes by running:

$ kubectl apply -f obj.yaml

Notice that you don't need to specify the resource type of the object; it's obtained from the object file itself.

Similarly, after you make changes to the object, you can use the apply command again to update the object:

$ kubectl apply -f obj.yaml

The apply tool will only modify objects that are different from the current objects in the cluster. If the objects you are creating already exist in the cluster, it will simply exit successfully without making any changes. This makes it useful for loops where you want to ensure the state of the cluster matches the state of the filesystem. You can repeatedly use apply to reconcile state.

If you want to see what the apply command will do without actually making the changes, you can use the --dry-run flag to print the objects to the terminal without actually sending them to the server.

NOTE If you feel like making interactive edits instead of editing a local file, you can instead use the edit command, which will download the latest object state and then launch an editor that contains the definition:

$ kubectl edit <resource-name> <obj-name>

After you save the file, it will be automatically uploaded back to the Kubernetes cluster.

The apply command also records the history of previous configurations in an annotation within the object. You can manipulate these records with the edit-last-applied, set-last-applied, and view-last-applied commands. For example:

$ kubectl apply view-last-applied -f myobj.yaml

This will show you the last state that was applied to the object.

delete object

When you want to delete an object, you can simply run:

$ kubectl delete -f obj.yaml

It is important to note that kubectl will not prompt you to confirm the deletion. Once you issue the command, the object will be deleted.

Likewise, you can delete an object using the resource type and name:

$ kubectl delete <resource-name> <obj-name>

Labeling and Annotating Objects

Labels and annotations are tags for your objects. We’ll discuss the differences in Chapter 6, but for now, you can update the labels and annotations on any Kubernetes object using the label and annotate commands. For example, to add the color=red label to a Pod named bar, you can run:

$ kubectl label pods bar color=red

The syntax for annotations is identical.

By default, label and annotate will not let you overwrite an existing label. To do this, you need to add the --overwrite flag.

If you want to remove a label, you can use the <label-name>- syntax:

$ kubectl label pods bar color-

This will remove the color label from the Pod named bar.


Debugging Commands

kubectl also makes a number of commands available for debugging your containers. You can use the following to see the logs for a running container:

$ kubectl logs <pod-name>

If you have multiple containers in your Pod, you can choose the container to view using the -c flag.

By default, kubectl logs lists the current logs and exits. If you instead want to continuously stream the logs back to the terminal without exiting, you can add the -f (follow) command-line flag.

You can also use the exec command to execute a command in a running container:

$ kubectl exec -it <pod-name> -- bash

This will provide you with an interactive shell inside the running container so that you can perform more debugging.

If you don’t have bash or some other terminal available within your container, you can always attach to the running process:

$ kubectl attach -it <pod-name>

The attach command is similar to kubectl logs but will allow you to send input to the running process, assuming that process is set up to read from standard input.

copy files

You can also copy files to and from a container using the cp command:

$ kubectl cp <pod-name>:</path/to/remote/file> </path/to/local/file>

This will copy a file from a running container to your local machine. You can also specify directories, or reverse the syntax to copy a file from your local machine back out to the container.

If you want to access your Pod via the network, you can use the port-forward command to forward network traffic from the local machine to the Pod. This enables you to securely tunnel network traffic through to containers that might not be exposed anywhere on the public network. For example, the following command:

$ kubectl port-forward <pod-name> 8080:80

opens up a connection that forwards traffic from the local machine on port 8080 to the remote container on port 80.

NOTE

You can also use the port-forward command with services by specifying services/<service-name> instead of <pod-name>, but note that if you do port-forward to a service, the requests will only ever be forwarded to a single Pod in that service. They will not go through the service load balancer.

Event

If you want to view Kubernetes events, you can use the kubectl get events command to see a list of the latest 10 events on all objects in a given namespace:

$ kubectl get events

You can also stream events as they happen by adding --watch to the kubectl get events command. You may also wish to include -A to see events in all namespaces.

Finally, if you are interested in how your cluster is using resources, you can use the top command to see the list of resources in use by either nodes or Pods. This command:

$ kubectl top nodes

will display the total CPU and memory in use by the nodes in terms of both absolute units (e.g., cores) and percentage of available resources (e.g., total number of cores). Similarly, this command:

$ kubectl top pods

will show all Pods and their resource usage. By default it only displays Pods in the current namespace, but you can add the --all-namespaces flag to see resource usage by all Pods in the cluster.

These top commands only work if a metrics server is running in your cluster. Metrics servers are present in nearly every managed Kubernetes environment and many unmanaged environments as well. But if these commands fail, it may be because you need to install a metrics server.

Cluster Management

The kubectl tool can also be used to manage the cluster itself. The most common action people take to manage their cluster is to cordon and drain a particular node. When you cordon a node, you prevent future Pods from being scheduled onto that machine. When you drain a node, you remove any Pods that are currently running on that machine. A good example use case for these commands is removing a physical machine for repairs or upgrades. In that scenario, you can use kubectl cordon followed by kubectl drain to safely remove the machine from the cluster. Once the machine is repaired, you can use kubectl uncordon to re-enable Pod scheduling onto the node. There is no undrain command; Pods will naturally get scheduled onto the empty node as they are created. For a quick interruption to a node (e.g., a machine reboot), it is generally unnecessary to cordon or drain; it's only necessary if the machine will be out of service long enough that you want the Pods to move to a different machine.
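
A sketch of that sequence, using one of the node names from above (drain typically needs the extra flags shown, to skip DaemonSet-managed Pods and evict Pods with emptyDir volumes):

kubectl cordon ip-192-168-17-14.us-west-2.compute.internal
kubectl drain ip-192-168-17-14.us-west-2.compute.internal --ignore-daemonsets --delete-emptydir-data
# ...repair or upgrade the machine, then:
kubectl uncordon ip-192-168-17-14.us-west-2.compute.internal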

Command Autocompletion

kubectl supports integration with your shell to enable tab completion for both commands and resources. Depending on your environment, you may need to install the bash-completion package before you activate command autocompletion. You can do this using the appropriate package manager:

macOS

$ brew install bash-completion

CentOS/Red Hat

$ yum install bash-completion

Debian/Ubuntu

$ apt-get install bash-completion

When installing on macOS, make sure to follow the instructions from brew about how to activate tab completion using your ${HOME}/.bash_profile.

Once bash-completion is installed, you can temporarily activate it for your terminal using:

$ source <(kubectl completion bash)

To make this automatic for every terminal, add it to your ${HOME}/.bashrc file:

$ echo "source <(kubectl completion bash)" >> ${HOME}/.bashrc If you use zsh, you can find similar instructions online.

Alternative Ways of Viewing Your Cluster

In addition to kubectl, there are other tools for interacting with your Kubernetes cluster. For example, there are plug-ins for several editors that integrate Kubernetes and the editor environment, including:

Visual Studio Code
IntelliJ
Eclipse

If you are using a managed Kubernetes service, most of them also feature a graphical interface to Kubernetes integrated into their web-based user experience. Managed Kubernetes in the public cloud also integrates with sophisticated monitoring tools that can help you gain insights into how your applications are running.

There are also several open source graphical interfaces for Kubernetes including Rancher Dashboard and the Headlamp project.

Summary

kubectl is a powerful tool for managing your applications in your Kubernetes cluster. This chapter has illustrated many of the common uses for the tool, but kubectl has a great deal of built-in help available. You can start viewing this help with:

$ kubectl help

or:

$ kubectl help <command-name>

solomem commented 1 year ago

EKS Tutorials

1. Amazon EKS Explained

00:00 Managed control plane
01:51 Data plane management
03:03 Managed node groups
04:00 Fargate
04:54 Certified Kubernetes Conformant
05:25 Operating EKS
07:15 Integration with AWS


2. What is a container? (2022)

0:00 Intro
0:35 Microservices
5:05 Standardization
10:56 Efficiency


3. What is Kubernetes? | 2022

00:00 Intro to containers
01:28 Container microservices example
02:13 Kubernetes scheduling
03:23 Self-healing and auto-scaling
05:00 Networking and services
06:38 Open-source
07:15 Kubernetes in AWS

Challenges:


4. Kubernetes Pods, ReplicaSets, and Deployments in 5 Minutes

0:00 Intro to Pods
0:45 Pod Configuration
2:10 ReplicaSets and Deployments
3:50 The Controller/Control Loop

Think of the microservice architecture when creating a Pod. Each microservice within the broader set of microservices could be powered by a Pod. You won't necessarily want to put multiple microservices into the same Pod, because you want to keep that modularity and the ability to scale each component independently of the others.

- Kubernetes manifest file

This tells Kubernetes how to deploy something. Examples:

1) Pod

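The screenshot presumably showed a Pod manifest; a minimal illustrative one (names and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: nginx:1.25   # placeholder image
      ports:
        - containerPort: 80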

A Pod is not usually deployed directly. Instead we use a ReplicaSet.

2) ReplicaSet

The ReplicaSet finds Pods through its matchLabels selector and manages the Pods that belong to the Deployment (see the sketch below).
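
A minimal illustrative Deployment showing that label matching (names and image are placeholders): the selector.matchLabels must match the Pod template's labels, and the resulting ReplicaSet manages Pods carrying those labels.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app          # must match the template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25   # placeholder image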

3) Scale and health


kubernetes control loop



5. Karpenter for Kubernetes | Karpenter vs Cluster Autoscaler

Karpenter is a compute provisioning and management solution, which also acts as a cluster autoscaler. In this video, learn how open-source Karpenter works, and how it differs from Kubernetes Cluster Autoscaler.

Karpenter itself can also run on Fargate.

0:00 Autoscaling Basics
2:23 Cluster Autoscaler
3:27 Karpenter
4:41 Constraints and Limits
5:10 Workload Consolidation
6:19 Automatically update nodes


solomem commented 1 year ago

What is GitOps? | GitOps vs DevOps

Flux

solomem commented 1 year ago

EKS Blueprints for Terraform Explained

Implicit dependency

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"  # module source is illustrative
  # ...
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"  # module source is illustrative
  vpc_id = module.vpc.vpc_id  # referencing the vpc module's output creates the implicit dependency
  # ...
}

Crossplane on Kubernetes Explained