solomem / DevOps


AKS Tutorial #9

Open solomem opened 1 year ago

solomem commented 1 year ago

Pre-requisite: AWS Account and CLI setup

  1. Create a user, with an assume-role policy attached
  2. Create a role (in the AWS account), give it an external ID, and attach AdministratorAccess. Role name: admin-access-explore-aks
  3. Copy the role ARN: arn:aws:iam::197605493344:role/admin-access-explore-aks
  4. Use the AWS CLI to assume the role: aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 --profile solomemaks
  5. Export the credentials:
    • WINDOWS:

      setx AWS_ACCESS_KEY_ID ASIAS4ARURJQGRZSZHDZ
      setx AWS_SECRET_ACCESS_KEY j/6c2VYbagqneYoOMqofPYw58bxHi0I2D1p4w4OM
      setx AWS_SESSION_TOKEN IQoJb3JpZ2luX2VjEF8aCXVzLXdlc3QtMiJGMEQCIG0kK7Dsl630LYhtEfLNvMTaqALWNS1cP0yfoQoirRBVAiBqHLK+FDrPw5ce3EPE9wO5P7j//C2ecWt0SSxQDgBNmCqfAgiY//////////8BEAMaDDE5NzYwNTQ5MzM0NCIM5gOmgA60ezW6o2PSKvMBoJkCJlwV1NvCXeA+mChmvJbeBX1dOAmzCXS72NAsFl3o8STv68QypSihoV0jM8INKVA6sbILYoJYNY1Si0fBuU2YpYl6LE6LbGCPYhZ/kJE5iJ4dYXBqIG9WydvPakm81nC7xbUWmP1s2jcQxikJxAEEy8gNdsX+wLYEJym2gtsaBHLtkhqKCEg1QxBAxjnZhxx0b8d7j9IC1g+JeGYwAdBadcCw6YM/Z6+u/63n82ltDos3OJ6bjQeBK1itK8rnZ9b3RcolfsI8qp55dUGKND5hLIvmW6e50cav9DeDdsgoOepu41j1zqZ1U5KsjlgDmJD2MKqjjp0GOp4BaflY80LLE82J1WRhUu+ChupDNW1h6mq4Fo2Wb7HLNxVf7MfYfAbfiVS2sWaWqhRSmW/QtlT0AE8fZH1TwthbMzJW/FPxoXCmHzQ+S445JiYiFgY7U8a0OCxa9w8LbfU02tlPpRBfY4zmS9WDGvVuh80Zm5eZ+KifnqbyDBal5C0AkwrUF2x+sRj0L/REKs69pO69wwb5W3gCbyceHmw=

  • LINUX (preferred!):

      export AWS_ACCESS_KEY_ID=ASIAS4ARURJQGRZSZHDZ
      export AWS_SECRET_ACCESS_KEY=j/6c2VYbagqneYoOMqofPYw58bxHi0I2D1p4w4OM
      export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEF8aCXVzLXdlc3QtMiJGMEQCIG0kK7Dsl630LYhtEfLNvMTaqALWNS1cP0yfoQoirRBVAiBqHLK+FDrPw5ce3EPE9wO5P7j//C2ecWt0SSxQDgBNmCqfAgiY//////////8BEAMaDDE5NzYwNTQ5MzM0NCIM5gOmgA60ezW6o2PSKvMBoJkCJlwV1NvCXeA+mChmvJbeBX1dOAmzCXS72NAsFl3o8STv68QypSihoV0jM8INKVA6sbILYoJYNY1Si0fBuU2YpYl6LE6LbGCPYhZ/kJE5iJ4dYXBqIG9WydvPakm81nC7xbUWmP1s2jcQxikJxAEEy8gNdsX+wLYEJym2gtsaBHLtkhqKCEg1QxBAxjnZhxx0b8d7j9IC1g+JeGYwAdBadcCw6YM/Z6+u/63n82ltDos3OJ6bjQeBK1itK8rnZ9b3RcolfsI8qp55dUGKND5hLIvmW6e50cav9DeDdsgoOepu41j1zqZ1U5KsjlgDmJD2MKqjjp0GOp4BaflY80LLE82J1WRhUu+ChupDNW1h6mq4Fo2Wb7HLNxVf7MfYfAbfiVS2sWaWqhRSmW/QtlT0AE8fZH1TwthbMzJW/FPxoXCmHzQ+S445JiYiFgY7U8a0OCxa9w8LbfU02tlPpRBfY4zmS9WDGvVuh80Zm5eZ+KifnqbyDBal5C0AkwrUF2x+sRj0L/REKs69pO69wwb5W3gCbyceHmw=

Get the AWS credentials


# Grep each credential out of the assume-role JSON output; ${bb::-1} strips the trailing comma
bb=$(aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 | grep 'AccessKeyId' | cut -d":" -f2-)
export AWS_ACCESS_KEY_ID=${bb::-1}
bb=$(aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 | grep 'SecretAccessKey' | cut -d":" -f2-)
export AWS_SECRET_ACCESS_KEY=${bb::-1}
bb=$(aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 | grep 'SessionToken' | cut -d":" -f2-)
export AWS_SESSION_TOKEN=${bb::-1}
echo $AWS_ACCESS_KEY_ID
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_SESSION_TOKEN
aws ec2 describe-instances
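
A less brittle variant (a sketch, assuming the same role and external ID as above): the script above calls assume-role three times, so each exported value comes from a different session. A single call with the CLI's --query flag avoids that and the manual JSON trimming:

```sh
# One assume-role call; --query extracts the three fields tab-separated
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks \
  --role-session-name MySession --external-id explore1770 \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
aws sts get-caller-identity   # verify the assumed role is active
```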
solomem commented 1 year ago

Image

solomem commented 1 year ago

Docker

Image

Dockerfile

Image

Docker Run

Docker then uses containerd to create containers from those images when you run docker run in that same terminal window.

Kubernetes

Image

Pods: the smallest unit of compute; scheduled onto kubelets (nodes)

Image

The smallest unit of compute in Kubernetes is called the pod. Kubernetes pods group related containers into a logical unit. Containers inside a Kubernetes pod share the same IP address and move "together" from host to host. Pods get scheduled onto Kubernetes nodes via kubelets, an example of which you can see in the center of the diagram in the dashed box. Nodes are machines that host pods and their containers and provide the network that connects them. While you can create pods directly, you won't want to do this most of the time. Instead, you'll want to create them through resources called deployments. Kubernetes deployments provide a desired state for the pods in an application.

Deployments: scale and manage Pods

With deployments, you can easily scale the number of pods as needed by your application, either manually or automatically. You can also control how new pods join your deployment, which is really helpful when releasing new versions of applications.
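
For example (a sketch; the deployment name is the one used later in these notes):

```sh
# Manual scaling: set the replica count directly
kubectl scale deployment/explorecalifornia.com --replicas=3

# Automatic scaling: let Kubernetes adjust replicas based on CPU usage
kubectl autoscale deployment/explorecalifornia.com --min=1 --max=5 --cpu-percent=80
```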

Services: makes Pods discoverable

Pods within deployments otherwise are discovered inside of Kubernetes networks through the resources called services. Services provide a single IP address and a DNS record for a group of related pods. Like pods, IP addresses for services can be reached by any container in a Kubernetes cluster. However, unlike pods, services give developers more control over when and how those IP addresses become available. Kubernetes can even attach load balancers provided by cloud providers to these services, which makes it even easier for application pods to be reached throughout your cluster and from the outside world.

Image

Ingresses and ingress controllers: make your app internet accessible

Speaking of routing applications on Kubernetes from the outside world, Kubernetes provides really convenient objects called ingresses for exactly this. Ingresses are HTTP reverse proxies that allow the outside world to reach Kubernetes pods through one or more routing rules. These rules are created and managed through things called ingress controllers.


Example:

Image

Here's an example of how powerful Kubernetes can be in this regard. Let's say we wanted to create a forum for Explore California at explorecalifornia.com/forum so that customers can discuss cool things they've experienced thanks to Explore California. The software we use to power the forum will probably be completely different from our main website. Pre-Kubernetes, you'd have to configure a web server with two vhosts (virtual hosts) and server-side URL rewrite rules to handle this. It's possible, but it's a pain. With Kubernetes, our ingress can use those same rewrite rules to point to two completely separate pods, like you can see here above the ingress box. This is really nice when you actually use it, and we will. There are many more objects that Kubernetes uses to achieve planetary-scale container orchestration, but to save time, we won't cover them all. You can learn about the wide world of Kubernetes objects by reading the Kubernetes documentation at https://kubernetes.io.

solomem commented 1 year ago

nginx

The nginx.conf already provides the entrypoint, so we do not have to specify one in the Dockerfile.
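
A minimal sketch of what such a Dockerfile can look like (paths assumed): the nginx base image already declares the command that starts nginx, so the Dockerfile only copies in configuration and content.

```dockerfile
FROM nginx:alpine
# the base image's CMD/ENTRYPOINT already starts nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY website/ /usr/share/nginx/html/
```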

solomem commented 1 year ago

Getting the Docker image ready

docker build -t explorecalifornia.com .
docker run --rm --name explorecalifornia.com -p 5000:80 explorecalifornia.com

solomem commented 1 year ago

Makefile

run_website:
    docker build -t explorecalifornia.com . && \
        docker run -p 5000:80 -d --name explorecalifornia.com --rm explorecalifornia.com



------

## run make
`make run_website`

## `.PHONY` targets ensure that Make rules whose names do not correspond to real files always execute, regardless of whether a file with that name exists.
solomem commented 1 year ago

Kind: Kubernetes in docker

Docker is the only application needed to run this.

Image

Image


install kind:

https://kind.sigs.k8s.io/docs/user/quick-start/#installation

On Linux, download the kind binary with curl: curl --location --output ./kind https:......


Example Makefile

#!/usr/bin/env make

.PHONY: run_website install_kind

run_website:
    docker build -t explorecalifornia.com . && \
        docker run -p 5000:80 -d --name explorecalifornia.com --rm explorecalifornia.com

install_kind:
    curl --location -o ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-windows-amd64 && \
      ./kind --version

Run: make install_kind


Create kind cluster

kind create cluster --name <name_of_cluster>

Image

delete kind cluster

./kind delete cluster --name explorecalifornia.com

kubectl

$ kubectl get nodes
NAME                                  STATUS   ROLES           AGE     VERSION
explorecalifornia.com-control-plane   Ready    control-plane   3m25s   v1.25.3

namespace

Resources in Kubernetes can be separated into things called namespaces. Namespaces allow you to organize, categorize, and secure Kubernetes resources from one another. This is useful if you have many different suites of applications running inside a single cluster. Let's see if the pods running inside Kubernetes' "system" namespace are healthy. To do this, we run kubectl get pods and append -n (short for namespace) followed by kube-system. This gives us a list of pods running inside the kube-system namespace. Since they're all in the Running state, all of the pods that kind needs are happy and healthy.

Check the kube-system namespace

$ kubectl get pods --namespace kube-system
NAME                                                          READY   STATUS    RESTARTS   AGE
coredns-565d847f94-4jbvv                                      1/1     Running   0          6m55s
coredns-565d847f94-dgddz                                      1/1     Running   0          6m55s        
etcd-explorecalifornia.com-control-plane                      1/1     Running   0          7m8s
kindnet-6xqsb                                                 1/1     Running   0          6m55s        
kube-apiserver-explorecalifornia.com-control-plane            1/1     Running   0          7m8s
kube-controller-manager-explorecalifornia.com-control-plane   1/1     Running   0          7m8s
kube-proxy-zxbpj                                              1/1     Running   0          6m55s        
kube-scheduler-explorecalifornia.com-control-plane            1/1     Running   0          7m8s
solomem commented 1 year ago

kubectl

Image

use kubectl

$ kubectl get nodes
NAME                                  STATUS   ROLES           AGE     VERSION
explorecalifornia.com-control-plane   Ready    control-plane   3m25s   v1.25.3
Use -n <namespace> to change the namespace.
solomem commented 1 year ago

Local Docker registry

What is the relationship between kind and the Docker registry? To run local images, you must create a local Docker registry and attach it to the same Docker network as your kind cluster.

Image

Image

Image

Create a container from registry:2 image, and run it:

docker run --name local-registry -d --restart=always -p 5000:5000 registry:2

To test that the registry is working: curl --location http://localhost:5000/v2

Earlier we mentioned that the FROM image used by our Dockerfile is sourced from a registry of images called Docker Hub, and that Docker images can also be stored in private registries. When we run our app inside of Kubernetes, Kubernetes expects our application to come with a pre-built Docker image. We've been building our Docker image locally to test our website inside Docker, but Kubernetes doesn't know about the existence of local images, and it doesn't have a built-in way of taking Dockerfiles and turning them into Docker images for you.

One way of working around this is to push our local image to Docker Hub. Anyone can do it with a free Docker Hub account. However, there are two problems with this. First, Docker Hub isn't always the best place to put images: some companies, like Explore California, want to keep their images private. The last thing we want is for competitors to learn even more about our cool treks. Second, Docker caps the number of image pulls from Docker Hub per hour, which would hinder our testing significantly.

An easier workaround is to create our own Docker registry locally. Since Docker makes the image for running Docker registries publicly available, we can easily use it to create a registry locally that we can push our image into. Moreover, we can configure kind to use this registry so that Kubernetes becomes aware of our local Docker images. Enough talking, let's do it. Let's keep our Makefile open in a separate window so we can update it as we run these commands.

Starting the registry is really easy: docker run --name local-registry -d --restart=always -p 5000:5000 registry:2. What does this command actually do? First, it creates a container named local-registry. The -d flag sends that container to the background so that we're not blocked by any logs. Setting --restart=always means that if the registry breaks for any reason, Docker will just restart the container. Then -p 5000:5000 maps the container's port 5000 to port 5000 on our host. Finally, we use the registry image from Docker Hub, pinned to version 2 of the Docker registry by separating the image name from the tag with a colon.

Now let's test that the Docker registry is actually working. Using curl with http://localhost:5000/v2 returns a "Moved Permanently" message, which is a perfect example of how --location helps here. Re-running the command with --location right before the URL returns a blank JSON object, which means we don't have any images in our registry just yet, but it does mean that the registry is working.

Makefile recipe to create the Docker registry container (skipped if it already exists):

    if docker ps | grep -q 'local-registry'; \
    then echo "---> local-registry already created; skipping"; \
    else docker run --name local-registry -d --restart=always -p 5000:5000 registry:2; \
    fi

https://hub.docker.com/_/registry

Image

solomem commented 1 year ago

kind configuration file

kind configuration files are used to configure the various sub-components inside of our kind clusters. The configuration file we're going to write tells the container runtime inside our kind cluster about the new registry that we just created.

  1. This uses the Kubernetes manifest format. Kubernetes manifests are files used to install and configure objects within a Kubernetes cluster. Every Kubernetes manifest starts by describing the kind of object we're trying to install or configure; this is the kind line on line one.

  2. kubeadm: kind uses a tool called kubeadm to create the cluster that we'll use here. kubeadm also configures the container runtime that Kubernetes uses to create, manage, and delete containers. kind uses containerd for its container runtime. containerdConfigPatches on line three provides configuration changes for containerd that kubeadm needs to be aware of when creating our cluster. In our case, we want to tell containerd: "Hey containerd, we created a Docker registry called local-registry; you can reach it at localhost on port 5000. You should really know about it." That's done on lines five and six.

kind_config.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://local-registry:5000"]
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

kind_configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"

kubernetes API reference

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/


  1. Config map: Next, we'll need to provide another piece of configuration to kind to tell it that we're using a local registry. This is done through something called a ConfigMap, a native Kubernetes object. ConfigMaps in Kubernetes are used to provide configuration data to pods running inside of Kubernetes.

configmap.yaml is identical to the kind_configmap.yaml shown above.
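
As a generic illustration of that last point (hypothetical names, not part of the kind setup): a pod can consume a ConfigMap key as an environment variable.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: "hello from a ConfigMap"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "echo $GREETING && sleep 3600"]
    env:
    - name: GREETING              # injected from the ConfigMap above
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: greeting
```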
solomem commented 1 year ago

curl

You've executed the following command:

> curl --output foo.txt https://example.com/document.txt

Which of these describes what happens next most accurately?

document.txt will be written to foo.txt, but only if https://example.com/document.txt does not involve any redirects (without --location, curl saves whatever the server returns, including a redirect page).

Most accurate: document.txt at example.com will get written to foo.txt because of the --output flag, but you'll need to make sure that --location or -L is also specified to account for any redirects.

solomem commented 1 year ago

containerd

containerd is the runtime that Kubernetes (and, by extension, kind) uses to create and manage containers. runc is what actually starts containers.

solomem commented 1 year ago

kind info

While Kind creates Kubernetes clusters that could be used for actual Kubernetes workloads, it is a stripped-down Kubernetes distribution that is not suitable for production workloads.

Kind creates Kubernetes clusters within Docker. Kind is not suitable for creating bare-metal Kubernetes clusters.

Kind allows you to create Kubernetes clusters within Docker for quick testing and prototyping.

solomem commented 1 year ago

Manifest

Create the deployment. Deployments make sure that the number of pods we want hosting our website stays up and running at all times.

Kubernetes creates resources through YAML files called Kubernetes manifests. You can think of them as a description of what you're looking to create and what that thing should look like.

Create a blank manifest:

kubectl create deployment --dry-run=client|server --image localhost:5000/<image_name> <name_of_deployment> --output=yaml
kubectl create deployment --dry-run=client --image localhost:5000/explorecalifornia.com explorecalifornia.com --output=yaml > deployment.yaml
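
The second command writes roughly the following to deployment.yaml (a trimmed sketch; kubectl's actual output also includes empty fields such as creationTimestamp, strategy, and status):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: explorecalifornia.com
  name: explorecalifornia.com
spec:
  replicas: 1
  selector:
    matchLabels:
      app: explorecalifornia.com
  template:
    metadata:
      labels:
        app: explorecalifornia.com
    spec:
      containers:
      - image: localhost:5000/explorecalifornia.com
        name: explorecalifornia.com
```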

solomem commented 1 year ago

how to deploy (all in one)?

Image

  1. Disable IIS in Windows (it otherwise occupies port 80)

  2. Create the Docker registry and the kind cluster: make create_kind_cluster_step1

  3. Tag the Docker image and push it to the local Docker registry: docker tag explorecalifornia.com localhost:5000/explorecalifornia.com, then docker push localhost:5000/explorecalifornia.com

  4. Create the network between kind and the Docker registry: docker network connect kind local-registry (make create_kind_cluster_step2)

  5. Create the deployment: kubectl apply -f deployment.yaml

  6. Verify the deployment; use the label selector to see all the pods associated with the deployment:

    $ kubectl get pods -l app=explorecalifornia.com
    NAME                                     READY   STATUS    RESTARTS   AGE
    explorecalifornia.com-64bc45ddc7-2nv5s   1/1     Running   0          2m39s
  7. Port forwarding. This maps a port on your machine to a port inside the pods of your deployment (outside_port:inside_port):

    $ kubectl port-forward deployment/explorecalifornia.com 8080:80
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
  8. Create the service: kubectl create service clusterip --dry-run=client --tcp=80:80 explorecalifornia.com --output=yaml > service.yaml, then kubectl apply -f service.yaml

  9. Double-check: kubectl get all -l app=explorecalifornia.com

  10. Port forward to the service: kubectl port-forward service/explorecalifornia-svc 8080:80

  11. Create the ingress manifest: kubectl create ingress explorecalifornia.com --rule="explorecalifornia.com/=explorecalifornia-svc:80" --dry-run=client --output=yaml > ingress.yaml

  12. Install the kind ingress controller: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

  13. Verify the ingress controller: kubectl get all -n ingress-nginx

  14. Deploy the ingress: kubectl apply -f ingress.yaml

solomem commented 1 year ago

Service

To allow users to access explorecalifornia.com without having to port-forward into the cluster every time they want to book a trip, we have to map explorecalifornia.com to a single point of entry: a Service in Kubernetes.

service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: explorecalifornia.com
  name: explorecalifornia-svc
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: explorecalifornia.com
  type: ClusterIP
status:
  loadBalancer: {}

Running kubectl create service clusterip --dry-run=client --tcp=80:80 explorecalifornia.com --output=yaml > service.yaml and then kubectl apply -f service.yaml produces this error:

$ kubectl apply -f service.yaml
The Service "explorecalifornia.com" is invalid: metadata.name: Invalid value: "explorecalifornia.com": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?')

We have to change metadata.name to explorecalifornia-svc.

run: kubectl apply -f service.yaml

run to check:

$ kubectl get all -l app=explorecalifornia.com
NAME                                         READY   STATUS    RESTARTS   AGE
pod/explorecalifornia.com-64bc45ddc7-dj9c8   1/1     Running   0          38m

NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/explorecalifornia-svc   ClusterIP   10.96.85.93   <none>        80/TCP    35m

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/explorecalifornia.com   1/1     1            1           38m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/explorecalifornia.com-64bc45ddc7   1         1         1       38m

The ClusterIP service is created, and port 80 is mapped to it.

Forward the ports again, this time to the service:

$ kubectl port-forward service/explorecalifornia-svc 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
solomem commented 1 year ago

Ingress

In Kubernetes, an Ingress is a reverse proxy that enables external access to other Kubernetes resources. If you've ever used an application load balancer like the AWS Application Load Balancer (very appropriately named), Azure API Gateway, or Google's Cloud Load Balancer, or a "bare metal" load balancer like F5's BIG-IP, then the Kubernetes Ingress will probably be familiar to you.

What we're doing with Explore California and nginx is actually very similar to how Ingresses work here.

Image

Here's a great diagram from the Kubernetes documentation that explains how Ingresses work.

  1. First, you provide your Ingress with a series of routing rules, which are summarized here in the middle. Like our make rules earlier, routing rules perform an action when given a name. In the case of our Ingress, instead of a make target, we provide an HTTP path like / or /shopping.
  2. Also in the case of our Ingress, the action is to send traffic to a port of a specified Kubernetes service. Routing rules allow us to send requests to multiple different pods without having to create separate DNS records for them, which is really nice.

Image

Let's use a slightly tweaked version of this image to explain what I mean. Say Explore California wanted to move both our website and our booking service, a separate application, into Kubernetes. Instead of maintaining a separate, complicated nginx configuration to send requests between the two services, we can create an Ingress with two routing rules that looks something like what you see here on the right. This isn't the exact syntax we'll use in our Ingress, but I hope the concept and the simplicity are clear.
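
A hedged sketch of that two-rule idea (the booking service name and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: explorecalifornia.com
spec:
  rules:
  - host: explorecalifornia.com
    http:
      paths:
      - path: /              # main website
        pathType: Prefix
        backend:
          service:
            name: explorecalifornia-svc
            port:
              number: 80
      - path: /bookings      # separate booking app behind the same host
        pathType: Prefix
        backend:
          service:
            name: explorecalifornia-bookings-svc
            port:
              number: 80
```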

The rules in our Ingress are processed by another Kubernetes object called an Ingress controller. The Ingress controller is essentially an instance of nginx doing the complicated routing for us.

A Kubernetes cluster can have many different kinds of Ingress controllers, each with its own advantages and disadvantages.

Paths in Ingress rules don't have to be exact, either. You can specify rules with many different kinds of patterns; even regexes (regular expressions) work.

We'll use prefix matching rules for our Ingress to keep things simple. Since we're already using nginx, we're going to go ahead and use the nginx Ingress controller for Explore California. That's a really brief overview of how Ingresses work. Now that we know a little bit more about them, let's go ahead and create one.


To get the help page: kubectl create ingress --help

Create ingress rule: kubectl create ingress explorecalifornia.com --rule="explorecalifornia.com/=explorecalifornia-svc:80" --dry-run=client --output=yaml > ingress.yaml

In the generated ingress.yaml, change pathType to Prefix and remove the status field.


Ingress controller in kind

https://kind.sigs.k8s.io/docs/user/ingress/#setting-up-an-ingress-controller

Image

Note: make sure Zscaler is off!

$ kubectl get all -n ingress-nginx
NAME                                           READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-4x92q       0/1     Completed   0          80s
pod/ingress-nginx-admission-patch-7wkbh        0/1     Completed   0          80s
pod/ingress-nginx-controller-6bccc5966-ww7tw   1/1     Running     0          80s

NAME                                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.95.135   <none>        80:30472/TCP,443:31539/TCP   80s
service/ingress-nginx-controller-admission   ClusterIP   10.96.97.117   <none>        443/TCP                      80s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           80s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-6bccc5966   1         1         1       80s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           14s        80s
job.batch/ingress-nginx-admission-patch    1/1           15s        80s

Install the ingress controller:

Image

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

deploy the ingress

ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: explorecalifornia.com
spec:
  rules:
  - host: explorecalifornia.com
    http:
      paths:
      - backend:
          service:
            name: explorecalifornia-svc
            port:
              number: 80
        path: /
        pathType: Prefix

When we go to host explorecalifornia.com at /, all the traffic goes to explorecalifornia-svc.

Create local DNS mapping

Image
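
The screenshot above covers the local DNS mapping; concretely, it is a hosts-file entry pointing the site's hostname at localhost, where kind's extraPortMappings expose ports 80/443 (a sketch; on Windows the file is C:\Windows\System32\drivers\etc\hosts):

```
# /etc/hosts
127.0.0.1 explorecalifornia.com
```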

apply ingress rule

$ kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/explorecalifornia.com created

Finally

visit: explorecalifornia.com

solomem commented 1 year ago

Common Issues

/etc/hosts

Why must one modify /etc/hosts to test an Ingress locally when using the NGINX ingress controller?

The NGINX ingress controller matches on the "host" provided within the Ingress. Consequently, connecting from "localhost" would not be accepted by the Ingress.

Service name:

Example:

rules:
- host: explorecalifornia.com
  paths:
  - backend:
      service:
        name: explorecaliforniacheckout.com
        port: 5000
    path: '/checkout'
    pathType: Prefix
  - backend:
      service:
        name: explorecaliforniaanalytics.com
        port: 5000
    path: '/clicktrack'
    pathType: Prefix

Service names cannot look like DNS records: they must be valid DNS-1035 labels, so dots are not allowed. Rename them so that they do not look like domain names:

explorecaliforniacheckout.com > explorecaliforniacheckout-svc

test on the deployment

You're creating a website. Its Pod will be created through Deployment "my_deployment". What is the easiest way to test that it is working locally inside of Kubernetes?

Use kubectl port-forward deployment/my_deployment 8080:80. Open a web browser. Visit localhost:8080.

deployment manifest

According to the Kubernetes API Reference Docs, Deployment inside of apps/v1 needs to look like this:

apiVersion: [VERSION] 
kind: Deployment 
metadata: [ObjectMeta object] 
spec: [DeploymentSpec] 

Furthermore, the ObjectMeta object does not have a key for "spec". Finally, we can see that "spec" is actually its own key.

Assuming that your network connection is unreliable, which of these statements is most correct?

kubectl apply --dry-run=client will generate a manifest YAML or JSON based on the API information stored on the client. While --dry-run=server is more accurate since it takes your Kubernetes server into account, it is not desirable in an environment with an unreliable network.
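
To see the difference concretely (a sketch with a throwaway deployment name):

```sh
# Rendered locally from the client's built-in API knowledge; works offline
kubectl create deployment test --image=nginx --dry-run=client --output=yaml

# Validated by the API server but never persisted; requires a reachable cluster
kubectl create deployment test --image=nginx --dry-run=server --output=yaml
```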

solomem commented 1 year ago

Helm

Image

Image

Image

install Helm

Windows: choco install kubernetes-helm

The install of kubernetes-helm was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'

Helm custom functions:

https://helm.sh/docs/chart_template_guide/function_list/

{{ .Values.foo | lower}}
{{ .Values.foo | upper }}
{{ .Values.foo | title }}

...

Semantic versioning:

MAJOR.MINOR.PATCH, e.g. 1.0.0


Create KIND cluster with ingress rules.

  1. What is needed before running helm? A running ingress controller; the cluster should already show:

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.96.125.239   <none>        80:31441/TCP,443:31870/TCP   57s
service/ingress-nginx-controller-admission   ClusterIP   10.96.94.49     <none>        443/TCP                      57s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           58s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-6bccc5966   1         1         1       58s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           14s        58s
job.batch/ingress-nginx-admission-patch    1/1           15s        58s


## Create Helm Template:
1. Create directory `chart` in the project dir
2. Create the `Chart.yaml` file, with a capital `C`
3. Create the `values.yaml` file, with a lowercase `v` (a minimal sketch of both files follows this list)
4. Validate the helm chart:
`helm show all ./chart`
5. Create `template.yaml`
6. Move deployment, service and ingress yamls to `chart/templates`:
`mv {deployment,service,ingress}.yaml ./chart/templates`
7. Render the templates:
- run to validate that your values were properly accepted; helm template is used to render the templates inside of a chart.
`helm template ./chart`
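
A minimal sketch of the two chart files (the values keys match the ones referenced by the templates later in this thread, e.g. the old ingress.yaml; everything else is assumed):

```yaml
# chart/Chart.yaml
apiVersion: v2
name: explore-california
description: Chart for the Explore California website
version: 0.1.0
```

```yaml
# chart/values.yaml (hypothetical values consumed via {{ .Values.* }})
appName: explorecalifornia.com
serviceName: explorecalifornia-svc
serviceAddress: explorecalifornia.com
sourcePort: 80
```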

![Image](https://user-images.githubusercontent.com/44691256/209470551-f30c9495-55b7-4433-bd29-d0889d579db6.png)

8. Deploy the helm chart
- **Helm upgrade is preferred over helm install when we want to install an application over itself through multiple revisions.**
- Helm install only allows you to install an application once. To upgrade it, you would need to run "helm uninstall" first. Helm upgrade --install fixes this problem.

`helm upgrade --atomic --install explore-california-website ./chart`

`helm uninstall explore-california-website`
`helm install explore-california-website ./chart`

$ helm status explore-california-website
NAME: explore-california-website
LAST DEPLOYED: Mon Dec 26 23:13:00 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None


Once the chart is deployed:

$ kubectl get all
NAME                                              READY   STATUS    RESTARTS   AGE
pod/explore-california-website-8587cd96b5-6t2nv   1/1     Running   0          4h29m

NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/explorecalifornia-svc   ClusterIP   10.96.14.127   <none>        80/TCP    4h29m
service/kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP   5h19m

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/explore-california-website   1/1     1            1           4h29m

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/explore-california-website-8587cd96b5   1         1         1       4h29m

solomem commented 1 year ago

kubectl delete with selector

kubectl delete all -l app=explorecalifornia.com

solomem commented 1 year ago

old ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.appName }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: {{ .Values.serviceAddress }}
    http:
      paths:
      - backend:
          service:
            name: {{ .Values.serviceName }}
            port:
              number: {{ .Values.sourcePort }}
        path: /
        pathType: Prefix
solomem commented 1 year ago

kubeconfig

Kubernetes uses a file called a **kubeconfig** to know about the clusters that it can access. By default, kubeconfigs are located in a file called config, which is located inside of the .kube directory, inside of your home directory.

vim .kube/config

Image
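
The screenshot shows the file's contents; a trimmed sketch of the structure (the server port and the embedded certificates vary per cluster):

```yaml
apiVersion: v1
kind: Config
current-context: kind-explorecalifornia.com
clusters:
- name: kind-explorecalifornia.com
  cluster:
    server: https://127.0.0.1:6443        # kind picks a local port at creation
    certificate-authority-data: <base64 CA certificate>
contexts:
- name: kind-explorecalifornia.com
  context:
    cluster: kind-explorecalifornia.com
    user: kind-explorecalifornia.com
users:
- name: kind-explorecalifornia.com
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
```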

solomem commented 1 year ago

EKS

Image

Image

change context:

kubectl config use-context kind-explorecalifornia.com

(Note: kubectl config set-context modifies a context entry; use-context is the subcommand that switches the active context.)

In EKS, the system pods supporting the kubelets/Kubernetes nodes live in the kube-system namespace.

Image

solomem commented 1 year ago

eksctl

eksctl v0.124.0 [Approved]
eksctl package files install completed. Performing other installation steps.
eksctl is going to be installed in 'C:\ProgramData\chocolatey\lib\eksctl\tools'
Downloading eksctl 64 bit from 'https://github.com/weaveworks/eksctl/releases/download/v0.124.0/eksctl_Windows_amd64.zip'
Progress: 100% - Completed download of C:\Users\Ke.Shi\AppData\Local\Temp\chocolatey\eksctl\0.124.0\eksctl_Windows_amd64.zip (30.21 MB).
Download of eksctl_Windows_amd64.zip (30.21 MB) completed.
Hashes match.
Extracting C:\Users\Ke.Shi\AppData\Local\Temp\chocolatey\eksctl\0.124.0\eksctl_Windows_amd64.zip to C:\ProgramData\chocolatey\lib\eksctl\tools...
ShimGen has successfully created a shim for eksctl.exe
The install of eksctl was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\eksctl\tools'

Chocolatey installed 1/1 packages. See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

(base) PS C:\WINDOWS\system32> eksctl version
0.124.0

solomem commented 1 year ago

ECR

registry="197605493344.dkr.ecr.us-west-2.amazonaws.com/explore-california"
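
To push the local image there (a sketch; assumes the ECR repository already exists and the AWS credentials from earlier are active):

```sh
registry="197605493344.dkr.ecr.us-west-2.amazonaws.com/explore-california"

# Authenticate Docker against the ECR registry host
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin 197605493344.dkr.ecr.us-west-2.amazonaws.com

# Tag the locally built image and push it
docker tag explorecalifornia.com "$registry:latest"
docker push "$registry:latest"
```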

solomem commented 1 year ago

Create EKS cluster

  1. Create an IAM role for the EKS cluster: in IAM, choose the EKS > EKS Cluster use case

  2. Create a dedicated VPC for the EKS cluster, using the CloudFormation template from https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html

  3. Create the EKS cluster. Cluster endpoint access options:

    • Public: the cluster endpoint (the cluster API) is accessible from outside the VPC, and worker node traffic to the endpoint also leaves the VPC
    • Public and private: the endpoint is still accessible from outside, but worker node traffic to it never leaves the VPC
    • Private: the endpoint is only accessible from within the VPC created for the cluster

After creation, note the cluster's:

  • API server endpoint
  • OpenID Connect provider URL
  • Certificate authority

  4. Install and set up the IAM authenticator and the kubectl utility:
     aws iam list-users
     aws sts get-caller-identity
     Install aws-iam-authenticator and kubectl, then:
     aws eks --region us-west-2 update-kubeconfig --name <cluster name>   (adds the cluster config to your kubeconfig)
     export KUBECONFIG=~/.kube/config
     kubectl get svc
     kubectl get nodes

  5. Create an IAM role for the EKS nodes (IAM > service EC2), and attach policies:

    • AmazonEKS_CNI_Policy
    • AmazonEKSWorkerNodePolicy
    • AmazonEC2ContainerRegistryReadOnly
  6. Create worker nodes: EKS > Compute > Create node group, then watch them come up with kubectl get node --watch

  7. Deploy the demo application (GitHub: learnitguide/kubernetes-knote.git), a 2-tier app with a frontend and MongoDB. Use the LoadBalancer type for the knote Service (a sketch follows this list):
     kubectl apply -f mongo.yaml
     kubectl apply -f knote.yaml
     kubectl get svc
     kubectl get pods -o wide
     nslookup <the service's external DNS name>   (returns multiple IP addresses)
     Now we can access the cluster from the internet.
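
A hedged sketch of what a LoadBalancer-type Service for knote might look like (port numbers assumed; check the repo's knote.yaml for the real values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: knote
spec:
  type: LoadBalancer   # asks AWS to provision an external load balancer
  selector:
    app: knote
  ports:
  - port: 80           # external port on the load balancer
    targetPort: 3000   # assumed container port for the knote app
```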

solomem commented 1 year ago

Deploying the EKS cluster step by step:

Step 1: Create S3 bucket and state prefix

solomem@AUBNEWL02519:~/devops/eks/07_03_after$ 2>/dev/null aws s3 ls "$TERRAFORM_S3_BUCKET"
                           PRE state/

Step 2: Terraform init

solomem@AUBNEWL02519:~/devops/eks/07_03_after$ docker run --rm -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" -e "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" -e "AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN:-""}" -e "AWS_REGION=$AWS_REGION" -e "TF_IN_AUTOMATION=true" -v "$HOME/.kube:/root/.kube" -v "$PWD:/work" -w /work "$TERRAFORM_DOCKER_IMAGE" init -backend-config="bucket=$TERRAFORM_S3_BUCKET" -backend-config="key=$TERRAFORM_S3_KEY"

Terraform initialized in an empty directory!

Step 3: create_cluster

Step 4: set_eks_context

Step 5: install_aws_spot_termination_handler

Step 6: install_alb_ingress_controller

Step 7: install_vpc_cni

Step 8: Update the ~/.kube/config file

aws eks update-kubeconfig --name explore-california-cluster (this ensures the kubeconfig apiVersion matches the AWS CLI version)


solomem commented 1 year ago

debug EKS deployment

Look into the terraform container:

docker run --rm -it --entrypoint /bin/bash $TERRAFORM_DOCKER_IMAGE

Run Terraform manually:

docker run --rm -it -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" \
-e "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" \
-e "AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN:-""}" \
-e "AWS_REGION=$AWS_REGION" \
-e "TF_IN_AUTOMATION=true" \
-v "$HOME/.kube:/root/.kube" \
-v "$PWD:/work" -w /work --entrypoint /bin/bash "$TERRAFORM_DOCKER_IMAGE"
terraform init \
  -backend-config="bucket=$TERRAFORM_S3_BUCKET" \
  -backend-config="key=$TERRAFORM_S3_KEY"