Docker then uses containerd to create containers from those images when you run `docker run` in that same terminal window.
The smallest unit of compute in Kubernetes is called the pod
. Kubernetes pods group related containers into a logical unit. Containers inside a Kubernetes pod share the same IP address and move together from host to host. Pods get scheduled onto Kubernetes nodes (machines running the kubelet agent), which you can see in the dashed box in the center of the diagram. Nodes
like these are machines that host pods and their containers and configure networking between them. While you can create pods directly, you won't want to do this most of the time. Instead, you'll want to create them through resources called deployments. Kubernetes deployments provide a desired state for pods in an application.
With deployments
, you can easily scale the number of pods as needed by your application either manually or automatically. You can also control how new pods join your deployment, which is really helpful while releasing new versions of applications.
Pods within deployments are discovered inside Kubernetes networks through resources called services
. Services provide a single IP address
and a DNS record for a group of related pods
. Like pods, IP addresses for services can be reached by any container in a Kubernetes cluster. However, unlike pods, services give developers more control over when and how those IP addresses become available. Kubernetes can even attach load balancers
provided by cloud providers to these services, which makes it even easier for application pods to be reached throughout your cluster and from the outside world.
Speaking of routing applications on Kubernetes from the outside world, Kubernetes provides really convenient objects called ingresses
for exactly this. Ingresses are HTTP reverse proxies
that allow the outside world to reach Kubernetes pods through one or more routing rules
. These rules are implemented by components called ingress controllers
.
Example:
Here's an example of how powerful Kubernetes can be in this regard. Let's say we wanted to create a forum for Explore California at explorecalifornia.com/forum so that customers can discuss cool things that they've experienced thanks to using Explore California. The software we use to power the forums will probably be completely different from our main website. Pre-Kubernetes, you'd have to configure a web server to create two vhosts, or virtual hosts, and use server-side URL rewrite rules to handle this. It's possible, but it's a pain. With Kubernetes, our ingress can use those same rewrite rules to point to two completely separate pods, like you can see here above the ingress box. This is really nice when you actually use it, and we will. There are many more objects that Kubernetes uses to achieve planetary-scale container orchestration, but to save time, we won't cover them all. You can learn about the wide world of Kubernetes objects by reading the Kubernetes documentation at https://kubernetes.io.
The nginx.conf
already provides the entrypoint, so we do not have to specify one in the Dockerfile.
docker build -t explorecalifornia.com .
docker run --rm --name explorecalifornia.com -p 5000:80 explorecalifornia.com
rule syntax:
target: prerequisite_a prerequisite_b prerequisite_c
	recipe (indented with a tab)
Example:
#!/usr/bin/env make
run_website:
	docker build -t explorecalifornia.com . && \
		docker run -p 5000:80 -d --name explorecalifornia.com --rm explorecalifornia.com
------
## run make
`make run_website`
## `.PHONY` targets ensure that Make rules that do not write files of the same name execute regardless of whether a file with that name exists.
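To see why that matters, here's a small shell sketch that mimics Make's staleness check. The `run_target` helper is hypothetical, not part of the Makefile; it only models the rule "skip the recipe when a file named like a non-`.PHONY` target already exists":

```shell
#!/bin/sh
# Simulate make's staleness check: a target whose name matches an existing
# file is considered "up to date" unless it is declared .PHONY.
run_target() {  # usage: run_target <target_name> <phony|not_phony>
  if [ "$2" != "phony" ] && [ -e "$1" ]; then
    echo "make: '$1' is up to date."
  else
    echo "running recipe for $1"
  fi
}

touch run_website                 # a stray file shadowing the target name
run_target run_website not_phony  # prints "make: 'run_website' is up to date."
run_target run_website phony      # prints "running recipe for run_website"
rm -f run_website
```

This is exactly the failure mode `.PHONY: run_website install_kind` guards against in the Makefile below.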
Docker is the only application needed to run this.
https://kind.sigs.k8s.io/docs/user/quick-start/#installation
On Linux, download kind:
curl --location --output ./kind https:......
Example Makefile
#!/usr/bin/env make
.PHONY: run_website install_kind
run_website:
	docker build -t explorecalifornia.com . && \
		docker run -p 5000:80 -d --name explorecalifornia.com --rm explorecalifornia.com
install_kind:
	curl --location -o ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.17.0/kind-windows-amd64 && \
		./kind --version
run
make install_kind
kind create cluster --name <name_of_cluster>
./kind delete cluster --name explorecalifornia.com
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
explorecalifornia.com-control-plane Ready control-plane 3m25s v1.25.3
Resources in Kubernetes can be separated into things called namespaces. Namespaces allow you to organize, categorize, and secure Kubernetes resources from each other. This is useful if you have many different suites of applications running inside of a single cluster. Let's see if the pods running inside the Kubernetes "system" namespace are healthy. To do this we can run `kubectl get pods` and append `-n`, which is short for `--namespace`, followed by `kube-system`. This gives us a list of pods that are running inside of the kube-system namespace. Since they're all in the Running state, all of the pods that kind needs are happy and healthy.
kube-system namespace:
$ kubectl get pods --namespace kube-system
NAME READY STATUS RESTARTS AGE
coredns-565d847f94-4jbvv 1/1 Running 0 6m55s
coredns-565d847f94-dgddz 1/1 Running 0 6m55s
etcd-explorecalifornia.com-control-plane 1/1 Running 0 7m8s
kindnet-6xqsb 1/1 Running 0 6m55s
kube-apiserver-explorecalifornia.com-control-plane 1/1 Running 0 7m8s
kube-controller-manager-explorecalifornia.com-control-plane 1/1 Running 0 7m8s
kube-proxy-zxbpj 1/1 Running 0 6m55s
kube-scheduler-explorecalifornia.com-control-plane 1/1 Running 0 7m8s
`-n` is short for `--namespace`
The relationship between kind and the Docker registry: to run local images, you must create a local Docker registry and connect it to the same Docker network as your kind cluster.
Pull the registry:2
image, and run it:
docker run --name local-registry -d --restart=always -p 5000:5000 registry:2
To test that the registry is working:
curl --location http://localhost:5000/v2
Earlier we mentioned that the FROM image used by our Dockerfile is sourced from a registry of images called Docker Hub. I also mentioned that Docker images can be stored in private registries. When we run our app inside of Kubernetes, Kubernetes is going to expect that our application comes with a pre-built Docker image. While we were building our Docker image locally to test our website inside of Docker, Kubernetes doesn't know about the existence of local images. Additionally, Kubernetes doesn't have a built-in way of taking Dockerfiles and turning them into Docker images for you. One way of working around this is by pushing our local image into Docker Hub. Anyone can do it as long as they have a Docker Hub account, which is free. However, there are two problems with this. First, Docker Hub isn't always the best place to put images. Some companies, like Explore California, want to keep their images private. The last thing we want is for a competitor to learn even more about our cool treks. Secondly, Docker has restrictions in place that cap the number of Docker image pulls from Docker Hub per hour. This would hinder our testing significantly. An easier way of working around this is by creating our own Docker registry locally. Since Docker makes the image for running Docker registries publicly available, we can easily use that to create one locally that we can push our image into. Moreover, we can configure kind to use this registry so that Kubernetes becomes aware of our local Docker images. Enough talking, let's do it. Let's keep our Makefile open in a separate window so that we can update it as we run these commands. Starting the registry is actually really easy. All you have to do is run this command: docker run --name local-registry -d --restart=always -p 5000:5000 registry:2. So what is this command actually doing? Well, let's take a closer look.
The first thing we do is create a container named local-registry. Then we send that container to the background with -d so that we're not blocked by any logs. Additionally, we set the restart policy for this container to always, so that if the registry breaks for any reason, Docker will just restart the container. Then we map the container's port 5000 to our host's port 5000 as well. And finally, we use the registry Docker image on Docker Hub, version 2 of the Docker registry, as there are two versions. To specify that, we separate the name of the container image from the name of the tag with a colon. Now that we know what this command is doing, let's test that the Docker registry is actually working. To do that, I'm going to use the curl command once again against http://localhost:5000/v2. It looks like we got a "moved permanently" message when we did that. This is actually a perfect example of how --location helps here. So let's re-run this command, but with --location right before the URL. As you can see, and it might be a little hard to see, we got a blank JSON object back, which means we don't have any images in our registry just yet, but it does mean that it's working.
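The name:tag convention described above is plain string splitting on the colon. A small sketch (variable names are illustrative):

```shell
#!/bin/sh
# Split an image reference like "registry:2" into its name and tag parts;
# the colon is the separator, exactly as described above.
image_ref="registry:2"
name=${image_ref%%:*}   # strip from the first colon onward -> "registry"
tag=${image_ref##*:}    # strip up to the last colon        -> "2"
echo "image=$name tag=$tag"   # prints "image=registry tag=2"
```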
if docker ps | grep -q 'local-registry'; \
then echo "---> local-registry already created; skipping"; \
else docker run --name local-registry -d --restart=always -p 5000:5000 registry:2; \
fi
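The guard above only runs `docker run` when no `local-registry` container shows up in `docker ps`. Here's the same branching exercised with a stubbed `docker` function (the stub is hypothetical, so no Docker daemon is needed):

```shell
#!/bin/sh
# Stub docker so the guard's branching can be demonstrated without a daemon.
docker() {
  case "$1" in
    ps)  echo "abc123  registry:2  local-registry" ;;  # pretend it already exists
    run) echo "would start: $*" ;;
  esac
}

if docker ps | grep -q 'local-registry'; then
  echo "---> local-registry already created; skipping"
else
  docker run --name local-registry -d --restart=always -p 5000:5000 registry:2
fi
```

With the stub reporting an existing container, the skip branch runs; this is what makes the Make target safe to re-run.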
Kind Configuration files are used to configure the various sub-components inside of our kind clusters. The configuration file
that we're going to write is going to tell the container run time inside of our kind cluster about the new registry that we just created.
This uses the Kubernetes manifest format
.
Kubernetes manifests are files that are used to install and configure objects within a Kubernetes cluster.
Every Kubernetes manifest starts by describing the kind of object we're trying to install or configure. This is described by the kind line on line one.
kubeadm
Kind uses a tool called kubeadm
to create the cluster that we'll use here. kubeadm also configures the container runtime used within Kubernetes to create, manage, and delete containers
.
Kind uses containerd
for its container runtime.
containerdConfigPatches
on line three provides configuration
changes for containerd that kubeadm will need to be aware of when we're creating our cluster.
In our case, we want to tell containerd, "Hey containerd, we created a Docker registry called local-registry. You can also reach it at localhost on port 5000. You should really know about it." That's done on lines five and six right here.
kind_config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://local-registry:5000"]
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
kind_configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/
ConfigMap
, a native Kubernetes object
.
ConfigMaps in Kubernetes are used to provide configuration data to pods running inside the cluster
. configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
curl
You've executed the following command:
> curl --output foo.txt https://example.com/document.txt
Which of these describes what happens next most accurately?
document.txt will be written to foo.txt, but only if https://example.com/document.txt does not contain any redirects.
document.txt at example.com will get written to foo.txt because of the --output flag, but you'll need to make sure that --location (or -L) is also specified to account for any redirects.
containerd is the runtime that Kubernetes, and subsequently kind, uses to create and manage containers. runc is used explicitly to start containers.
While Kind creates Kubernetes clusters that could be used for actual Kubernetes workloads, it is a stripped-down Kubernetes distribution that is not suitable for production workloads.
Kind creates Kubernetes clusters within Docker. Kind is not suitable for creating bare-metal Kubernetes clusters.
Kind allows you to create Kubernetes clusters within Docker for quick testing and prototyping.
Create the deployment. Deployments make sure that the number of pods we want hosting our website stays up and running at all times.
Kubernetes creates resources through YAML files called Kubernetes manifests. You can think of them as a description of what you're looking to create and what it should look like.
kubectl create deployment --dry-run=client/server --image localhost:5000/<image_name> <name_of_deployment> --output=yaml
kubectl create deployment --dry-run=client --image localhost:5000/explorecalifornia.com explorecalifornia.com --output=yaml > deployment.yaml
Disable IIS in Windows (it occupies port 80)
create docker registry + create the kind cluster
make create_kind_cluster_step1
Tag the Docker image and push it to the local Docker registry:
docker push localhost:5000/explorecalifornia.com
Connect the local registry to the kind Docker network:
docker network connect kind local-registry
(make create_kind_cluster_step2)
Create a deployment
kubectl apply -f deployment.yaml
Verify the deployment; use the selector to see all the pods associated with the deployment:
$ kubectl get pods -l app=explorecalifornia.com
NAME READY STATUS RESTARTS AGE
explorecalifornia.com-64bc45ddc7-2nv5s 1/1 Running 0 2m39s
Port forwarding: this maps ports on your machine to ports inside the pods of your deployment (outside_port:inside_port)
$ kubectl port-forward deployment/explorecalifornia.com 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Service
kubectl create service clusterip --dry-run=client --tcp=80:80 explorecalifornia.com --output=yaml > service.yaml
kubectl apply -f service.yaml
Double-check:
kubectl get all -l app=explorecalifornia.com
port forward:
kubectl port-forward service/explorecalifornia-svc 8080:80
ingress
kubectl create ingress explorecalifornia.com --rule="explorecalifornia.com/=explorecalifornia-svc:80" --dry-run=client --output=yaml > ingress.yaml
install kind ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Verification of ingress controller
kubectl get all -n ingress-nginx
deploy ingress
kubectl apply -f ingress.yaml
Service in Kubernetes: service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: explorecalifornia.com
  name: explorecalifornia-svc
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: explorecalifornia.com
  type: ClusterIP
status:
  loadBalancer: {}
kubectl create service clusterip --dry-run=client --tcp=80:80 explorecalifornia.com --output=yaml > service.yaml
kubectl apply -f service.yaml
You will get this error message:
$ kubectl apply -f service.yaml
The Service "explorecalifornia.com" is invalid: metadata.name: Invalid value: "explorecalifornia.com": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation
is '[a-z]([-a-z0-9]*[a-z0-9])?')
We have to change metadata.name to explorecalifornia-svc
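You can sanity-check candidate names against the DNS-1035 regex quoted in that error before applying. A small sketch (the `valid_name` helper is hypothetical):

```shell
#!/bin/sh
# Validate names with the exact regex Kubernetes quotes in the error:
# '[a-z]([-a-z0-9]*[a-z0-9])?'
valid_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z]([-a-z0-9]*[a-z0-9])?$'
}

valid_name "explorecalifornia.com" && echo valid || echo invalid   # prints "invalid" (dots not allowed)
valid_name "explorecalifornia-svc" && echo valid || echo invalid   # prints "valid"
```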
run:
kubectl apply -f service.yaml
run to check:
$ kubectl get all -l app=explorecalifornia.com
NAME READY STATUS RESTARTS AGE
pod/explorecalifornia.com-64bc45ddc7-dj9c8 1/1 Running 0 38m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/explorecalifornia-svc ClusterIP 10.96.85.93 <none> 80/TCP 35m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/explorecalifornia.com 1/1 1 1 38m
NAME DESIRED CURRENT READY AGE
replicaset.apps/explorecalifornia.com-64bc45ddc7 1 1 1 38m
The ClusterIP service is created, and the port is mapped to it.
$ kubectl port-forward service/explorecalifornia-svc 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
In Kubernetes, an Ingress is a reverse proxy
that enables external access into other Kubernetes resources. If you've ever used an application load balancer like AWS Application Load Balancer (very appropriately named),
Azure API Gateway, or Google's Cloud Load Balancer, or if you've ever used a "bare metal" load balancer like F5's BIG-IP, then the Kubernetes Ingress will probably be familiar to you.
What we're doing with Explore California and NGINX is actually very, very similar to how Ingresses work here.
Here's a great diagram from the Kubernetes documentation that explains how Ingresses work.
Routing rules
perform an action when given a name. In the case of our Ingress, instead of using a Make target, we provide the HTTP path, like / or /shopping
. The action
is to send traffic
to a port for a specified Kubernetes service. Routing rules allow us to send requests to multiple different pods without having to create separate DNS records for them, which is really nice. Let's use a slightly tweaked version of this image to explain what I mean. So let's say that Explore California wanted to move both our website
and our booking service
, a separate application, into Kubernetes. Instead of having to maintain a separate, complicated nginx configuration
to send requests between the two services, we can create an Ingress
with two routing rules
that look something like what you see here on the right. This isn't the exact syntax we'll use in our Ingress, but I hope the concept and the simplicity is clear.
The rules in our Ingress are processed by another Kubernetes object called an Ingress controller
. The Ingress controller is essentially an instance of NGINX doing the complicated routing
for us.
A Kubernetes cluster
can have many Ingress controllers of multiple different kinds
. Each has its advantages and disadvantages.
Paths in Ingress rules don't have to be exact, either. You can specify rules with many different kinds of patterns
; even regexes, or regular expressions, work.
We'll use prefix matching rules
for our Ingress to keep things simple. Since we're already using nginx, we're going to go ahead and use the NGINX Ingress controller for Explore California. That's a really brief overview of how Ingresses work. Now that we know a little more about them, let's go ahead and create one.
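To make the idea of prefix rules concrete, here's a toy shell model of the dispatch. The paths and service names are only illustrative, and real Prefix matching in Kubernetes also splits on `/` path elements, which this glob-based sketch ignores:

```shell
#!/bin/sh
# Toy dispatcher: pick a backend service by matching the request path
# against known prefixes, falling back to the catch-all "/" rule.
route() {
  case "$1" in
    /shopping*) echo "shopping-svc" ;;           # hypothetical second service
    /*)         echo "explorecalifornia-svc" ;;  # catch-all "/" rule
  esac
}

route /shopping/cart   # prints "shopping-svc"
route /tours           # prints "explorecalifornia-svc"
```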
To get the help page:
kubectl create ingress --help
Create ingress rule:
kubectl create ingress explorecalifornia.com --rule="explorecalifornia.com/=explorecalifornia-svc:80" --dry-run=client --output=yaml > ingress.yaml
Change the pathType: Prefix
and remove status
Ingress controller in kind
https://kind.sigs.k8s.io/docs/user/ingress/#setting-up-an-ingress-controller
Make sure Zscaler is off!
$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-4x92q 0/1 Completed 0 80s
pod/ingress-nginx-admission-patch-7wkbh 0/1 Completed 0 80s
pod/ingress-nginx-controller-6bccc5966-ww7tw 1/1 Running 0 80s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.96.95.135 <none> 80:30472/TCP,443:31539/TCP 80s
service/ingress-nginx-controller-admission ClusterIP 10.96.97.117 <none> 443/TCP 80s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 80s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-6bccc5966 1 1 1 80s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 14s 80s
job.batch/ingress-nginx-admission-patch 1/1 15s 80s
Install the ingress controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: explorecalifornia.com
spec:
  rules:
  - host: explorecalifornia.com
    http:
      paths:
      - backend:
          service:
            name: explorecalifornia-svc
            port:
              number: 80
        path: /
        pathType: Prefix
When we go to host explorecalifornia.com
at /
, all the traffic goes to explorecalifornia-svc
DNS mapping
$ kubectl apply -f ingress.yaml
ingress.networking.k8s.io/explorecalifornia.com created
visit: explorecalifornia.com
etc/hosts
Why must one modify /etc/hosts to test an Ingress locally when using the NGINX ingress controller?
The NGINX ingress controller matches on the "host" provided within the Ingress. Consequently, connecting from "localhost" would not be accepted by the Ingress.
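Concretely, the entry just maps the Ingress host to the loopback address. A sketch that writes to a temp file instead of the real /etc/hosts (which would need sudo):

```shell
#!/bin/sh
# The line you would add to /etc/hosts so the Ingress "host" rule matches.
hosts_file=$(mktemp)
echo "127.0.0.1 explorecalifornia.com" >> "$hosts_file"

# Against the real file this would be:
#   echo "127.0.0.1 explorecalifornia.com" | sudo tee -a /etc/hosts
grep "explorecalifornia.com" "$hosts_file"   # prints the entry we just added
```

After that, a browser request to explorecalifornia.com resolves to 127.0.0.1, where the kind port mapping hands it to the NGINX ingress controller.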
Example:
rules:
- host: explorecalifornia.com
  http:
    paths:
    - backend:
        service:
          name: explorecaliforniacheckout.com
          port:
            number: 5000
      path: '/checkout'
      pathType: Prefix
    - backend:
        service:
          name: explorecaliforniaanalytics.com
          port:
            number: 5000
      path: '/clicktrack'
      pathType: Prefix
Service names cannot look like DNS records. Rename them so that they do not look like DNS records:
explorecaliforniacheckout.com
> explorecaliforniacheckout-svc
You're creating a website. Its Pod will be created through Deployment "my_deployment". What is the easiest way to test that it is working locally inside of Kubernetes?
Use kubectl port-forward deployment/my_deployment 8080:80. Open a web browser. Visit localhost:8080
.
According to the Kubernetes API Reference Docs, Deployment inside of apps/v1 needs to look like this:
apiVersion: [VERSION]
kind: Deployment
metadata: [ObjectMeta object]
spec: [DeploymentSpec]
Furthermore, the ObjectMeta object does not have a key for "spec". Finally, we can see that "spec" is actually its own key.
kubectl apply --dry-run=client
will generate a manifest YAML or JSON based on the API information stored on the client. While --dry-run=server
is more accurate since it takes your Kubernetes server into account, it is not desirable in an environment with an unreliable network.
Windows:
choco install kubernetes-helm
The install of kubernetes-helm was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\kubernetes-helm\tools'
https://helm.sh/docs/chart_template_guide/function_list/
{{ .Values.foo | lower}}
{{ .Values.foo | upper }}
{{ .Values.foo | title }}
...
MAJOR.MINOR.PATCH, e.g. 1.0.0
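The version string splits on dots; a shell sketch (variable names are illustrative, e.g. for bumping the chart version in Chart.yaml):

```shell
#!/bin/sh
# Split a semantic version (MAJOR.MINOR.PATCH) into its components.
version="1.0.0"
major=${version%%.*}                      # before the first dot  -> 1
patch=${version##*.}                      # after the last dot    -> 0
minor=${version#*.}; minor=${minor%%.*}   # the middle component  -> 0
echo "major=$major minor=$minor patch=$patch"   # prints "major=1 minor=0 patch=0"
```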
kind
and docker registry
create_kind_cluster_step1
docker push localhost:5000/explorecalifornia.com
create_kind_cluster_step2
# connect kind with docker registry
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
WAIT FOR 5 MINS
kubectl apply -f ingress.yaml
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
explorecalifornia.com-control-plane Ready control-plane 5h13m v1.25.3
Make sure the ingress controller container is up and running.
$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-xx5z5 0/1 Completed 0 57s
pod/ingress-nginx-admission-patch-dqvb6 0/1 Completed 0 57s
pod/ingress-nginx-controller-6bccc5966-vvs4p 1/1 Running 0 57s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.96.125.239
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 58s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-6bccc5966 1 1 1 58s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 14s 58s
job.batch/ingress-nginx-admission-patch 1/1 15s 58s
## Create Helm Template:
1. Create directory `chart` in the project dir
2. Create `Chart.yaml` file with Capital `C`
3. Create `values.yaml` file with lowercase `v`
4. Validate the helm file
`helm show all ./chart`
5. Create `template.yaml`
6. Move deployment, service and ingress.yamls to `chart/templates`
`mv {deployment,service,ingress}.yaml ./chart/templates`
7. Render template:
- Run to validate that your values were properly accepted; `helm template` is used to render the templates inside of a chart.
`helm template ./chart`
![Image](https://user-images.githubusercontent.com/44691256/209470551-f30c9495-55b7-4433-bd29-d0889d579db6.png)
8. Deploy the Helm chart
- **Helm upgrade is preferred over helm install when we want to install an application over itself through multiple revisions.**
- Helm install only allows you to install an application once. To upgrade it, you would need to run "helm uninstall". Helm upgrade fixes this problem.
`helm upgrade --atomic --install explore-california-website ./chart`
`helm uninstall explore-california-website`
`helm install explore-california-website ./chart`
$ helm status explore-california-website
NAME: explore-california-website
LAST DEPLOYED: Mon Dec 26 23:13:00 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Once the chart is deployed:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/explore-california-website-8587cd96b5-6t2nv 1/1 Running 0 4h29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/explorecalifornia-svc ClusterIP 10.96.14.127
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/explore-california-website 1/1 1 1 4h29m
NAME DESIRED CURRENT READY AGE
replicaset.apps/explore-california-website-8587cd96b5 1 1 1 4h29m
kubectl delete all -l app=explorecalifornia.com
old ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.appName }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: {{ .Values.serviceAddress }}
    http:
      paths:
      - backend:
          service:
            name: {{ .Values.serviceName }}
            port:
              number: {{ .Values.sourcePort }}
        path: /
        pathType: Prefix
Kubernetes uses a file called a **kubeconfig**
to know about the clusters that it can access. By default, kubeconfigs are located in a file called config
, which is located inside of the .kube
directory, inside of your home directory.
vim .kube/config
clusters
The first important key is the clusters
key, which you can see on line 2. This is information about our Kubernetes clusters, as you would imagine. Since we've only been using the local cluster that kind created for us, we only have one cluster in here. However, we can have as many clusters as we'd like to access in here.
contexts
The next thing to note are contexts, which start on line seven. kubeconfig contexts allow you to run commands against different clusters, and as different users, using an "alias"
. You can do this by providing the --context
switch for just about any kubectl
command. This is a lot easier than having to switch between clusters to run simple commands. We'll see an example of this when we deploy things into our EKS cluster.
current-context
You can also see a key here called "current-context" on line 12. That's the default context
that all of your commands will run in, and you can change this with the kubectl config use-context
command.
users
Finally, the kubeconfig stores users, which you can see from line 15 down. Users define the list of usernames and their authentication data. Kubernetes supports multiple different ways of logging into clusters. You can log in with passwords, OAuth, JSON Web Tokens (JWTs), certificates, and more. Most clusters are set up so that users can log in with certificates. Here, client-certificate-data, on line 18, provides a base64-encoded version of the certificate to use for HTTPS connections made with the cluster
, and client-key-data
on line 19 provides a base64-encoded version of a private key to present to the Kubernetes server. An important thing to note is that these aren't "users" inside of Kubernetes itself; you can think of users
in a kubeconfig
as shortcuts for tokens, private keys, et cetera. Kubernetes actually doesn't store regular users. It assumes that your users are stored somewhere else, and that you'll be using a third-party service
to authenticate into Kubernetes.
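Those client-certificate-data and client-key-data fields are just base64 of the PEM files. A sketch with placeholder content (not a real certificate):

```shell
#!/bin/sh
# Round-trip a (fake) PEM block through base64, the way kubeconfig stores it.
pem="-----BEGIN CERTIFICATE-----
placeholder
-----END CERTIFICATE-----"

encoded=$(printf '%s' "$pem" | base64 | tr -d '\n')   # the kubeconfig field value
decoded=$(printf '%s' "$encoded" | base64 -d)         # recovers the original PEM
[ "$decoded" = "$pem" ] && echo "round-trip ok"       # prints "round-trip ok"
```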
aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 --profile solomemaks
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SESSION_TOKEN=
aws eks update-kubeconfig --name explore-california-cluster
kubectl config set-context kind-explorecalifornia.com
kube-system
(base) PS C:\WINDOWS\system32> choco install -y eksctl
Chocolatey v1.2.0
Installing the following packages:
eksctl
By installing, you accept licenses for the packages.
Progress: Downloading eksctl 0.124.0... 100%
eksctl v0.124.0 [Approved]
eksctl package files install completed. Performing other installation steps.
eksctl is going to be installed in 'C:\ProgramData\chocolatey\lib\eksctl\tools'
Downloading eksctl 64 bit from 'https://github.com/weaveworks/eksctl/releases/download/v0.124.0/eksctl_Windows_amd64.zip'
Progress: 100% - Completed download of C:\Users\Ke.Shi\AppData\Local\Temp\chocolatey\eksctl\0.124.0\eksctl_Windows_amd64.zip (30.21 MB).
Download of eksctl_Windows_amd64.zip (30.21 MB) completed. Hashes match.
Extracting C:\Users\Ke.Shi\AppData\Local\Temp\chocolatey\eksctl\0.124.0\eksctl_Windows_amd64.zip to C:\ProgramData\chocolatey\lib\eksctl\tools...
ShimGen has successfully created a shim for eksctl.exe
The install of eksctl was successful. Software installed to 'C:\ProgramData\chocolatey\lib\eksctl\tools'
Chocolatey installed 1/1 packages. See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
(base) PS C:\WINDOWS\system32> eksctl version
0.124.0
aws ecr describe-repositories
> get the repositoryUri
{
  "repositoryArn": "arn:aws:ecr:us-west-2:197605493344:repository/explore-california",
  "registryId": "197605493344",
  "repositoryName": "explore-california",
  "repositoryUri": "197605493344.dkr.ecr.us-west-2.amazonaws.com/explore-california",
  "createdAt": "2022-12-28T09:27:57+10:00",
  "imageTagMutability": "MUTABLE",
  "imageScanningConfiguration": {
    "scanOnPush": false
  },
  "encryptionConfiguration": {
    "encryptionType": "AES256"
  }
}
registry="197605493344.dkr.ecr.us-west-2.amazonaws.com/explore-california"
Create an IAM role for the EKS cluster (IAM > use case: EKS Cluster)
Create a dedicated VPC for the EKS cluster using CloudFormation: https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html
Create the EKS cluster. Cluster endpoint access:
API server endpoint, OpenID Connect provider URL, certificate authority
Install & setup IAM authenticator and kubectl utility
aws iam list-users
aws sts get-caller-identity
Install aws-iam-authenticator
Install kubectl
aws eks --region us-west-2 update-kubeconfig --name <cluster name>
This will add the cluster config to the kubeconfig (~/.kube/config)
export KUBECONFIG=~/.kube/config
kubectl get svc
kubectl get nodes
Create an IAM role for the EKS nodes (IAM > service: EC2) and attach policies:
Create work nodes
EKS > Compute
Create node group
kubectl get node --watch
Deploying Demo Application
aws get deploy
Github: learnitguide/kubernetes-knote.git
Two-tier app: frontend (knote) and MongoDB
Use LoadBalancer type for the knote Service
kubectl apply -f mongo.yaml
kubectl apply -f knote.yaml
kubectl get svc
kubectl get pods -o wide
kubectl get svc
nslookup
state prefix:
solomem@AUBNEWL02519:~/devops/eks/07_03_after$ 2>/dev/null aws s3 ls "$TERRAFORM_S3_BUCKET"
PRE state/
solomem@AUBNEWL02519:~/devops/eks/07_03_after$ docker run --rm -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" -e "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" -e "AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN:-""}" -e "AWS_REGION=$AWS_REGION" -e "TF_IN_AUTOMATION=true" -v "$HOME/.kube:/root/.kube" -v "$PWD:/work" -w /work "$TERRAFORM_DOCKER_IMAGE" init -backend-config="bucket=$TERRAFORM_S3_BUCKET" -backend-config="key=$TERRAFORM_S3_KEY"
Terraform initialized in an empty directory!
aws eks update-kubeconfig --name explore-california-cluster
This will ensure the apiVersion in the kubeconfig matches the awscli version.
docker run --rm -it --entrypoint /bin/bash $TERRAFORM_DOCKER_IMAGE
docker run --rm -it -e "AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" \
-e "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" \
-e "AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN:-""}" \
-e "AWS_REGION=$AWS_REGION" \
-e "TF_IN_AUTOMATION=true" \
-v "$HOME/.kube:/root/.kube" \
-v "$PWD:/work" -w /work --entrypoint /bin/bash "$TERRAFORM_DOCKER_IMAGE"
terraform init \
-backend-config="bucket=$TERRAFORM_S3_BUCKET" \
-backend-config="key=$TERRAFORM_S3_KEY"
Pre-requisite: AWS Account and CLI setup
admin-access-explore-aks
arn:aws:iam::197605493344:role/admin-access-explore-aks
aws sts assume-role --role-arn arn:aws:iam::197605493344:role/admin-access-explore-aks --role-session-name MySession --external-id explore1770 --profile solomemaks
Get the AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)