Jymit / CheatSheet

notes

K8s #2

Closed Jymit closed 5 years ago

Jymit commented 6 years ago

index.html

Hello world!
Jymit commented 6 years ago

my-app.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: my-app-exposed
  template:
    metadata:
      labels:
        run: my-app-exposed
    spec:
      containers:
      - image: localhost:5000/my-app:0.1.0
        name: my-app
        ports:
        - containerPort: 80
          protocol: TCP

---

# APP SERVICE

apiVersion: v1
kind: Service
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-app-exposed
  type: NodePort
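
A minimal sketch of deploying this manifest once the image exists in the local registry (the build/tag/push steps are covered in the comments below); `minikube service my-app --url` prints the NodePort URL that maps to container port 80:

$ kubectl create -f my-app.yml
$ kubectl get service my-app
$ minikube service my-app --url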
Jymit commented 6 years ago

Dockerfile

FROM httpd:2.4-alpine

COPY ./index.html /usr/local/apache2/htdocs/
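
A quick local smoke test of this image before involving Kubernetes (a sketch; the tag and host port are arbitrary choices):

$ docker build -t my-app:0.1.0 .
$ docker run --rm -d -p 8080:80 --name my-app-test my-app:0.1.0
$ curl http://localhost:8080/index.html
Hello world!
$ docker stop my-app-test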
Jymit commented 6 years ago

Kubernetes (K8s) - Minikube

At the time of writing: Elasticsearch 6.3.1, macOS 10.13.5.

Minikube supports standard Kubernetes features such as DNS, NodePorts, ConfigMaps and Secrets, Dashboards, and Ingress.

Minikube requires that VT-x/AMD-v virtualization is enabled in the BIOS. To check that this is enabled on macOS, run:

$ sysctl -a | grep machdep.cpu.features | grep VMX --color
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C

Prerequisites.

$ docker-compose --version
docker-compose version 1.21.1, build 5a3f1a3

$ docker-machine --version
docker-machine version 0.14.0, build 89b8332

$ minikube version
minikube version: v0.28.0

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 153.08 MB / 153.08 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
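
Optionally, sanity-check that the VM and cluster components are up before continuing (output shape varies by Minikube/kubectl version):

$ minikube status
$ kubectl cluster-info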

Have a look inside the ~/.minikube directory.

$ ll ~/.minikube
total 104
drwxr-xr-x   23 khondhu  staff   736 12 Jul 15:36 .
drwxr-xr-x+  66 khondhu  staff  2112 12 Jul 15:44 ..
drwxr-xr-x    2 khondhu  staff    64 12 Jul 15:34 addons
-rw-r--r--    1 khondhu  staff  1298 12 Jul 15:36 apiserver.crt
-rw-------    1 khondhu  staff  1675 12 Jul 15:36 apiserver.key
-rw-r--r--    1 khondhu  staff  1066 12 Jul 15:36 ca.crt
-rw-------    1 khondhu  staff  1679 12 Jul 15:36 ca.key
-r----x--x    1 khondhu  staff  1038 12 Jul 15:35 ca.pem
drwxr-xr-x    5 khondhu  staff   160 12 Jul 15:35 cache
-r----x--x    1 khondhu  staff  1078 12 Jul 15:35 cert.pem
drwxr-xr-x    6 khondhu  staff   192 12 Jul 15:34 certs
-rw-r--r--    1 khondhu  staff  1103 12 Jul 15:36 client.crt
-rw-------    1 khondhu  staff  1679 12 Jul 15:36 client.key
drwxr-xr-x    2 khondhu  staff    64 12 Jul 15:34 config
drwxr-xr-x    2 khondhu  staff    64 12 Jul 15:34 files
-r----x--x    1 khondhu  staff  1675 12 Jul 15:35 key.pem
drwxr-xr-x    2 khondhu  staff    64 12 Jul 15:34 logs
drwxr-xr-x    4 khondhu  staff   128 12 Jul 15:54 machines
drwx------    3 khondhu  staff    96 12 Jul 15:35 profiles
-rw-r--r--    1 khondhu  staff  1074 12 Jul 15:36 proxy-client-ca.crt
-rw-------    1 khondhu  staff  1675 12 Jul 15:36 proxy-client-ca.key
-rw-r--r--    1 khondhu  staff  1103 12 Jul 15:36 proxy-client.crt
-rw-------    1 khondhu  staff  1679 12 Jul 15:36 proxy-client.key

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    1m        v1.10.0


`$ eval $(minikube docker-env)`
Add this line to your .bash_profile if you want to use Minikube's Docker daemon by default (so you do not have to set it in every new terminal).
You can revert to the host Docker daemon by running:
`$ eval $(docker-machine env -u)`
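
For context, `minikube docker-env` just prints the environment variables that point your Docker client at the VM's daemon; the values below are illustrative, not copied from this machine:

$ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<you>/.minikube/certs"
export DOCKER_API_VERSION="1.35"
# Run this command to configure your shell:
# eval $(minikube docker-env)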

$ eval $(minikube docker-env)

$ docker ps -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
a681f2efe157  k8s.gcr.io/k8s-dns-sidecar-amd64  "/sidecar --v=2 --lo…"  2 minutes ago  Up 2 minutes  k8s_sidecar_kube-dns-86f4d74b45-bv8wd_kube-system_373424cf-85e1-11e8-9637-080027cd64ab_0
545a491f01f1  k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64  "/dnsmasq-nanny -v=2…"  2 minutes ago  Up 2 minutes  k8s_dnsmasq_kube-dns-86f4d74b45-bv8wd_kube-system_373424cf-85e1-11e8-9637-080027cd64ab_0
8630e6b560ab  gcr.io/k8s-minikube/storage-provisioner  "/storage-provisioner"  2 minutes ago  Up 2 minutes  k8s_storage-provisioner_storage-provisioner_kube-system_384abe55-85e1-11e8-9637-080027cd64ab_0
3affd4396119  k8s.gcr.io/kubernetes-dashboard-amd64  "/dashboard --insecu…"  2 minutes ago  Up 2 minutes  k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-5sjxm_kube-system_38360dd0-85e1-11e8-9637-080027cd64ab_0
bcc9c91c92f5  k8s.gcr.io/kube-proxy-amd64  "/usr/local/bin/kube…"  2 minutes ago  Up 2 minutes  k8s_kube-proxy_kube-proxy-6pt84_kube-system_37364d8c-85e1-11e8-9637-080027cd64ab_0
a401a9b9b33a  k8s.gcr.io/k8s-dns-kube-dns-amd64  "/kube-dns --domain=…"  2 minutes ago  Up 2 minutes  k8s_kubedns_kube-dns-86f4d74b45-bv8wd_kube-system_373424cf-85e1-11e8-9637-080027cd64ab_0
c648b2253c00  k8s.gcr.io/pause-amd64:3.1  "/pause"  2 minutes ago  Up 2 minutes  k8s_POD_storage-provisioner_kube-system_384abe55-85e1-11e8-9637-080027cd64ab_0
24336f98ea0b  k8s.gcr.io/pause-amd64:3.1  "/pause"  2 minutes ago  Up 2 minutes  k8s_POD_kubernetes-dashboard-5498ccf677-5sjxm_kube-system_38360dd0-85e1-11e8-9637-080027cd64ab_0
08699a6c0e94  k8s.gcr.io/pause-amd64:3.1  "/pause"  2 minutes ago  Up 2 minutes  k8s_POD_kube-proxy-6pt84_kube-system_37364d8c-85e1-11e8-9637-080027cd64ab_0
ccb12b3e06ff  k8s.gcr.io/pause-amd64:3.1  "/pause"  2 minutes ago  Up 2 minutes  k8s_POD_kube-dns-86f4d74b45-bv8wd_kube-system_373424cf-85e1-11e8-9637-080027cd64ab_0
379b2f59d509  k8s.gcr.io/etcd-amd64  "etcd --listen-clien…"  3 minutes ago  Up 3 minutes  k8s_etcd_etcd-minikube_kube-system_fbadda56e8c51c6fc78cc15a00a9ca7d_0
0ab87b233639  k8s.gcr.io/kube-apiserver-amd64  "kube-apiserver --ad…"  3 minutes ago  Up 3 minutes  k8s_kube-apiserver_kube-apiserver-minikube_kube-system_3ccdb40ea46941325fd6e036165fa2ca_0
6040e6d9d119  k8s.gcr.io/kube-scheduler-amd64  "kube-scheduler --ad…"  3 minutes ago  Up 3 minutes  k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
9bc478ce205e  k8s.gcr.io/kube-controller-manager-amd64  "kube-controller-man…"  3 minutes ago  Up 3 minutes  k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_15842c134f307063d4032384881b742b_0
c950c63eef52  k8s.gcr.io/kube-addon-manager  "/opt/kube-addons.sh"  3 minutes ago  Up 3 minutes  k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
ec256cb4c07c  k8s.gcr.io/pause-amd64:3.1  "/pause"  4 minutes ago  Up 4 minutes  k8s_POD_kube-apiserver-minikube_kube-system_3ccdb40ea46941325fd6e036165fa2ca_0
8f122fbf4ad2  k8s.gcr.io/pause-amd64:3.1  "/pause"  4 minutes ago  Up 4 minutes  k8s_POD_etcd-minikube_kube-system_fbadda56e8c51c6fc78cc15a00a9ca7d_0
b051ff934ce2  k8s.gcr.io/pause-amd64:3.1  "/pause"  4 minutes ago  Up 4 minutes  k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
af42a2536fc5  k8s.gcr.io/pause-amd64:3.1  "/pause"  4 minutes ago  Up 4 minutes  k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
9db9ba0077c9  k8s.gcr.io/pause-amd64:3.1  "/pause"  4 minutes ago  Up 4 minutes  k8s_POD_kube-controller-manager-minikube_kube-system_15842c134f307063d4032384881b742b_0


FYI.
<img width="713" alt="screen shot 2018-07-12 at 18 30 06" src="https://user-images.githubusercontent.com/12527842/42649507-d1d9be9c-8601-11e8-9090-fbdbbbdd14db.png">

More on the K8s jargon [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/) as well as some images denoting architecture.

Set up a local registry so Kubernetes can pull images from it rather than from a public registry.

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
Unable to find image 'registry:2' locally
2: Pulling from library/registry
4064ffdc82fe: Pull complete
c12c92d1c5a2: Pull complete
4fbc9b6835cc: Pull complete
765973b0f65f: Pull complete
3968771a7c3a: Pull complete
Digest: sha256:51bb55f23ef7e25ac9b8313b139a8dd45baa832943c8ad8f7da2ddad6355b3c8
Status: Downloaded newer image for registry:2
80a4701faae043ff9feee10cf51e6554c080d403eceb249f0c3bbd99275c1b3b

$ docker ps -a | grep reg
CONTAINER ID   IMAGE        COMMAND                  CREATED         STATUS         PORTS                    NAMES
80a4701faae0   registry:2   "/entrypoint.sh /etc…"   6 seconds ago   Up 5 seconds   0.0.0.0:5000->5000/tcp   registry
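
As an optional sanity check, the registry exposes an HTTP API; the catalog will be empty until an image is pushed:

$ curl http://localhost:5000/v2/_catalog
{"repositories":[]}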


Create a directory to work out of.

`$ vi Dockerfile`

FROM docker.elastic.co/elasticsearch/elasticsearch:6.3.1
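
Build the image from that Dockerfile first; a sketch using the same tag that gets pushed to the registry below:

$ docker build . --tag elasticsearch631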

Create a basic deployment and service file.
`$ vi elasticsearch-srv.yml`

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: elasticsearch
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      run: elasticsearch-exposed
  template:
    metadata:
      labels:
        run: elasticsearch-exposed
    spec:
      containers:

---

apiVersion: v1
kind: Service
metadata:
  labels:
    run: elasticsearch
  name: elasticsearch
spec:
  ports:
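
The container and port entries above got cut off; a plausible completion, modelled on the my-app.yml manifest earlier and the elasticsearch631 image pushed to the local registry below (an assumption, not the original file), would be:

      containers:
      - image: localhost:5000/elasticsearch631
        name: elasticsearch
        ports:
        - containerPort: 9200
          protocol: TCP

and, for the Service:

  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    run: elasticsearch-exposed
  type: NodePort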

$ docker tag elasticsearch631 localhost:5000/elasticsearch631

$ docker push localhost:5000/elasticsearch631
The push refers to repository [localhost:5000/elasticsearch631]
95aa6851545e: Pushed
62bea52fec7a: Pushed
c509aa7073b4: Pushed
2a161ac3381b: Pushed
3cfabbe08a15: Pushed
3e419e8be9df: Pushed
8f826d39fe4c: Pushed
bcc97fbfc9e1: Pushed
latest: digest: sha256:178051b116c91ae525369f3468aec167fb2c1cd90456e86c717cb1d135b8595e size: 1997


Remove the locally pulled and cached Elasticsearch image, so that the image is pulled solely from our registry.

$ docker image remove fa7212eab151 --force

$ docker pull localhost:5000/elasticsearch631
Using default tag: latest
latest: Pulling from elasticsearch631
7dc0dca2b151: Pull complete
d781ed11f72a: Pull complete
1750e875cdfc: Pull complete
c41f251a2369: Pull complete
75f1d1b20ebc: Pull complete
7a5561323db1: Pull complete
ee76915fb2ed: Pull complete
6df425d0ed88: Pull complete
Digest: sha256:178051b116c91ae525369f3468aec167fb2c1cd90456e86c717cb1d135b8595e
Status: Downloaded newer image for localhost:5000/elasticsearch631:latest


I found this to be an issue: Elasticsearch needs `vm.max_map_count` to be at least 262144 and the Minikube VM defaults to a lower value, so modify the param from inside the Minikube virtual machine.

$ minikube ssh

$ sudo sysctl vm.max_map_count
vm.max_map_count = 65530

$ sudo sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144

$ sudo sysctl vm.max_map_count
vm.max_map_count = 262144

$ exit
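
For what it's worth, the same change can be made as a one-liner, assuming your Minikube version accepts a trailing command to `minikube ssh`; either way it does not persist across VM restarts:

$ minikube ssh "sudo sysctl -w vm.max_map_count=262144"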


Using the YAML file created above, use kubectl (kubectl controls the Kubernetes cluster manager) to create a Kubernetes deployment and service.

$ kubectl create -f elasticsearch-srv.yml
deployment.extensions/elasticsearch created
service/elasticsearch created

Verify.

$ kubectl get all
NAME                                 READY     STATUS    RESTARTS   AGE
pod/elasticsearch-54d8f995b8-hwpcd   1/1       Running   0          5s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/elasticsearch   NodePort    10.98.183.62   <none>        9200:30533/TCP   6s
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP          33m

NAME                            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elasticsearch   1         1         1            1           7s

NAME                                       DESIRED   CURRENT   READY     AGE
replicaset.apps/elasticsearch-54d8f995b8   1         1         1         6s


The K8s dashboard!

$ minikube dashboard
Opening kubernetes dashboard in default browser...

This automatically opens the K8s dashboard in your default browser.

<img width="2541" alt="screen shot 2018-07-15 at 21 01 43" src="https://user-images.githubusercontent.com/12527842/42737996-370e7766-8874-11e8-8522-4ef52261cf8b.png">

Find the Elasticsearch logs under the pod settings dropdown list.
<img width="2543" alt="screen shot 2018-07-15 at 21 02 09" src="https://user-images.githubusercontent.com/12527842/42738004-4fb82172-8874-11e8-9bd7-97573932b354.png">

For example.
<img width="2279" alt="screen shot 2018-07-15 at 21 02 19" src="https://user-images.githubusercontent.com/12527842/42738005-5df3ff2c-8874-11e8-8dcb-12a604f55800.png">

Get the Elasticsearch service URL and port number, so you can start cracking away.

$ minikube service elasticsearch --url
http://192.168.99.100:30533


<img width="914" alt="screen shot 2018-07-15 at 21 03 37" src="https://user-images.githubusercontent.com/12527842/42738300-8ca9edc2-8879-11e8-8ccc-b3f1c4e43228.png">

<img width="943" alt="screen shot 2018-07-15 at 21 03 53" src="https://user-images.githubusercontent.com/12527842/42738299-8c913354-8879-11e8-855b-6645fcfec942.png">

$ curl http://192.168.99.100:30533/_cluster/health?pretty
{
  "cluster_name" : "docker-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
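
As a quick write/read smoke test (the index name `test` is just an example; Elasticsearch 6.x requires the Content-Type header):

$ curl -X PUT "http://192.168.99.100:30533/test/_doc/1" -H 'Content-Type: application/json' -d '{"hello":"world"}'
$ curl "http://192.168.99.100:30533/test/_doc/1?pretty"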


And away we go.

Scale the deployment.
<img width="2306" alt="screen shot 2018-07-15 at 21 54 50" src="https://user-images.githubusercontent.com/12527842/42738319-dc7edd12-8879-11e8-9f13-acda29584c22.png">

<img width="559" alt="screen shot 2018-07-15 at 21 55 42" src="https://user-images.githubusercontent.com/12527842/42738321-dfafb16e-8879-11e8-9d34-add119a450f6.png">

Delete the deployment.
<img width="622" alt="screen shot 2018-07-15 at 21 56 16" src="https://user-images.githubusercontent.com/12527842/42738324-ef973430-8879-11e8-8307-28c761e5406f.png">

See the remaining services, then delete the service.
<img width="1467" alt="screen shot 2018-07-15 at 21 57 37" src="https://user-images.githubusercontent.com/12527842/42738338-3cf2ad54-887a-11e8-9f7e-d380b739c0d2.png">

The About page.
<img width="718" alt="screen shot 2018-07-15 at 21 58 55" src="https://user-images.githubusercontent.com/12527842/42738347-546c2a50-887a-11e8-9f2d-f296c8b51e23.png">

The Documentation link sends you [here](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/).

Finished. From the CLI, delete the deployment and service, then stop and delete the Minikube VM.

$ kubectl delete deploy elasticsearch
deployment.extensions "elasticsearch" deleted

$ kubectl delete service elasticsearch
service "elasticsearch" deleted

$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.

$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.


(1)
This could be a good shout for building a quick K8s deployment/service to play with the Elastic Beats Kubernetes module. See [here](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-module-kubernetes.html) for the Beats docs, pertaining to v6.3.x.

(2)
Helm.
- Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes.
- Helm has two parts: a client (helm) and a server (tiller).
- Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts.
- Helm runs on your laptop, CI/CD, or wherever you want it to run.
- To install via a package manager, macOS/Homebrew folks can run `brew install kubernetes-helm` (see the sketch after this list).
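
A minimal first-run sketch with Helm v2, assuming kubectl already points at the Minikube cluster; `helm init` installs Tiller into the cluster, and `helm version` should report both client and server versions once Tiller is up:

$ helm init
$ helm version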

An Elasticsearch Helm chart is maintained [here](https://github.com/clockworksoul/helm-elasticsearch). Note that, with Helm properly installed and configured, standing up a cluster is trivial.

$ git clone https://github.com/clockworksoul/helm-elasticsearch.git
$ helm install helm-elasticsearch


Various parameters of the cluster, including replica count and memory allocation, can be adjusted by modifying the `helm-elasticsearch/values.yaml` file. For more information about Helm, see [here](https://github.com/kubernetes/helm/blob/master/docs/index.md).
Jymit commented 6 years ago

Dockerfile

FROM httpd:2.4-alpine

COPY ./index.html /usr/local/apache2/htdocs/

app.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app-exposed
  template:
    metadata:
      labels:
        run: app-exposed
    spec:
      containers:
      - image: localhost:5000/app:0.1.0
        name: app
        ports:
        - containerPort: 80
          protocol: TCP

---

# APP SERVICE

apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: app-exposed
  type: NodePort
$ docker build . --tag app
Sending build context to Docker daemon  4.608kB
Step 1/2 : FROM httpd:2.4-alpine
2.4-alpine: Pulling from library/httpd
911c6d0c7995: Pull complete 
fb560bf76af3: Pull complete 
b077eec28e12: Pull complete 
cbb10f3684e5: Pull complete 
28b16b995d79: Pull complete 
Digest: sha256:cd4598d3397ed391b8c996d686a3f939cd8e672d31b758faa298a23aaddfa394
Status: Downloaded newer image for httpd:2.4-alpine
 ---> 73a557ff177a
Step 2/2 : COPY ./index.html /usr/local/apache2/htdocs/
 ---> 05051b93e048
Successfully built 05051b93e048
Successfully tagged app:latest
$ docker tag app localhost:5000/app:0.1.0

$ docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
localhost:5000/app                      0.1.0               05051b93e048        49 seconds ago      91.3MB
app                                     latest              05051b93e048        49 seconds ago      91.3MB
registry                                   2                   b2b03e9146e1        5 days ago          33.3MB
httpd                                      2.4-alpine          73a557ff177a        5 days ago          91.3MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0             bfc21aadc7d3        3 months ago        97MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0             af20925d51a3        3 months ago        225MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0             ad86dbed1555        3 months ago        148MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0             704ba848e69a        3 months ago        50.4MB
k8s.gcr.io/etcd-amd64                      3.1.12              52920ad46f5b        4 months ago        193MB
k8s.gcr.io/kube-addon-manager              v8.6                9c16409588eb        4 months ago        78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8              c2ce1ffb51ed        6 months ago        41MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8              6f7f2dc7fab5        6 months ago        42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8              80cc5ea4b547        6 months ago        50.5MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        6 months ago        742kB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.1              e94d2f21bc0c        6 months ago        121MB
gcr.io/k8s-minikube/storage-provisioner    v1.8.1              4689081edb10        8 months ago        80.8MB
$ kubectl create -f app.yml
deployment.extensions/app created
service/app created
$ kubectl get all
NAME                         READY     STATUS    RESTARTS   AGE
pod/app-9bdf76c69-p4tnn   1/1       Running   0          13s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        7m
service/app       NodePort    10.103.225.139   <none>        80:30654/TCP   13s

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/app   1         1         1            1           13s

NAME                               DESIRED   CURRENT   READY     AGE
replicaset.apps/app-9bdf76c69   1         1         1         13s
$ minikube service app --url
http://192.168.99.100:30654
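A quick check that the app answers on that URL (the response body is the index.html copied into the image):

$ curl http://192.168.99.100:30654/
Hello world!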
screen shot 2018-07-12 at 15 46 39
$ minikube dashboard
Opening kubernetes dashboard in default browser...

and automatically opens the browser to: http://192.168.99.100:30000/#!/overview?namespace=default

screen shot 2018-07-12 at 15 47 39