rifaterdemsahin / aif

Adaptive Intelligence Framework
5 stars 7 forks

K8 Services #364

Open rifaterdemsahin opened 5 years ago

rifaterdemsahin commented 5 years ago

What are Services in Kubernetes? How do they differ from containers in K8s? How do we expose them?


AndreV84 commented 5 years ago

What are Services in Kubernetes? "A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).

As an example, consider an image-processing backend which is running with 3 replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that or keep track of the list of backends themselves. The Service abstraction enables this decoupling." source: https://kubernetes.io/docs/concepts/services-networking/service/
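For instance, the decoupling described above could be expressed as a Service manifest that selects the backend Pods by label (a minimal sketch; the `image-processor` name and `app=image-processor` label are hypothetical, not from the thread):

```shell
# Create a Service that fronts the fungible backend replicas.
# Clients talk to the stable Service; the Pods behind it can come and go.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: image-processor
spec:
  selector:
    app: image-processor   # must match the labels on the backend Pods
  ports:
    - protocol: TCP
      port: 80             # port clients connect to on the Service
      targetPort: 8080     # port the backend containers listen on
EOF
```

The frontend then reaches the backends at the Service's stable DNS name (`image-processor`) instead of tracking individual Pod IPs.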

"Services are sets of pods with a network endpoint that can be used for discovery and load balancing. Ingresses are collections of rules for routing external HTTP(S) traffic to services" source: Google Cloud Platform

How do they differ from containers in K8s? You probably mean how they differ from Pods, in my opinion, since a container is just an entry in container storage holding an image of some software. "A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. A pod's contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific "logical host" - it contains one or more application containers which are relatively tightly coupled — in a pre-container world, they would have executed on the same physical or virtual machine." source: https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod

How do we expose them? You probably mean how to expose a Deployment, in my opinion. For example:

    kubectl expose deployment nginx --port 80 --type LoadBalancer

You can find more information at https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
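As a sketch of the exposure options (the `nginx` deployment name is hypothetical; in practice you would pick exactly one Service type per Deployment, since each `kubectl expose` call creates a Service with the same name):

```shell
# Three alternative ways to expose the same Deployment -- choose one:
kubectl expose deployment nginx --port 80 --type ClusterIP      # reachable inside the cluster only (default)
kubectl expose deployment nginx --port 80 --type NodePort       # also reachable on a port of every node
kubectl expose deployment nginx --port 80 --type LoadBalancer   # also provisions a cloud load balancer

# Inspect the resulting Service, its cluster IP, and its ports:
kubectl get service nginx
```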

AndreV84 commented 5 years ago

Below is an example of Services:

Setting up HTTP Load Balancing with Ingress

This tutorial shows how to run a web application behind an HTTP load balancer by configuring the Ingress resource.

Background

GKE offers integrated support for two types of cloud load balancing for a publicly accessible application:

You can create TCP/UDP load balancers by specifying type: LoadBalancer on a Service resource manifest. Although a TCP load balancer works for HTTP web servers, it is not designed to terminate HTTP(S) traffic, as it is not aware of individual HTTP(S) requests. GKE does not configure any health checks for TCP/UDP load balancers. See the Guestbook tutorial for an example of this type of load balancer.

You can create HTTP(S) load balancers by using an Ingress resource. HTTP(S) load balancers are designed to terminate HTTP(S) requests and can make better context-aware load balancing decisions. They offer features like customizable URL maps and TLS termination. GKE automatically configures health checks for HTTP(S) load balancers.

If you are exposing an HTTP(S) service hosted on GKE, HTTP(S) load balancing is the recommended method for load balancing.
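As a sketch, an Ingress that routes all external HTTP(S) traffic to a single Service might look like this (the `basic-ingress` and `web` names are hypothetical; `extensions/v1beta1` is the Ingress API version GKE used at the time of this thread):

```shell
# Create an Ingress; on GKE this provisions an HTTP(S) load balancer
# with health checks configured automatically.
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web   # the Service (e.g. a NodePort Service) to route to
    servicePort: 8080  # the Service port to forward requests to
EOF
```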

Note: Load balancers created by GKE are billed at the regular load balancer pricing.

Before you begin

Take the following steps to enable the Kubernetes Engine API:

1. Visit the Kubernetes Engine page in the Google Cloud Platform Console.
2. Create or select a project.
3. Wait for the API and related services to be enabled. This can take several minutes.
4. Make sure that billing is enabled for your project.


Install the following command-line tools used in this tutorial:

gcloud is used to create and delete Kubernetes Engine clusters. gcloud is included in the Google Cloud SDK.

kubectl is used to manage Kubernetes, the cluster orchestration system used by Kubernetes Engine. You can install kubectl using gcloud:

    gcloud components install kubectl

Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set the defaults:

    gcloud config set project PROJECT_ID
    gcloud config set compute/zone us-central1-b

Create a container cluster

Create a container cluster named loadbalancedcluster by running:

    gcloud container clusters create loadbalancedcluster

Note: If you are using an existing Google Kubernetes Engine cluster, or if you have created a cluster through the Google Cloud Platform Console, you need to run the following command to retrieve cluster credentials and configure the kubectl command-line tool with them:

    gcloud container clusters get-credentials loadbalancedcluster

If you have already created a cluster with the gcloud container clusters create command listed above, this step is not necessary.

Step 1: Deploy a web application

Create a Deployment using the sample web application container image that listens on an HTTP server on port 8080:

    kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080

Step 2: Expose your Deployment as a Service internally

Create a Service resource to make the web deployment reachable within your container cluster:

    kubectl expose deployment web --target-port=8080 --type=NodePort

source: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
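A few sanity checks after the expose step (this assumes the `web` names from the tutorial above; `NODE_IP` and `NODE_PORT` are placeholders you would read from the command output, not real values):

```shell
# Confirm the NodePort Service exists and note its assigned node port:
kubectl get service web

# Check that the Service has endpoints, i.e. is actually backed by Pods:
kubectl describe service web

# From a machine that can reach the nodes, hit the app directly:
curl http://NODE_IP:NODE_PORT/
```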

rifaterdemsahin commented 5 years ago

Thanks for the text, but I am much more of a visual person; please include visuals. Moving this task to Trello.

rifaterdemsahin commented 5 years ago

The link for this sample task is here: https://trello.com/c/DCHCVk4l/1-kubernetes-services

Please use Trello for the new tasks.