Final Task #
You have started a new role as a Junior Cloud Engineer for Jooli, Inc. You are expected to help manage the infrastructure at Jooli. Common tasks include provisioning resources for projects.
You are expected to have the skills and knowledge for these tasks, so step-by-step guides are not provided.
Some Jooli, Inc. standards you should follow:
- Create all resources in the default region or zone, unless otherwise directed.
- Naming normally uses the format team-resource; for example, an instance could be named nucleus-webserver1.
- Allocate cost-effective resource sizes. Projects are monitored, and excessive resource use will result in the containing project's termination (and possibly yours), so plan carefully. This is the guidance the monitoring team is willing to share: unless directed, use f1-micro for small Linux VMs, and use n1-standard-1 for Windows or other applications, such as Kubernetes nodes.
gcloud config set project VALUE
gcloud config set compute/zone us-east1-b
gcloud config set compute/region us-east1
You will use this instance to perform maintenance for the project.
Requirements:
- Name the instance nucleus-jumphost.
- Use an f1-micro machine type.
- Use the default image type (Debian Linux).
gcloud compute instances create nucleus-webserver1 --machine-type f1-micro
gcloud compute instances delete nucleus-jumphost
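For the jumphost requirement itself, a command along these lines should satisfy the stated constraints (a sketch, assuming the default zone set above; gcloud picks the default Debian image automatically when no image is specified):

# Create the f1-micro jumphost with the default Debian image
gcloud compute instances create nucleus-jumphost \
  --machine-type f1-micro \
  --zone us-east1-b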
The team is building an application that will use a service running on Kubernetes. You need to:
gcloud container clusters create nucleus-jumphost
gcloud container clusters get-credentials nucleus-jumphost
kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0
kubectl expose deployment hello-app --type=LoadBalancer --port 8080
gcloud container clusters delete nucleus-jumphost
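To confirm the service is reachable before tearing the cluster down, one option is to watch for the external IP that the LoadBalancer Service receives and curl it on the exposed port (a sketch; the IP is whatever GKE assigns):

# Watch for the EXTERNAL-IP of the hello-app Service to be assigned
kubectl get service hello-app
# Then verify the app responds on port 8080
curl http://[EXTERNAL-IP]:8080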
You will serve the site via nginx web servers, but you want to ensure that the environment is fault-tolerant. Create an HTTP load balancer with a managed instance group of 2 nginx web servers. Use the following code to configure the web servers; the team will replace this with their own configuration later.
cat << EOF > startup.sh
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
You need to:
Target Pool
External TCP/UDP Network Load Balancing can use either a backend service or a target pool to define the group of backend instances that receive incoming traffic. This page describes configuration options for target pool backends for Network Load Balancing. When a network load balancer's forwarding rule directs traffic to a target pool, the load balancer chooses an instance from the target pool based on a hash of the source IP address, the source port, the destination IP address, and the destination port.
If you intend your target pool to contain a single virtual machine (VM), consider using the protocol forwarding feature instead of load balancing.
cat << EOF > startup.sh
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
gcloud compute instance-templates create web-server-template \
  --metadata-from-file startup-script=startup.sh \
  --network nucleus-vpc \
  --machine-type g1-small \
  --region us-east1

gcloud compute instance-groups managed create web-server-group \
  --base-instance-name web-server \
  --size 2 \
  --template web-server-template \
  --region us-east1

gcloud compute firewall-rules create web-server-firewall \
  --allow tcp:80 \
  --network nucleus-vpc

gcloud compute http-health-checks create http-basic-check

gcloud compute instance-groups managed set-named-ports web-server-group \
  --named-ports http:80 \
  --region us-east1

gcloud compute backend-services create web-server-backend \
  --protocol HTTP \
  --http-health-checks http-basic-check \
  --global

gcloud compute backend-services add-backend web-server-backend \
  --instance-group web-server-group \
  --instance-group-region us-east1 \
  --global

gcloud compute url-maps create web-server-map \
  --default-service web-server-backend

gcloud compute target-http-proxies create http-lb-proxy \
  --url-map web-server-map

gcloud compute forwarding-rules create http-content-rule \
  --global \
  --target-http-proxy http-lb-proxy \
  --ports 80

gcloud compute forwarding-rules list
Lastly, curling the backend server instances was required to verify that those servers were running:
gcloud compute instances list
curl [IP]
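To check the load balancer itself (rather than the individual backends), one option is to curl the address attached to the forwarding rule created above; a sketch, assuming the forwarding-rule name from these notes:

# Grab the front-end IP from the global forwarding rule and hit it over HTTP
LB_IP=$(gcloud compute forwarding-rules describe http-content-rule --global --format="get(IPAddress)")
curl http://$LB_IP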
Course: Google Cloud Computing Foundations: Cloud Computing Fundamentals
Quest: Perform Foundational Infrastructure Tasks in Google Cloud
Lesson 1
Quiz
Lesson 2
Quiz
The gcloud init command is used to set up the default configuration of the Cloud SDK: the user, the default project, and the default region and zone.
Lesson 3
Quiz
Note From Lessons
A Google Cloud project is an organizing entity for your Google Cloud resources. It often contains resources and services; for example, it may hold a pool of virtual machines, a set of databases, and a network that connects them together. Projects also contain settings and permissions, which specify security rules and who has access to what resources.
Your project has a name, ID, and number. These identifiers are frequently used when interacting with Google Cloud services. You are working with one project to get experience with a specific service or feature of Google Cloud.
A Project ID is a unique identifier that is used to link Google Cloud resources and APIs to your specific project. Project IDs are unique across Google Cloud: there can be only one qwiklabs-gcp-xxx...., which makes it globally identifiable. Organizations use Google Cloud in different ways, so projects are a good method for organizing cloud computing services (by team or product, for example).
gcloud config list project : List the project ID
There are seven categories of Google Cloud services:
- Compute: A variety of machine types that support any type of workload. The different computing options let you decide how much control you want over operational details and infrastructure.
- Storage: Data storage and database options for structured or unstructured, relational or nonrelational data.
- Networking: Services that balance application traffic and provision security rules.
- Cloud Operations: A suite of cross-cloud logging, monitoring, trace, and other service reliability tools.
- Tools: Services that help developers manage deployments and application build pipelines.
- Big Data: Services that allow you to process and analyze large datasets.
- Artificial Intelligence: A suite of APIs that run specific artificial intelligence and machine learning tasks on Google Cloud.
Google Cloud also contains a collection of permissions and roles that define who has access to what resources. You can use the Cloud Identity and Access Management (Cloud IAM) service to inspect and modify these roles and permissions.
Google Cloud APIs are a key part of Google Cloud. Like services, the 200+ APIs, in areas that range from business administration to machine learning, all easily integrate with Google Cloud projects and applications. APIs are application programming interfaces that you can call directly or via our client libraries. Cloud APIs use resource-oriented design principles. When you create your own Google Cloud projects, you will have to enable certain APIs yourself. Most Cloud APIs provide you with detailed information on your project’s usage of that API, including traffic levels, error rates, and even latencies, which helps you quickly triage problems with applications that use Google services.
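Since you sometimes have to enable APIs yourself, the gcloud services commands are the usual way to do that from Cloud Shell; a sketch (the Compute Engine API here is only an example of a service name):

# List which APIs are already enabled in the current project
gcloud services list --enabled
# Enable an API, e.g. the Compute Engine API
gcloud services enable compute.googleapis.com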
Cloud Shell is an in-browser command prompt execution environment that allows you to enter commands at a terminal prompt in order to manage resources and services in your Google Cloud project. Cloud Shell lets you run all of your shell commands without leaving the Console and includes pre-installed command line tools. The gcloud command-line tool and other utilities you need are pre-installed in Cloud Shell, which allows you to get up and running quickly. The main Google Cloud toolkit is gcloud, which is used for many tasks on the platform, such as resource management and user authentication. https://cloud.google.com/sdk/gcloud/reference
gcloud -h : Find out more
gcloud config --help : Find out more about the config command (and similarly about other commands)
gcloud config list : List the configurations in your environment
gcloud config list --all : See all properties and their settings
gcloud auth list : List the credentialed accounts in your Google Cloud project
Google Compute Engine and Virtual Machines
How to connect to computing resources hosted on Google Cloud via Cloud Shell with the gcloud tool.
Certain Google Compute Engine resources live in regions or zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones. For example, the us-central1 region denotes a region in the Central United States that has zones us-central1-a, us-central1-b, us-central1-c, and us-central1-f.
Resources that live in a zone are referred to as zonal resources. Virtual machine instances and persistent disks live in a zone. If you want to attach a persistent disk to a virtual machine instance, both resources must be in the same zone. Similarly, if you want to assign a static IP address to an instance, the instance must be in the same region as the static IP address.
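As an illustration of the same-zone rule, creating a persistent disk and attaching it to an instance only works when both specify the same zone; a sketch, reusing the gcelab2 instance from these notes (the disk name is illustrative):

# Create a persistent disk in the instance's zone
gcloud compute disks create mydisk --zone us-central1-c
# Attach it to the VM; this fails if the zones differ
gcloud compute instances attach-disk gcelab2 --disk mydisk --zone us-central1-c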
To see what your default region and zone settings are, run the following commands:
gcloud config get-value compute/zone
gcloud config get-value compute/region
gcloud compute project-info describe --project : Identify your default region and zone. The default zone and region are in the metadata values. If they are missing then those defaults are not set.
Use the export command to set environment variables, for example export ZONE=<your-zone> and export PROJECT_ID=<your-project-id>.
Create a Virtual Machine
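Before creating the VM, the ZONE variable used in the command below (and the PROJECT_ID mentioned above) can be populated from the active gcloud configuration; a small sketch built on the gcloud config get-value commands shown earlier:

# Derive the environment variables from the current gcloud configuration
export ZONE=$(gcloud config get-value compute/zone)
export PROJECT_ID=$(gcloud config get-value project)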
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone $ZONE
gcloud compute instances create nucleus-jumphost --machine-type f1-micro --zone us-east1-b
This meets the challenge requirements: name the instance nucleus-jumphost, use an f1-micro machine type, and use the default image type (Debian Linux). Note: the default zone had been set to asia-east1-b, which was wrong for this task, so the zone is specified explicitly.
Command details:
- gcloud compute allows you to manage your Compute Engine resources in a format that's simpler than the Compute Engine API.
- instances create creates a new instance.
- gcelab2 is the name of the VM.
- The --machine-type flag specifies the machine type as n1-standard-2.
- The --zone flag specifies where the VM is created. If you omit the --zone flag, the gcloud tool can infer your desired zone based on your default properties.
- Other required instance settings, such as machine type and image, are set to default values if not specified in the create command.
gcloud compute instances create --help
gcloud compute instances list : List the instances
gcloud components list : List your components
sudo apt-get install google-cloud-sdk : Install an auto-complete gcloud component that makes working in the gcloud tool easier
gcloud beta interactive : Enter interactive mode with command auto-completion
gcloud compute ssh gcelab2 --zone $ZONE : Connect via SSH to the VM gcelab2 in the specified zone.
Compute Engine lets you create virtual machines that run different operating systems, including multiple flavors of Linux (Debian, Ubuntu, Suse, Red Hat, CoreOS) and Windows Server, on Google infrastructure. You can run thousands of virtual CPUs on a system that is designed to be fast and to offer strong consistency of performance.
Create virtual machine instances of various machine types using gcloud and connect an NGINX web server to your virtual machine.
gcloud compute instances create --help : To view default values
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone us-central1-c : Create a VM instance with values. The new instance has these default values: The latest Debian 10 (buster) image. The n1-standard-2 machine type. In this lab, you can select one of these other machine types: n1-highmem-4 or n1-highcpu-4. When you're working on a project outside Qwiklabs, you can also specify a custom machine type. A root persistent disk with the same name as the instance; the disk is automatically attached to the instance.
gcloud compute ssh gcelab2 --zone us-central1-c : Connect to the instance
sudo su - : Get root
apt-get update
apt-get install nginx -y
ps auwx | grep nginx : Verify the NGINX installation in the VM
gcloud compute images list : List available OS images
https://cloud.google.com/compute/docs/images#gcloud
https://cloud.google.com/compute/docs/machine-types
App Engine
App Engine allows developers to focus on doing what they do best: writing code. The App Engine standard environment is based on container instances running on Google's infrastructure. Containers are preconfigured with one of several available runtimes (Java 7, Java 8, Python 2.7, Go, and PHP). Each runtime also includes libraries that support App Engine Standard APIs. For many applications, the standard environment runtimes and libraries might be all you need.
The App Engine standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data. It includes the following features:
- Persistent storage with queries, sorting, and transactions.
- Automatic scaling and load balancing.
- Asynchronous task queues for performing work outside the scope of a request.
- Scheduled tasks for triggering events at specified times or regular intervals.
- Integration with other Google cloud services and APIs.
Applications run in a secure, sandboxed environment, allowing the App Engine standard environment to distribute requests across multiple servers and scale servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server.
Steps to create an app:
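The individual steps are not recorded in these notes; a minimal sketch of creating and deploying a sample app to the App Engine standard environment might look like the following (the region and the presence of an app.yaml in the current directory are assumptions, not from the lab):

# Create the App Engine application for the current project (one per project)
gcloud app create --region=us-central
# Deploy the app described by app.yaml in the current directory
gcloud app deploy
# Open the deployed app in the browser
gcloud app browse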
Cloud Functions
Cloud Functions removes the work of managing servers, configuring software, updating frameworks, and patching operating systems. The software and infrastructure are fully managed by Google so that you just add code. Furthermore, provisioning of resources happens automatically in response to events. This means that a function can scale from a few invocations a day to many millions of invocations without any work from you.
Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your Cloud Function is triggered when an event being watched is fired. Your code executes in a fully managed environment. There is no need to provision any infrastructure or worry about managing any servers.
Cloud Functions provides a connective layer of logic that lets you write code to connect and extend cloud services. Listen and respond to a file upload to Cloud Storage, a log change, or an incoming message on a Cloud Pub/Sub topic. Cloud Functions augments existing cloud services and allows you to address an increasing number of use cases with arbitrary programming logic. Cloud Functions have access to the Google Service Account credential and are thus seamlessly authenticated with the majority of Google Cloud services such as Datastore, Cloud Spanner, Cloud Translation API, Cloud Vision API, as well as many others.
Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtime as well. You can take your Cloud Function and run it in any standard Node.js runtime which makes both portability and local testing a breeze.
Cloud events are things that happen in your cloud environment. These might be things like changes to data in a database, files added to a storage system, or a new virtual machine instance being created.
Events occur whether or not you choose to respond to them. You create a response to an event with a trigger. A trigger is a declaration that you are interested in a certain event or set of events. Binding a function to a trigger allows you to capture and act on events. For more information on creating triggers and associating them with your functions, see the Cloud Functions documentation.
Asynchronous workloads, for example lightweight ETL or cloud automations, like triggering application builds, no longer need their own server and a developer to wire it up. You simply deploy a Cloud Function bound to the event you want and you're done. The fine-grained, on-demand nature of Cloud Functions also makes it a perfect candidate for lightweight APIs and webhooks. Because there is automatic provisioning of HTTP endpoints when you deploy an HTTP Function, there is no complicated configuration required as there is with some other services.
https://google.qwiklabs.com/course_sessions/116499/labs/52680 https://cloud.google.com/functions/docs/concepts
To create a Node.js background Cloud Function: a background Cloud Function is triggered by Pub/Sub. The function is exported by index.js and executed when the trigger topic receives a message.
When deploying a new function, you must specify --trigger-topic, --trigger-bucket, or --trigger-http. When deploying an update to an existing function, the function keeps the existing trigger unless otherwise specified. For now, we'll set the --trigger-topic as hello_world.
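A sketch of what that deployment might look like from the directory containing index.js; the function name helloWorld, the staging bucket, and the Node.js runtime version are assumptions, not from the lab text:

# Create the Pub/Sub topic the function will listen on
gcloud pubsub topics create hello_world
# Deploy the background function exported from index.js
gcloud functions deploy helloWorld \
  --runtime nodejs10 \
  --trigger-topic hello_world \
  --stage-bucket [BUCKET_NAME]
# Publish a message to the topic to trigger the function
gcloud pubsub topics publish hello_world --message "Hello World"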
Google Kubernetes Engine (GKE)
GKE provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The Kubernetes Engine environment consists of multiple machines (specifically Compute Engine instances) grouped to form a container cluster. In this lab, you get hands-on practice with container creation and application deployment with GKE.
GKE clusters are powered by the Kubernetes open source cluster management system. Kubernetes draws on the same design principles that run popular Google services and provides the same benefits: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more. Kubernetes provides the mechanisms through which you interact with your container cluster. You use Kubernetes commands and resources to deploy and manage your applications, perform administrative tasks, set policies, and monitor the health of your deployed workloads.
When you run a GKE cluster, you also gain the benefit of advanced cluster management features that Google Cloud provides. These include:
- Load balancing for Compute Engine instances
- Node pools to designate subsets of nodes within a cluster for additional flexibility
- Automatic scaling of your cluster's node instance count
- Automatic upgrades for your cluster's node software
- Node auto-repair to maintain node health and availability
- Logging and monitoring with Cloud Monitoring for visibility into your cluster
Deploy a containerized application with GKE
A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. GKE uses Kubernetes objects to create and manage your cluster's resources. Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
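Putting those objects together, a minimal sketch of standing up a cluster, creating a Deployment, and exposing it through a LoadBalancer Service; the cluster name, zone, and sample image follow the hello-app example used elsewhere in these notes rather than any specific lab instruction:

# Create a small cluster and fetch credentials for kubectl
gcloud container clusters create my-cluster --num-nodes=1 --zone us-central1-a
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Deploy a stateless application as a Deployment object
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
# Expose it to the internet with a LoadBalancer Service
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service hello-server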
Network and HTTP Load Balancers
Set the default region and zone for all resources
Create three Compute Engine VM instances with the same tag in the same zone, install Apache on them, then add a firewall rule that allows HTTP traffic to reach the instances.
The command to create the VMs is below. Change the value www1 to www2 and www3 for the other two.

gcloud compute instances create www1 \
  --image-family debian-9 \
  --image-project debian-cloud \
  --zone us-central1-a \
  --tags network-lb-tag \
  --metadata startup-script="#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo service apache2 restart
    echo 'Page served from www1' | tee /var/www/html/index.html"

Create an L4 network load balancer that points to the web servers
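The firewall rule mentioned above (allowing HTTP traffic to reach the tagged Apache instances) is not shown in these notes; a sketch, with an illustrative rule name:

# Allow external HTTP traffic to reach instances tagged network-lb-tag
gcloud compute firewall-rules create www-firewall-network-lb \
  --target-tags network-lb-tag \
  --allow tcp:80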
Create an HTTP load balancer
HTTP(S) Load Balancing is implemented on Google Front End (GFE). GFEs are distributed globally and operate together using Google's global network and control plane. You can configure URL rules to route some URLs to one set of instances and route other URLs to other instances. Requests are always routed to the instance group that is closest to the user, if that group has enough capacity and is appropriate for the request. If the closest group does not have enough capacity, the request is sent to the closest group that does have capacity.
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides VMs running the backend servers of an external HTTP load balancer. For this lab, backends serve their own hostnames.
gcloud compute instance-templates create lb-backend-template \
  --region=us-east1 \
  --network=default \
  --subnet=default \
  --tags=allow-health-check \
  --image-family=debian-9 \
  --image-project=debian-cloud \
  --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install -y nginx
    service nginx start
    sed -i -- "s/nginx/Google Cloud Platform - $HOSTNAME/" /var/www/html/index.nginx-debian.html'
Create a managed instance group based on the template:
gcloud compute instance-groups managed create lb-backend-group \
  --template=lb-backend-template \
  --size=2 \
  --zone=us-east1-b
Create an ingress firewall rule that allows traffic from the Google Cloud health checking systems:
gcloud compute firewall-rules create fw-allow-health-check \
  --network=default \
  --action=allow \
  --direction=ingress \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=allow-health-check \
  --rules=tcp:80
Set up a global static external IP address that your customers use to reach your load balancer:
gcloud compute addresses create lb-ipv4-1 --ip-version=IPV4 --global

Check the IP address:
gcloud compute addresses describe lb-ipv4-1 --format="get(address)" --global

Create a health check for the load balancer:
gcloud compute health-checks create http http-basic-check --port 80

Create a backend service:
gcloud compute backend-services create web-backend-service \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=http-basic-check \
  --global

Add your instance group as the backend to the backend service:
gcloud compute backend-services add-backend web-backend-service \
  --instance-group=lb-backend-group \
  --instance-group-zone=us-east1-b \
  --global

Create a URL map to route the incoming requests to the default backend service:
gcloud compute url-maps create web-map-http --default-service web-backend-service

Create a target HTTP proxy to route requests to your URL map:
gcloud compute target-http-proxies create http-lb-proxy \
  --url-map web-map-http

Create a global forwarding rule to route incoming requests to the proxy:
gcloud compute forwarding-rules create http-content-rule \
  --address=lb-ipv4-1 \
  --global \
  --target-http-proxy=http-lb-proxy \
  --ports=80
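Once the forwarding rule is in place, one way to verify the setup end to end is to fetch the reserved address and request the page over HTTP; a sketch built on the addresses describe command above (it can take a few minutes before the load balancer starts serving):

# Retrieve the static IP reserved for the load balancer and request the page
LB_IP=$(gcloud compute addresses describe lb-ipv4-1 --format="get(address)" --global)
curl http://$LB_IP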
https://cloud.google.com/load-balancing/docs/load-balancing-overview#a_closer_look_at_cloud_load_balancers
gcloud compute addresses create network-lb-ip-1 \
  --region us-east1

gcloud compute target-pools create www-pool \
  --region us-east1 \
  --http-health-check basic-check
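These two commands appear to be the start of the L4 network load balancer setup referenced earlier; the remaining steps are not in these notes. A sketch of how they might continue, noting that a target pool can only contain instances in its own region (the www instances above were created in us-central1-a, so either the pool's region or the instances' zone would need to be aligned; the sketch assumes everything lives in us-east1, and the forwarding-rule name is illustrative):

# The health check referenced by the target pool
# (this would need to exist before the target-pools create command above)
gcloud compute http-health-checks create basic-check
# Put the Apache web servers behind the target pool
gcloud compute target-pools add-instances www-pool \
  --instances www1,www2,www3 \
  --instances-zone us-east1-b
# Forward traffic on port 80 from the reserved address to the pool
gcloud compute forwarding-rules create www-rule \
  --region us-east1 \
  --ports 80 \
  --address network-lb-ip-1 \
  --target-pool www-pool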