Jymit / CheatSheet


Google Learning #10

Closed Jymit closed 4 years ago

Jymit commented 4 years ago

osdfir.blogspot.com

Google training

Google Cloud introduces VPC Flow Logs for more network visibility.
It provides network telemetry for GCP environments, creating logs in five-second intervals.
Users can collect telemetry from a specific VPC (virtual private cloud) network, a subnet, or a specific VM instance or virtual interface.
=> GSP001 Creating a Virtual Machine
gcloud auth list
gcloud config list project
GCE > new vm > n1-standard-2
gcloud compute instances create gcelab1 --machine-type n1-standard-2 --zone us-central1-c
apt-get update
apt-get install nginx -y
ps auwx | grep nginx
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone us-central1-c
gcloud compute ssh gcelab2 --zone us-central1-c

=> GSP002 Getting Started with Cloud Shell & gcloud
Start Cloud Shell
Understanding Regions and Zones
Initializing Cloud SDK
The gcloud CLI is a part of the Google Cloud SDK. You need to download and install the SDK on your own system and initialize it (by running gcloud init) before you can use the gcloud command-line tool.
The gcloud CLI is automatically available in Cloud Shell. Since you're using Cloud Shell for this lab, you don't need to install gcloud manually.

HERE FOR REGIONS AND ZONES DOC:
https://cloud.google.com/compute/docs/regions-zones
europe-west2 is London, with zones a,b,c

Setting environment variables
export PROJECT_ID=qwiklabs-gcp-01-29c2063fd69a
export ZONE=europe-west2-a
echo $PROJECT_ID, $ZONE
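A zone name is always the region name plus a `-<letter>` suffix, so the region can be derived with bash parameter expansion. A quick local sketch (the project ID here is the lab's example value):

```shell
# Hypothetical lab values; in Cloud Shell these come from the Qwiklabs connection panel.
export PROJECT_ID=qwiklabs-gcp-01-29c2063fd69a
export ZONE=europe-west2-a
# A zone is "<region>-<letter>", so strip the last dash-segment to get the region:
export REGION=${ZONE%-*}
echo "$PROJECT_ID $ZONE $REGION"
```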
Create a virtual machine with gcloud
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone $ZONE
Using gcloud commands
gcloud -h
gcloud config --help
gcloud help config
gcloud config list
gcloud config list --all
gcloud components list
Auto-completion
gcloud components install beta
gcloud beta interactive
gcloud compute instances describe gcelab2
SSH into your vm instance
gcloud compute ssh gcelab2 --zone $ZONE
exit
Use the Home directory
cd $HOME
vi ./.bashrc
:wq
=> GSP007 Set Up Network and HTTP Load Balancers
gcloud auth list
gcloud config list project
gcloud config set compute/region europe-west2
gcloud config set compute/zone europe-west2-a
cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
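The sed line in the startup script swaps the word "nginx" on the default landing page for "Google Cloud Platform - <hostname>". A local sketch of just that substitution, using a throwaway index.html in place of /var/www/html/index.nginx-debian.html and a made-up hostname:

```shell
# HOSTNAME is a made-up instance name; index.html stands in for the nginx default page.
HOSTNAME=nginx-h2k5
echo "Welcome to nginx!" > index.html
# Same sed as the startup script (no /g, so only the first "nginx" on the line is replaced):
sed -i -- 's/nginx/Google Cloud Platform - '"$HOSTNAME"'/' index.html
cat index.html
```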

gcloud compute instance-templates create nginx-template \
         --metadata-from-file startup-script=startup.sh

gcloud compute target-pools create nginx-pool

gcloud compute instance-groups managed create nginx-group \
         --base-instance-name nginx \
         --size 2 \
         --template nginx-template \
         --target-pool nginx-pool

gcloud compute instances list
gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute instances list
Now browse to the external IPs.
Create a Network Load Balancer
Create an L3 network load balancer targeting your instance group:
gcloud compute forwarding-rules create nginx-lb \
         --region europe-west2 \
         --ports=80 \
         --target-pool nginx-pool
gcloud compute forwarding-rules list

Create an HTTP(S) Load Balancer
Health checks verify that the instance is responding to HTTP or HTTPS traffic:
gcloud compute http-health-checks create http-basic-check
Define an HTTP service and map a port name to the relevant port for the instance group. Now the load balancing service can forward traffic to the named port:
gcloud compute instance-groups managed \
       set-named-ports nginx-group \
       --named-ports http:80
Create a backend service:
gcloud compute backend-services create nginx-backend \
     --protocol HTTP --http-health-checks http-basic-check --global
Add the instance group into the backend service:
gcloud compute backend-services add-backend nginx-backend \
   --instance-group nginx-group \
   --instance-group-zone europe-west2-a \
   --global
Create a default URL map that directs all incoming requests to all your instances:
gcloud compute url-maps create web-map \
   --default-service nginx-backend
Create a target HTTP proxy to route requests to your URL map:
gcloud compute target-http-proxies create http-lb-proxy \
   --url-map web-map
Create a global forwarding rule to handle and route incoming requests. A forwarding rule sends traffic to a specific target HTTP or HTTPS proxy depending on the IP address, IP protocol, and port specified. The global forwarding rule does not support multiple ports:
gcloud compute forwarding-rules create http-content-rule \
        --global \
        --target-http-proxy http-lb-proxy \
        --ports 80
gcloud compute forwarding-rules list
Take note of the http-content-rule IP_ADDRESS for the forwarding rule, then browse to it.
=> GSP112 Cloud (web) Security Scanner: Qwik Start
git clone https://github.com/GoogleCloudPlatform/python-docs-samples
cd python-docs-samples/appengine/standard_python37/hello_world
dev_appserver.py app.yaml
shell > web preview on :8080
gcloud app deploy
[7] europe-west 
gcloud app browse
https://qwiklabs-gcp-04-fa39a019c814.ew.r.appspot.com
Navigation menu > App Engine > Security scans
Enable API > Create scan > update > save > run.
=> GSP282 A Tour of Qwiklabs and the Google Cloud Platform
Compute: houses a variety of machine types that support any type of workload. The different computing options let you decide how involved you want to be with operational details and infrastructure amongst other things.
Storage: data storage and database options for structured or unstructured, relational or non relational data.
Networking: services that balance application traffic and provision security rules amongst other things.
Stackdriver: a suite of cross-cloud logging, monitoring, trace, and other service reliability tools.
Tools: services for developers managing deployments and application build pipelines.
Big Data: services that allow you to process and analyze large datasets.
Artificial Intelligence: a suite of APIs that run specific artificial intelligence and machine learning tasks on the Google Cloud platform.
=> GSP610 Fundamentals of Stackdriver Logging
gcloud auth list
gcloud config list project
git clone https://github.com/GoogleCloudPlatform/getting-started-python
cd getting-started-python/bookshelf
virtualenv -p python3 env
source env/bin/activate
pip3 install -r requirements.txt
gcloud app deploy
[7] europe-west  (supports standard and flexible)
Firestore > Datastore page > SWITCH TO NATIVE MODE, and click SWITCH MODES to confirm.
gcloud app deploy
gcloud app browse
https://qwiklabs-gcp-02-190869f1bc82.ew.r.appspot.com/
create a new book in the app
Select Navigation menu > Logging > Logs Viewer.
GAE Application/All logs/Any log level
In the Filter by label or text search dropdown, select Convert to advanced filter.
`
resource.type="gae_app"
resource.labels.module_id="default"
add protoPayload.latency>=0.01s
`
Still in Logs Viewer, in the select service dropdown, select GAE Application > Default Service > All version_id.
In the log list, click on the status "200" (in any row that has 200) and select Show matching entries. aka protoPayload.status: 200
Create a monitoring metric based on your filter:
User-defined Metrics section > view in Metrics Explorer.
From the left menu, select Monitoring Overview. The log metrics are shown in charts.
Stacked area format looks cool!
Navigation menu > Compute Engine > VM instances.
Create vm g1-small, with HTTP traffic allowed
Viewing audit logs in Activity Viewer
Click on GCP > Activity (https://console.cloud.google.com/home/activity)
Viewing audit logs in Cloud Logs Viewer. Navigation menu > Logging > Logs Viewer
GCE VM Instance > All instance_id
`
resource.type="gce_instance"
`
logs selector dropdown, select cloudaudit.googleapis.com/activity
`
resource.type="gce_instance"
logName="projects/qwiklabs-gcp-02-190869f1bc82/logs/cloudaudit.googleapis.com%2Factivity"
`
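The `%2F` in logName is just a URL-encoded `/`: the log ID is `cloudaudit.googleapis.com/activity`. A local sketch of building the filter string with bash substitution (project ID reused from the lab above):

```shell
PROJECT=qwiklabs-gcp-02-190869f1bc82
LOG_ID="cloudaudit.googleapis.com/activity"
# Replace every "/" in the log ID with its URL-encoded form %2F:
ENCODED=${LOG_ID//\//%2F}
echo "logName=\"projects/$PROJECT/logs/$ENCODED\""
```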
remove resource.type="gce_instance"
filter to find and or add by adv Filter
protoPayload.authenticationInfo.principalEmail="student-02-2645fa6a5e2f@qwiklabs.net"
Exporting logs > Creating an export job
Remove line 2, so we just have logName..
Create Sink > name: AuditLogs, service: BigQuery, dest: Create new BigQuery dataset and then name the new BigQuery dataset "AuditLogs" and click Create.
Viewing audit logs in BigQuery. Navigation menu > BigQuery.
find the VM and its auditlogs, now back to the VM, hit edit.
Check the checkbox for Enable connecting to serial ports.
Scroll down and check the checkbox to Allow HTTPS traffic.
Return to the BigQuery console (Navigation menu > BigQuery) and expand the AuditLogs dataset. You might need to refresh the page. You should see that a new cloudaudit table has been created in the dataset. Click the new table.
Click the new cloudaudit table, then click the Query Table button.
`
SELECT
timestamp,
resource.type,
protopayload_auditlog.authenticationInfo.principalEmail,
protopayload_auditlog.methodName
FROM `qwiklabs-gcp-02-190869f1bc82.AuditLogs.cloudaudit_googleapis_com_activity_20200505`
WHERE protopayload_auditlog.authenticationInfo.principalEmail = "student-02-2645fa6a5e2f@qwiklabs.net"
LIMIT 1000
`
=> GSP483 Logging with Stackdriver on Kubernetes Engine
gcloud auth list
gcloud config list project
Open a new session in Cloud Shell. Now open the code editor by clicking the icon in the top ribbon.
gcloud config set project qwiklabs-gcp-04-e9c7a084c5db
git clone https://github.com/GoogleCloudPlatform/gke-logging-sinks-demo
cd gke-logging-sinks-demo
gcloud config set compute/region europe-west
gcloud config set compute/zone europe-west
/home/student_04_33a238f10f2c/gke-logging-sinks-demo/terraform
Update the Terraform provider version in the provider.tf script file.
From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/provider.tf.
Set the version to ~> 2.19.0. After modification your provider.tf script file should look like:
....
provider "google" {
  project = var.project
  version = "~> 2.19.0"
}
Save the file. VI this stuff :shaka:
There are three Terraform files provided with this lab example. The first one, main.tf, is the starting point for Terraform. It describes the features that will be used, the resources that will be manipulated, and the outputs that will result. The second file is provider.tf, which indicates which cloud provider and version will be the target of the Terraform commands--in this case GCP.
The final file is variables.tf, which contains a list of variables that are used as inputs into Terraform. Any variables referenced in the main.tf that do not have defaults configured in variables.tf will result in prompts to the user at runtime.
You will make one small change to main.tf. From the left-hand menu, open the file /gke-logging-sinks-demo/terraform/main.tf. Scroll down to line 106 and find the "Create the Stackdriver Export Sink for Cloud Storage GKE Notifications" section.

Change the filter's resource.type from container to k8s_container.
Do the same for the bigquery-sink below on line 116. Ensure that both export sink filters use k8s_container before moving on.
make create
Note: If you get deprecation warnings related to the zone variable, ignore them and move forward in the lab.
For me it didn't like europe-west (not a valid region/zone), so:
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Also had to move the stale variables file out of the way:
mv terraform.tfvars terraform.tfvars.bak
then
make create :)
make validate
Your output will look like:
kubeconfig entry generated....
Generating Logs
The sample application that Terraform deployed serves up a simple web page. Each time you open this application in your browser the application will publish log events to Stackdriver Logging. Refresh the page a few times to produce several log events.
Navigation menu > Network services > Load balancing > Load balancer details > Frontend
Logs in Stackdriver
Navigation menu > Logging > Kubernetes Container > stackdriver-logging > default (stackdriver-logging is the cluster and default is the namespace).
Flip the sort order to see newest at top, visit the app website a few times, and watch the logging.
Viewing Log Exports. The Terraform configuration built out two Log Export Sinks.
To view the sinks perform the following steps. Stackdriver -> Logging page.
(https://console.cloud.google.com/logs/exports)
You can edit or create new sinks.
Logs in Cloud Storage. Log events can be stored in Cloud Storage, an object storage system suitable for archiving data. Policies can be configured for Cloud Storage buckets that, for instance, allow aging data to expire and be deleted while more recent data can be stored with a variety of storage classes affecting price and availability.
The Terraform configuration created a Cloud Storage Bucket named stackdriver-gke-logging- to which logs will be exported for medium to long-term archival. In this example, the Storage Class for the bucket is defined as Nearline because the logs should be infrequently accessed in a normal production environment (this will help to manage the costs of medium-term storage). In a production scenario, this bucket may also include a lifecycle policy that moves the content to Coldline storage for cheaper long-term storage of logs.
Storage > Browser > stackdriver-gke-logging-...
Logs in BigQuery. Navigation > BigQuery > gke_logs_dataset (check under here for the logs)
Click on Query Table
have a look at the data.
make teardown

Troubleshooting for your production environment.
1. The install script fails with a Permission denied error when running Terraform. Re-authenticate with:
gcloud auth application-default login
2. Cloud Storage Bucket not populated: once the Terraform configuration is complete the Cloud Storage Bucket will be created, but it is not always populated immediately with log data from the Kubernetes Engine cluster. Give the process time; it can take 2 to 3 hours before the first entries start appearing (https://cloud.google.com/logging/docs/export/using_exported_logs).
=> GSP151 Cloud SQL for MySQL: Qwik Start
Storage > SQL > Create Instance > MySQL > Second Generation
Connect to your instance using the mysql client in the Cloud Shell
gcloud sql connect myinstance --user=root
CREATE DATABASE guestbook;
USE guestbook;
CREATE TABLE entries (guestName VARCHAR(255), content VARCHAR(255),
    entryID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(entryID));
    INSERT INTO entries (guestName, content) values ("first guest", "I got here!");
INSERT INTO entries (guestName, content) values ("second guest", "Me too!");
SELECT * FROM entries;
=> GSP076 AI Platform: Qwik Start
What you will build
The sample builds a wide and deep model for predicting income category based on United States Census Income Dataset. The two income categories (also known as labels) are:

>50K — Greater than 50,000 dollars
<=50K — Less than or equal to 50,000 dollars
Wide and deep models use deep neural nets (DNNs) to learn high-level abstractions about complex features or interactions between such features. These models then combine the outputs from the DNN with a linear regression performed on simpler features. This provides a balance between power and speed that is effective on many structured data problems.

The sample defines the model using TensorFlow 1.x's prebuilt DNNLinearCombinedClassifier class. The sample defines the data transformations particular to the census dataset, then assigns these (potentially) transformed features to either the DNN or the linear portion of the model.

gcloud auth list
gcloud config list project
Create a virtual environment:
sudo apt-get update
sudo apt-get install virtualenv -y
virtualenv -p python3 venv
source venv/bin/activate
Clone the example repo:
git clone https://github.com/GoogleCloudPlatform/cloudml-samples.git
cd cloudml-samples/census/estimator
Develop and validate your training application locally:
mkdir data
gsutil -m cp gs://cloud-samples-data/ml-engine/census/data/* data/
export TRAIN_DATA=$(pwd)/data/adult.data.csv
export EVAL_DATA=$(pwd)/data/adult.test.csv
head data/adult.data.csv
Install dependencies:
pip install -r ../requirements.txt
pip install pandas==0.24.2
python -c "import tensorflow as tf; print('TensorFlow version {} is installed.'.format(tf.__version__))"
Run a local training job:
export MODEL_DIR=output
gcloud ai-platform local train \
    --module-name trainer.task \
    --package-path trainer/ \
    --job-dir $MODEL_DIR \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100
Inspect the summary logs using Tensorboard:
tensorboard --logdir=$MODEL_DIR --port=8080
ls output/export/census/
gcloud ai-platform local predict \
--model-dir output/export/census/<timestamp> \
--json-instances ../test.json
1589229630
GCS:
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET_NAME=${PROJECT_ID}-mlengine
echo $BUCKET_NAME
REGION=us-central1
gsutil mb -l $REGION gs://$BUCKET_NAME
gsutil cp -r data gs://$BUCKET_NAME/data
TRAIN_DATA=gs://$BUCKET_NAME/data/adult.data.csv
EVAL_DATA=gs://$BUCKET_NAME/data/adult.test.csv
gsutil cp ../test.json gs://$BUCKET_NAME/data/test.json
TEST_JSON=gs://$BUCKET_NAME/data/test.json
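The bucket and object paths above all follow one naming convention: `<project-id>-mlengine` for the bucket, with data copied under `data/`. A local sketch of that convention with a hypothetical project ID:

```shell
# Hypothetical project ID; in the lab this comes from `gcloud config list project`.
PROJECT_ID=qwiklabs-gcp-00-abcdef
BUCKET_NAME=${PROJECT_ID}-mlengine
TRAIN_DATA=gs://$BUCKET_NAME/data/adult.data.csv
echo "$BUCKET_NAME"
echo "$TRAIN_DATA"
```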
single-instance trainer in the cloud:
JOB_NAME=census_single_1
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
    --job-dir $OUTPUT_PATH \
    --runtime-version 1.14 \
    --python-version 3.5 \
    --module-name trainer.task \
    --package-path trainer/ \
    --region $REGION \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100 \
    --verbosity DEBUG
gcloud ai-platform jobs stream-logs $JOB_NAME
gsutil ls -r $OUTPUT_PATH
tensorboard --logdir=$OUTPUT_PATH --port=8080
Deploy your model to support prediction:
MODEL_NAME=census
gcloud ai-platform models create $MODEL_NAME --regions=$REGION
gsutil ls -r $OUTPUT_PATH/export
MODEL_BINARIES=$OUTPUT_PATH/export/census/<timestamp>/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--runtime-version 1.14 \
--python-version 3.5
gcloud ai-platform models list
Send an online prediction request to your deployed model:
gcloud ai-platform predict \
--model $MODEL_NAME \
--version v1 \
--json-instances ../test.json
=>
=> Google Kubernetes Engine Best Practices: Security
=>

=> GSP480 How to Use a Network Policy on Google Kubernetes Engine
The Principle of Least Privilege
three workloads:
1. hello-server: a simple HTTP server with an internally-accessible endpoint.
2. hello-client-allowed: a single pod that repeatedly attempts to access hello-server. The pod is labeled such that the Network Policy will allow it to connect to hello-server.
3. hello-client-blocked: runs the same code as hello-client-allowed, but the pod is labeled such that the Network Policy will not allow it to connect to hello-server.

gcloud auth list
gcloud config list project
git clone https://github.com/GoogleCloudPlatform/gke-network-policy-demo.git
cd gke-network-policy-demo
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
make setup-project
cat terraform/terraform.tfvars
sed -i 's/~> 2.10.0/~> 2.14.0/g' terraform/provider.tf
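The same version bump can be sanity-checked locally by running the sed against a minimal provider.tf stub:

```shell
# Minimal stand-in for terraform/provider.tf:
cat > provider.tf <<'PEOF'
provider "google" {
  project = var.project
  version = "~> 2.10.0"
}
PEOF
# Same in-place substitution as above:
sed -i 's/~> 2.10.0/~> 2.14.0/g' provider.tf
grep version provider.tf
```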
make tf-apply
gcloud container clusters describe gke-demo-cluster | grep -A2 networkPolicy
gcloud compute ssh gke-demo-bastion
kubectl apply -f ./manifests/hello-app/
kubectl get pods
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Restricting access with a Network Policy:
kubectl apply -f ./manifests/network-policy.yaml
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=not-hello)
Restricting namespaces with Network Policies:
kubectl delete -f ./manifests/network-policy.yaml
kubectl create -f ./manifests/network-policy-namespaced.yaml
kubectl logs --tail 10 -f $(kubectl get pods -oname -l app=hello)
kubectl -n hello-apps apply -f ./manifests/hello-app/hello-client.yaml
validate:
kubectl logs --tail 10 -f -n hello-apps $(kubectl get pods -oname -l app=hello -n hello-apps)
teardown:
exit
make teardown

=> GSP493 Using Role-based Access Control in Kubernetes Engine
gcloud config list project
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
git clone https://github.com/GoogleCloudPlatform/gke-rbac-demo.git
cd gke-rbac-demo
Edit the file gke-rbac-demo/terraform/provider.tf using the Cloud Shell editor and update the version for Terraform to the latest stable version, 2.12.0
// Configures the default project and zone for underlying Google Cloud API calls
provider "google" {
  project = var.project
  zone    = var.zone
  version = "~> 2.12.0"
}
make create
Assigning permissions by user persona:
gcloud iam service-accounts list
gcloud compute instances list
gcloud compute ssh gke-tutorial-admin
kubectl apply -f ./manifests/rbac.yaml
gcloud compute ssh gke-tutorial-owner
kubectl create -n dev -f ./manifests/hello-server.yaml
kubectl create -n prod -f ./manifests/hello-server.yaml
kubectl create -n test -f ./manifests/hello-server.yaml
kubectl get pods -l app=hello-server --all-namespaces
gcloud compute ssh gke-tutorial-auditor
kubectl get pods -l app=hello-server --all-namespaces
kubectl get pods -l app=hello-server --namespace=dev
kubectl get pods -l app=hello-server --namespace=test
kubectl get pods -l app=hello-server --namespace=prod
kubectl create -n dev -f manifests/hello-server.yaml
kubectl delete deployment -n dev -l app=hello-server
Assigning API permissions to a cluster application:
kubectl apply -f manifests/pod-labeler.yaml
Diagnosing an RBAC misconfiguration:
kubectl get pods -l app=pod-labeler
kubectl describe pod -l app=pod-labeler | tail -n 20
kubectl logs -l app=pod-labeler
Fixing the serviceAccountName:
kubectl get pod -oyaml -l app=pod-labeler
grep serviceAccount pod-labeler-fix-1.yaml --color
admin vm:
kubectl apply -f manifests/pod-labeler-fix-1.yaml
kubectl get deployment pod-labeler -oyaml
insufficient privileges:
pod status:
kubectl get pods -l app=pod-labeler
pod logs:
kubectl logs -l app=pod-labeler
logging:
protoPayload.methodName="io.k8s.core.v1.pods.patch"
Identifying the application's role and permissions:
kubectl get rolebinding pod-labeler -oyaml
inspect role:
kubectl get role pod-labeler -oyaml
grep verbs manifests/pod-labeler-fix-2.yaml --color
kubectl apply -f manifests/pod-labeler-fix-2.yaml
see changes:
kubectl get role pod-labeler -oyaml
kubectl delete pod -l app=pod-labeler
kubectl get pods --show-labels
kubectl logs -l app=pod-labeler
teardown:
make teardown

.

Jymit commented 4 years ago
=>
=> Google Cloud Essentials
=>

=> GSP282 A Tour of Qwiklabs and the Google Cloud Platform (notes above)

=> GSP001 Creating a Virtual Machine (notes above)

=> GSP093 Compute Engine: Qwik Start - Windows
Compute Engine > VM instances.
Windows Server 2012 R2 Datacenter, all other settings at their defaults.
gcloud config list project
gcloud compute instances get-serial-port-output instance-1 --zone us-central1-a
RDP into the Windows Server

=> GSP002 Getting Started with Cloud Shell & gcloud (notes above)

=> GSP100 Kubernetes Engine: Qwik Start
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud container clusters create moshimoshi
gcloud container clusters get-credentials moshimoshi
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service
http://104.198.66.36:8080/
gcloud container clusters delete moshimoshi

=> GSP007 Set Up Network and HTTP Load Balancers (notes above)

=> GSP313 Google Cloud Essentials: Challenge Lab
Task 1: Create a project jumphost instance
Task 2: Create a Kubernetes service cluster
Task 3: Setup an HTTP load balancer
Jymit commented 4 years ago
=>
=> Security & Identity Fundamentals
=>

=> GSP064 Cloud IAM: Qwik Start
Sign in to GCP Console as the first user
Sign in to GCP Console as the second user
Create a bucket, add a txt file, rename it
gsutil mv -p gs://james4n6_memes/sample.txt gs://james4n6_memes/renamed.txt
delete user 2 role in IAM, check
gsutil ls gs://[YOUR_BUCKET_NAME]
gs://[YOUR_BUCKET_NAME]/renamed.txt
Give user 2 Storage > Storage Object Viewer role

=> GSP190 IAM Custom Roles
Activate Cloud Shell
gcloud auth list
gcloud config list project
In the Cloud IAM world, permissions are represented in the form:
<service>.<resource>.<verb>
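A quick sketch splitting a few real IAM permission names into those three parts with bash parameter expansion:

```shell
# Split real permission names into the <service>.<resource>.<verb> parts:
for p in compute.instances.get storage.buckets.list appengine.versions.create; do
  service=${p%%.*}                 # text before the first dot
  verb=${p##*.}                    # text after the last dot
  middle=${p#*.}; resource=${middle%.*}   # the segment in between
  echo "$p => service=$service resource=$resource verb=$verb"
done
```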
DEVSHELL_PROJECT_ID=qwiklabs-gcp-04-2fdbed2742ad
gcloud iam list-testable-permissions //cloudresourcemanager.googleapis.com/projects/$DEVSHELL_PROJECT_ID
gcloud iam roles describe [ROLE_NAME]
gcloud iam list-grantable-roles
gcloud iam roles create
Create your role definition YAML file
vi role-definition.yaml
title: "Role Editor"
description: "Edit access for App Versions"
stage: "ALPHA"
includedPermissions:
- appengine.versions.create
- appengine.versions.delete
gcloud iam roles create editor --project $DEVSHELL_PROJECT_ID \
--file role-definition.yaml
Create a custom role using flags
gcloud iam roles create viewer --project $DEVSHELL_PROJECT_ID \
--title "Role Viewer" --description "Custom role description." \
--permissions compute.instances.get,compute.instances.list --stage ALPHA
gcloud iam roles list --project $DEVSHELL_PROJECT_ID
gcloud iam roles list
gcloud iam roles describe editor --project $DEVSHELL_PROJECT_ID
vi new-role-definition.yaml
Update with file:
gcloud iam roles update editor --project $DEVSHELL_PROJECT_ID \
--file new-role-definition.yaml
Update with flag:
gcloud iam roles update viewer --project $DEVSHELL_PROJECT_ID \
--add-permissions storage.buckets.get,storage.buckets.list
disable:
gcloud iam roles update viewer --project $DEVSHELL_PROJECT_ID \
--stage DISABLED
delete:
gcloud iam roles delete viewer --project $DEVSHELL_PROJECT_ID
undelete:
gcloud iam roles undelete viewer --project $DEVSHELL_PROJECT_ID

=> GSP199 Service Accounts and Roles: Fundamentals
A service account is a special Google account that belongs to your application or a virtual machine (VM) instead of an individual end user. Your application uses the service account to call the Google API of a service, so that the users aren't directly involved.
gcloud auth list
gcloud config list project
DEVSHELL_PROJECT_ID=qwiklabs-gcp-00-2dd8617a8e10
User-managed service accounts:
PROJECT_NUMBER-compute@developer.gserviceaccount.com
Google-managed service accounts:
PROJECT_NUMBER@cloudservices.gserviceaccount.com
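The email formats above follow fixed patterns; building them from placeholder values (both the project ID and number below are invented for illustration):

```shell
# Placeholder values, for illustration only
PROJECT_ID="my-demo-project"
PROJECT_NUMBER="123456789012"
echo "my-sa-123@${PROJECT_ID}.iam.gserviceaccount.com"         # user-created SA
echo "${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" # Compute default SA
echo "${PROJECT_NUMBER}@cloudservices.gserviceaccount.com"     # Google-managed SA
```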
Creating and Managing Service Accounts
Creating a service account:
gcloud iam service-accounts create my-sa-123 --display-name "my service account"
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
    --member serviceAccount:my-sa-123@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/editor
IAM & Admin > IAM > Service accounts and click on + Create Service Account:
Service account name: bigquery-qwiklab
Role: BigQuery Data Viewer and BigQuery User
Compute Engine > VM Instances > Create:
Name    bigquery-instance
Region  us-central1(Iowa)
Zone    us-central1-a
Machine Type    1 vCPU (n1-standard-1)
Boot Disk   Debian GNU/Linux 9 (stretch)
Service account bigquery-qwiklab
SSH:
sudo apt-get update
sudo apt-get install virtualenv -y
virtualenv -p python3 venv
source venv/bin/activate
sudo apt-get install -y git python3-pip
pip install google-cloud-bigquery
pip install pandas

echo "
from google.auth import compute_engine
from google.cloud import bigquery

credentials = compute_engine.Credentials(
    service_account_email='YOUR_SERVICE_ACCOUNT')

query = '''
SELECT
  year,
  COUNT(1) as num_babies
FROM
  publicdata.samples.natality
WHERE
  year > 2000
GROUP BY
  year
'''

client = bigquery.Client(
    project='YOUR_PROJECT_ID',
    credentials=credentials)
print(client.query(query).to_dataframe())
" > query.py

sed -i -e "s/YOUR_PROJECT_ID/$(gcloud config get-value project)/g" query.py
cat query.py

sed -i -e "s/YOUR_SERVICE_ACCOUNT/bigquery-qwiklab@$(gcloud config get-value project).iam.gserviceaccount.com/g" query.py
cat query.py

python query.py
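The two sed calls above rewrite placeholder tokens in query.py in place (-i). The same pattern on a throwaway file, with an invented project name:

```shell
# Minimal sketch of the in-place substitution used above
echo "project = YOUR_PROJECT_ID" > /tmp/sed-demo.txt
PROJECT="my-demo-project"                      # hypothetical value
sed -i -e "s/YOUR_PROJECT_ID/$PROJECT/g" /tmp/sed-demo.txt
cat /tmp/sed-demo.txt    # project = my-demo-project
rm /tmp/sed-demo.txt
```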

=> GSP698 Securing Google Cloud with CFT Scorecard
Setting up CFT Scorecard
Running a CFT Scorecard assessment
Adding new CFT Scorecard policy
CAI stands for Cloud Asset Inventory
gcloud auth list
gcloud config list project
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
CFT Scorecard has two dependencies:
Cloud Asset Inventory
Policy Library
Proceed to enable the Cloud Asset API in your project:
gcloud services enable cloudasset.googleapis.com \
    --project $GOOGLE_PROJECT

git clone https://github.com/forseti-security/policy-library.git
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME

Collect the data using Cloud Asset Inventory (CAI)
# Export resource data
gcloud asset export \
    --output-path=gs://$CAI_BUCKET_NAME/resource_inventory.json \
    --content-type=resource \
    --project=$GOOGLE_PROJECT

# Export IAM data
gcloud asset export \
    --output-path=gs://$CAI_BUCKET_NAME/iam_inventory.json \
    --content-type=iam-policy \
    --project=$GOOGLE_PROJECT

Analyze the CAI data with CFT Scorecard
curl -o cft https://storage.googleapis.com/cft-cli/latest/cft-linux-amd64
# make executable
chmod +x cft
./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
Adding more constraints to CFT Scorecard
# Add a new policy to whitelist members allowed the IAM Owner role
cat > policy-library/policies/constraints/iam_whitelist_owner.yaml << EOF
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPIAMAllowedBindingsConstraintV1
metadata:
  name: whitelist_owner
  annotations:
    description: List any users granted Owner
spec:
  severity: high
  match:
    target: ["organization/*"]
    exclude: []
  parameters:
    mode: whitelist
    assetType: cloudresourcemanager.googleapis.com/Project
    role: roles/owner
    members:
    - "serviceAccount:admiral@qwiklabs-services-prod.iam.gserviceaccount.com"
EOF

./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
export USER_ACCOUNT="$(gcloud config get-value core/account)"
export PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_PROJECT --format="get(projectNumber)")

# Add a new policy to whitelist the IAM Editor Role
cat > policy-library/policies/constraints/iam_identify_outside_editors.yaml << EOF
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPIAMAllowedBindingsConstraintV1
metadata:
  name: identify_outside_editors
  annotations:
    description: list any users outside the organization granted Editor
spec:
  severity: high
  match:
    target: ["organization/*"]
    exclude: []
  parameters:
    mode: whitelist
    assetType: cloudresourcemanager.googleapis.com/Project
    role: roles/editor
    members:
    - "user:$USER_ACCOUNT"
    - "serviceAccount:*$PROJECT_NUMBER*gserviceaccount.com"
    - "serviceAccount:$GOOGLE_PROJECT*gserviceaccount.com"
EOF
./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
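Note that the unquoted EOF heredocs above expand $USER_ACCOUNT and friends at the moment the file is written; quoting the delimiter ('EOF') would keep them literal. A minimal demonstration with made-up file names:

```shell
NAME=alice
# Unquoted delimiter: $NAME is expanded when the file is written
cat > /tmp/expanded.txt << EOF
member: $NAME
EOF
# Quoted delimiter: $NAME is written literally
cat > /tmp/literal.txt << 'EOF'
member: $NAME
EOF
cat /tmp/expanded.txt   # member: alice
cat /tmp/literal.txt    # member: $NAME
rm /tmp/expanded.txt /tmp/literal.txt
```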

=> GSP193 VPC Network Peering
Google Cloud Platform (GCP) Virtual Private Cloud (VPC) Network Peering allows private connectivity across two VPC networks regardless of whether or not they belong to the same project or the same organization.
VPC Network Peering allows you to build SaaS (Software-as-a-Service) ecosystems in GCP, making services available privately across different VPC networks within and across organizations, allowing workloads to communicate in private space.

VPC Network Peering is useful for:
Organizations with several network administrative domains.
Organizations that want to peer with other organizations.

VPC Network Peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:

Network Latency: Private networking offers lower latency than public IP networking.

Network Security: Service owners do not need to have their services exposed to the public Internet and deal with its associated risks.

Network Cost: Networks that are peered can use internal IPs to communicate and save GCP egress bandwidth costs. Regular network pricing still applies to all traffic.

gcloud auth list
gcloud config list project
gcloud config set project <PROJECT_ID2>
Project-A, first cloud shell and create a custom network:
gcloud compute networks create network-a --subnet-mode custom
subnet:
gcloud compute networks subnets create network-a-central --network network-a \
    --range 10.0.0.0/16 --region us-central1
VM:
gcloud compute instances create vm-a --zone us-central1-a --network network-a --subnet network-a-central
enable ssh and icmp:
gcloud compute firewall-rules create network-a-fw --network network-a --allow tcp:22,icmp

Project-B, first cloud shell and create a custom network:
gcloud compute networks create network-b --subnet-mode custom
gcloud compute networks subnets create network-b-central --network network-b \
    --range 10.8.0.0/16 --region us-central1
gcloud compute instances create vm-b --zone us-central1-a --network network-b --subnet network-b-central
gcloud compute firewall-rules create network-b-fw --network network-b --allow tcp:22,icmp

VPC Network > VPC network peering > Create connection >
Name, Network, Peered VPC network, Project ID, VPC network name, Create.
VPC Network Peering becomes ACTIVE and routes are exchanged. As soon as the peering moves to an ACTIVE state, traffic flows are set up.

gcloud compute routes list --project qwiklabs-gcp-04-71623b00f5a9

Navigation Menu > Compute Engine > VM instances > Copy the INTERNAL_IP for vm-a.
open project B, vm-b and ssh into it,
ping -c 5 <INTERNAL_IP of vm-a>

=> GSP499 User Authentication: Identity-Aware Proxy
Identity-Aware Proxy (IAP) is a Google Cloud Platform service that intercepts web requests sent to your application, authenticates the user making the request using the Google Identity Service, and only lets the requests through if they come from a user you authorize. In addition, it can modify the request headers to include information about the authenticated user.
gcloud auth list
gcloud config list project
git clone https://github.com/googlecodelabs/user-authentication-with-iap.git
cd user-authentication-with-iap
cd 1-HelloWorld
cat main.py
gcloud app deploy
[7] europe-west   (supports standard and flexible)
gcloud app browse
Restrict Access with IAP
Security > IAP > CONFIGURE CONSENT SCREEN
https://qwiklabs-gcp-00-92ba674fc1e8.ew.r.appspot.com/
/_gcp_iap/clear_login_cookie
cd app2
gcloud app deploy
gcloud app browse
curl -X GET <your-url-here> -H "X-Goog-Authenticated-User-Email: totally fake email"

IAP on and off
JWT
gcloud app deploy
gcloud app browse
JSON Web Token (JWT) is a compact URL-safe means of representing claims to be transferred between two parties.
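A JWT is three base64url segments joined by dots (header.payload.signature), so the payload can be inspected with shell tools. The token below is a made-up unsigned example for illustration, not one issued by IAP:

```shell
# Hypothetical unsigned token: header {"alg":"none"}, payload {"sub":"demo"}
TOKEN='eyJhbGciOiJub25lIn0.eyJzdWIiOiJkZW1vIn0.'
# Take the second dot-separated segment and map base64url chars to base64
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
# base64url drops '=' padding; restore it before decoding
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="$PAYLOAD="; done
echo "$PAYLOAD" | base64 -d    # {"sub":"demo"}
```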

=> GSP079 Getting Started with Cloud KMS
gcloud auth list
gcloud config list project
Create your own Cloud Storage bucket.
BUCKET_NAME=james_enron_corpus
gsutil mb gs://${BUCKET_NAME}
gsutil cp gs://enron_emails/allen-p/inbox/1. .
tail 1.
gcloud services enable cloudkms.googleapis.com
In order to encrypt the data, you need to create a KeyRing and a CryptoKey. KeyRings are useful for grouping keys. Keys can be grouped by environment (like test, staging, and prod) or by some other conceptual grouping. For this lab, your KeyRing will be called test and your CryptoKey will be called qwiklab.
KEYRING_NAME=test CRYPTOKEY_NAME=qwiklab
gcloud kms keyrings create $KEYRING_NAME --location global
gcloud kms keys create $CRYPTOKEY_NAME --location global \
      --keyring $KEYRING_NAME \
      --purpose encryption
Navigation menu > IAM & Admin > Cryptographic keys > Go to Key Management:
Encrypt data:
PLAINTEXT=$(cat 1. | base64 -w0)
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
  -d "{\"plaintext\":\"$PLAINTEXT\"}" \
  -H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
  -H "Content-Type: application/json"

curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
    -d "{\"plaintext\":\"$PLAINTEXT\"}" \
    -H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
    -H "Content-Type:application/json" \
  | jq .ciphertext -r > 1.encrypted

curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:decrypt" \
    -d "{\"ciphertext\":\"$(cat 1.encrypted)\"}" \
    -H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
    -H "Content-Type:application/json" \
  | jq .plaintext -r | base64 -d
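KMS carries binary data as base64 inside the JSON request and response, which is why every call above wraps plaintext with base64 and unwraps the result with base64 -d. The round trip in isolation:

```shell
# Encode, then decode, exactly as the KMS calls above do around the API
PLAINTEXT=$(printf 'hello kms' | base64 -w0)
echo "$PLAINTEXT"                        # aGVsbG8ga21z
printf '%s' "$PLAINTEXT" | base64 -d     # hello kms
```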

gsutil cp 1.encrypted gs://${BUCKET_NAME}
Configure IAM Permissions
USER_EMAIL=$(gcloud auth list --limit=1 2>/dev/null | grep '@' | awk '{print $2}')
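That pipeline greps the account line out of gcloud's table output and prints its second whitespace-separated field; on canned output (with a fake account) it behaves like this:

```shell
# Simulated `gcloud auth list` output; the account is invented
printf 'ACTIVE  ACCOUNT\n*       student@example.com\n' \
  | grep '@' | awk '{print $2}'    # student@example.com
```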
Assign that user the ability to manage KMS resources:
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
    --location global \
    --member user:$USER_EMAIL \
    --role roles/cloudkms.admin
Without the cloudkms.cryptoKeyEncrypterDecrypter permission, the authorized user will not be able to use the keys to encrypt or decrypt data:
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
    --location global \
    --member user:$USER_EMAIL \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter
Back up data on the Command Line:
gsutil -m cp -r gs://enron_emails/allen-p .
MYDIR=allen-p
FILES=$(find $MYDIR -type f -not -name "*.encrypted")
for file in $FILES; do
  PLAINTEXT=$(cat $file | base64 -w0)
  curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
    -d "{\"plaintext\":\"$PLAINTEXT\"}" \
    -H "Authorization:Bearer $(gcloud auth application-default print-access-token)" \
    -H "Content-Type:application/json" \
  | jq .ciphertext -r > $file.encrypted
done
gsutil -m cp allen-p/inbox/*.encrypted gs://${BUCKET_NAME}/allen-p/inbox
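The same find/loop shape, exercised locally with base64 standing in for the KMS encrypt call (directory and file names invented):

```shell
mkdir -p /tmp/kms-demo
printf 'a' > /tmp/kms-demo/one.txt
printf 'b' > /tmp/kms-demo/skip.encrypted
# Pick every file without a .encrypted suffix and write a transformed copy
for file in $(find /tmp/kms-demo -type f -not -name "*.encrypted"); do
  base64 -w0 "$file" > "$file.encrypted"   # stand-in for the KMS encrypt call
done
ls /tmp/kms-demo    # one.txt  one.txt.encrypted  skip.encrypted
rm -r /tmp/kms-demo
```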
View Cloud Audit Logs

=> GSP178 Setting up a Private Kubernetes Cluster
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud beta container clusters create private-cluster \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.16/28 \
    --enable-ip-alias \
    --create-subnetwork ""
gcloud compute networks subnets list --network default
gke-private-cluster-subnet-fbd51ea9
gcloud compute networks subnets describe gke-private-cluster-subnet-fbd51ea9 --region us-central1
gcloud compute instances create source-instance --zone us-central1-a --scopes 'https://www.googleapis.com/auth/cloud-platform'
35.188.97.197
Get the <External_IP> of the source-instance with:
gcloud compute instances describe source-instance --zone us-central1-a | grep natIP
gcloud container clusters update private-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks 35.188.97.197/32
gcloud compute ssh source-instance --zone us-central1-a
gcloud components install kubectl
gcloud container clusters get-credentials private-cluster --zone us-central1-a
kubectl get nodes --output yaml | grep -A4 addresses
kubectl get nodes --output wide
exit
gcloud container clusters delete private-cluster --zone us-central1-a
Creating a private cluster that uses a custom subnetwork:
gcloud compute networks subnets create my-subnet \
    --network default \
    --range 10.0.4.0/22 \
    --enable-private-ip-google-access \
    --region us-central1 \
    --secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14

Create a private cluster that uses your subnetwork:
gcloud beta container clusters create private-cluster2 \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.32/28 \
    --subnetwork my-subnet \
    --services-secondary-range-name my-svc-range \
    --cluster-secondary-range-name my-pod-range
Authorize your external address range, replacing [MY_EXTERNAL_RANGE] with the CIDR range of the external addresses from the previous output:
gcloud container clusters update private-cluster2 \
    --enable-master-authorized-networks \
    --master-authorized-networks [MY_EXTERNAL_RANGE]
gcloud compute ssh source-instance --zone us-central1-a
gcloud container clusters get-credentials private-cluster2 --zone us-central1-a
kubectl get nodes --output yaml | grep -A4 addresses
Jymit commented 4 years ago
=>
=> Kubernetes in Google Cloud
=>

=> GSP055 Introduction to Docker
gcloud auth list
gcloud config list project
docker run hello-world
docker images
docker run hello-world
docker ps
docker ps -a
mkdir test && cd test
cat > Dockerfile <<EOF
# Use an official Node runtime as the parent image
FROM node:6

# Set the working directory in the container to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Make the container's port 80 available to the outside world
EXPOSE 80

# Run app.js using node when the container launches
CMD ["node", "app.js"]
EOF

cat > app.js <<EOF
const http = require('http');

const hostname = '0.0.0.0';
const port = 80;

const server = http.createServer((req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Hello World\n');
});

server.listen(port, hostname, () => {
    console.log('Server running at http://%s:%s/', hostname, port);
});

process.on('SIGINT', function() {
    console.log('Caught interrupt signal and will exit');
    process.exit();
});
EOF

docker build -t node-app:0.1 .
docker images
docker run -p 4000:80 --name my-app node-app:0.1
curl http://localhost:4000
docker stop my-app && docker rm my-app
docker run -p 4000:80 --name my-app -d node-app:0.1
docker ps
docker logs [container_id]
cd test
docker build -t node-app:0.2 .
docker run -p 8080:80 --name my-app-2 -d node-app:0.2
docker ps
curl http://localhost:8080
curl http://localhost:4000
docker logs -f [container_id]
docker exec -it [container_id] bash
ls
exit
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' [container_id]
gcloud config list project
docker tag node-app:0.2 gcr.io/[project-id]/node-app:0.2
docker images
docker push gcr.io/[project-id]/node-app:0.2
(GCR is Google Cloud Platform's private Docker image registry. It works with Google Kubernetes Engine clusters and Google Compute Engine instances out of the box, without setting up any authentication.)
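The gcr.io/[project-id]/[image]:[tag] naming convention used above can be picked apart with shell parameter expansion (the project ID below is a placeholder):

```shell
IMAGE="gcr.io/my-demo-project/node-app:0.2"
echo "${IMAGE%%/*}"     # registry host: gcr.io
echo "${IMAGE##*:}"     # tag: 0.2
```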
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi node-app:0.2 gcr.io/[project-id]/node-app node-app:0.1
docker rmi node:6
docker rmi $(docker images -aq) # remove remaining images
docker images
docker pull gcr.io/[project-id]/node-app:0.2
docker run -p 4000:80 -d gcr.io/[project-id]/node-app:0.2
curl http://localhost:4000

=> GSP100 Kubernetes Engine: Qwik Start
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud container clusters create moshimoshi
gcloud container clusters get-credentials moshimoshi
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service
http://104.198.66.36:8080/
gcloud container clusters delete moshimoshi

=> GSP021 Orchestrating the Cloud with Kubernetes
gcloud config list project
GKE:
gcloud config set compute/zone us-central1-b
gcloud container clusters create io
sample code:
git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git
cd orchestrate-with-kubernetes/kubernetes
ls
quick demo:
kubectl create deployment nginx --image=nginx:1.10.0
kubectl get pods
kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl get services
curl http://<External IP>:80
Pods:
cat pods/monolith.yaml
kubectl create -f pods/monolith.yaml
kubectl get pods
kubectl describe pods monolith
In the 2nd terminal, run this command to set up port-forwarding:
kubectl port-forward monolith 10080:80
curl http://127.0.0.1:10080
curl http://127.0.0.1:10080/secure
curl -u user http://127.0.0.1:10080/login
TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
kubectl logs monolith
Open a 3rd terminal and use the -f flag to get a stream of the logs happening in real-time:
kubectl logs -f monolith
curl http://127.0.0.1:10080
kubectl exec monolith --stdin --tty -c monolith -- /bin/sh
ping -c 3 google.com
exit
Services:
cd ~/orchestrate-with-kubernetes/kubernetes
cat pods/secure-monolith.yaml
kubectl create secret generic tls-certs --from-file tls/
kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
kubectl create -f pods/secure-monolith.yaml
cat services/monolith.yaml
kubectl create -f services/monolith.yaml
gcloud compute firewall-rules create allow-monolith-nodeport \
  --allow=tcp:31000
gcloud compute instances list
curl -k https://<EXTERNAL_IP>:31000
Add labels to pods:
kubectl get pods -l "app=monolith"
kubectl get pods -l "app=monolith,secure=enabled"
kubectl label pods secure-monolith 'secure=enabled'
kubectl get pods secure-monolith --show-labels
kubectl describe services monolith | grep Endpoints
gcloud compute instances list
curl -k https://<EXTERNAL_IP>:31000
Creating deployment:
cat deployments/auth.yaml
kubectl create -f deployments/auth.yaml
kubectl create -f services/auth.yaml
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml
kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
kubectl create -f deployments/frontend.yaml
kubectl create -f services/frontend.yaml
kubectl get services frontend
curl -k https://<EXTERNAL-IP>

=> GSP053 Managing Deployments Using Kubernetes Engine

=> GSP051 Continuous Delivery with Jenkins in Kubernetes Engine

=> GSP318 Kubernetes in Google Cloud: Challenge Lab
Creating Docker images on a host.
Running Docker containers on a host.
Storing Docker images in the Google Container Registry (GCR).
Deploying GCR images on Kubernetes.
Pushing updates onto Kubernetes.
Automating deployments to Kubernetes using Jenkins.
=>
=> Security in Google Cloud Platform Specialization
=>

=> 1 Google Cloud Platform Fundamentals: Core Infrastructure
- GCP, GCE, GCS, GKE, GAE, BQ, ML
=> 2 Managing Security in Google Cloud Platform
- Cloud ID, IAM, VPC sec
=> 3 Security Best Practices in Google Cloud
- Service accounts, IAM roles, GCE best practices
=> 4 Mitigating Security Vulnerabilities on Google Cloud Platform
Protecting against Distributed Denial of Service Attacks
- Cloud Armor (App, DDOS defense), DLP API, Security Command Center, Stackdriver Monitoring and Logging, Forseti

=>
=> Professional Cloud Security Engineer
=>
https://cloud.google.com/certification/guides/cloud-security-engineer
https://cloud.google.com/certification/practice-exam/cloud-security-engineer

https://medium.com/ci-t/how-to-pass-the-google-professional-cloud-security-engineer-certification-74160bf4d205
https://www.linkedin.com/pulse/google-cloud-professional-security-engineer-exam-study-mark-johnson/
1. Review the exam guide (https://cloud.google.com/certification/guides/cloud-security-engineer)

1. Configuring access within a cloud solution environment
1.1 Configuring Cloud Identity
1.2 Managing user accounts
1.3 Managing service accounts
1.4 Managing authentication
1.5 Managing and implementing authorization controls
1.6 Defining resource hierarchy
2. Configuring network security
2.1 Designing network security
2.2 Configuring network segmentation
2.3 Establishing private connectivity
3. Ensuring data protection
3.1 Preventing data loss with the DLP API
3.2 Managing encryption at rest
4. Managing operations within a cloud solution environment
4.1 Building and deploying infrastructure
4.2 Building and deploying applications
4.3 Monitoring for security events
5. Ensuring compliance
5.1 Comprehension of regulatory concerns
5.2 Comprehension of compute environment concerns

2. Training: Security in Google Cloud Platform
https://www.coursera.org/specializations/security-google-cloud-platform 
https://github.com/Jymit/CheatSheet/issues/10#issuecomment-628061139 - Complete

3. Hands-on practice
Google Cloud Platform Free Tier
Security & Identity Fundamentals (https://github.com/Jymit/CheatSheet/issues/10#issuecomment-626743499) - Complete

4. Practice exam
https://cloud.google.com/certification/practice-exam/cloud-security-engineer

2 hours $200 
Multiple choice, 50 questions (70% to pass = 35 correct)