=>
=> Google Cloud Essentials
=>
=> GSP282 A Tour of Qwiklabs and the Google Cloud Platform
Compute: houses a variety of machine types that support any type of workload. The different computing options let you decide how involved you want to be with operational details and infrastructure amongst other things.
Storage: data storage and database options for structured or unstructured, relational or non-relational data.
Networking: services that balance application traffic and provision security rules amongst other things.
Stackdriver: a suite of cross-cloud logging, monitoring, trace, and other service reliability tools.
Tools: services for developers managing deployments and application build pipelines.
Big Data: services that allow you to process and analyze large datasets.
Artificial Intelligence: a suite of APIs that run specific artificial intelligence and machine learning tasks on Google Cloud Platform.
=> GSP001 Creating a Virtual Machine
gcloud auth list
gcloud config list project
GCE > new vm > n1-standard-2
gcloud compute instances create gcelab1 --machine-type n1-standard-2 --zone us-central1-c
sudo su -
apt-get update
apt-get install nginx -y
ps auwx | grep nginx
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone us-central1-c
gcloud compute ssh gcelab2 --zone us-central1-c
=> GSP093 Compute Engine: Qwik Start - Windows
Compute Engine > VM instances.
Windows Server 2012 R2 Datacenter, all other settings at their defaults.
gcloud config list project
gcloud compute instances get-serial-port-output instance-1 --zone us-central1-a
RDP into the Windows Server
=> GSP002 Getting Started with Cloud Shell & gcloud
Start Cloud Shell
Understanding Regions and Zones
Initializing Cloud SDK
The gcloud CLI is a part of the Google Cloud SDK. You need to download and install the SDK on your own system and initialize it (by running gcloud init) before you can use the gcloud command-line tool.
The gcloud CLI is automatically available in Cloud Shell. Since you're using Cloud Shell for this lab, you don't need to install gcloud manually.
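If you do install the SDK on your own machine, a minimal first run looks something like this (the project ID is a placeholder):
gcloud init
gcloud config set project <PROJECT_ID>
gcloud config set compute/zone europe-west2-a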
Regions and zones documentation:
https://cloud.google.com/compute/docs/regions-zones
europe-west2 is London, with zones a,b,c
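To list a region's zones from Cloud Shell:
gcloud compute zones list --filter="region:europe-west2"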
Setting environment variables
export PROJECT_ID=qwiklabs-gcp-01-29c2063fd69a
export ZONE=europe-west2-a
echo $PROJECT_ID, $ZONE
Create a virtual machine with gcloud
gcloud compute instances create gcelab2 --machine-type n1-standard-2 --zone $ZONE
Using gcloud commands
gcloud -h
gcloud config --help
gcloud help config
gcloud config list
gcloud config list --all
gcloud components list
Auto-completion
gcloud components install beta
gcloud beta interactive
gcloud compute instances describe gcelab2
SSH into your vm instance
gcloud compute ssh gcelab2 --zone $ZONE
exit
Use the Home directory
cd $HOME
vi ./.bashrc
:wq
=> GSP100 Kubernetes Engine: Qwik Start
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud container clusters create moshimoshi
gcloud container clusters get-credentials moshimoshi
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service
http://104.198.66.36:8080/
gcloud container clusters delete moshimoshi
=> GSP007 Set Up Network and HTTP Load Balancers
gcloud auth list
gcloud config list project
gcloud config set compute/region europe-west2
gcloud config set compute/zone europe-west2-a
cat << EOF > startup.sh
#! /bin/bash
apt-get update
apt-get install -y nginx
service nginx start
sed -i -- 's/nginx/Google Cloud Platform - '"\$HOSTNAME"'/' /var/www/html/index.nginx-debian.html
EOF
gcloud compute instance-templates create nginx-template \
--metadata-from-file startup-script=startup.sh
gcloud compute target-pools create nginx-pool
gcloud compute instance-groups managed create nginx-group \
--base-instance-name nginx \
--size 2 \
--template nginx-template \
--target-pool nginx-pool
gcloud compute instances list
gcloud compute firewall-rules create www-firewall --allow tcp:80
gcloud compute instances list
Now browse to the instances' external IPs.
Create a Network Load Balancer
Create an L3 network load balancer targeting your instance group:
gcloud compute forwarding-rules create nginx-lb \
--region europe-west2 \
--ports=80 \
--target-pool nginx-pool
gcloud compute forwarding-rules list
Create an HTTP(S) Load Balancer
Health checks verify that the instance is responding to HTTP or HTTPS traffic:
gcloud compute http-health-checks create http-basic-check
Define an HTTP service and map a port name to the relevant port for the instance group. Now the load balancing service can forward traffic to the named port:
gcloud compute instance-groups managed \
set-named-ports nginx-group \
--named-ports http:80
Create a backend service:
gcloud compute backend-services create nginx-backend \
--protocol HTTP --http-health-checks http-basic-check --global
Add the instance group into the backend service:
gcloud compute backend-services add-backend nginx-backend \
--instance-group nginx-group \
--instance-group-zone europe-west2-a \
--global
Create a default URL map that directs all incoming requests to all your instances:
gcloud compute url-maps create web-map \
--default-service nginx-backend
Create a target HTTP proxy to route requests to your URL map:
gcloud compute target-http-proxies create http-lb-proxy \
--url-map web-map
Create a global forwarding rule to handle and route incoming requests. A forwarding rule sends traffic to a specific target HTTP or HTTPS proxy depending on the IP address, IP protocol, and port specified. The global forwarding rule does not support multiple ports:
gcloud compute forwarding-rules create http-content-rule \
--global \
--target-http-proxy http-lb-proxy \
--ports 80
gcloud compute forwarding-rules list
Take note of the IP_ADDRESS for the http-content-rule forwarding rule and browse to it.
=> GSP313 Google Cloud Essentials: Challenge Lab
Task 1: Create a project jumphost instance
Task 2: Create a Kubernetes service cluster
Task 3: Set up an HTTP load balancer
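A rough sketch of the three tasks using commands from the labs above (names, zone, and machine type here are placeholders; the lab prescribes its own):
gcloud compute instances create nucleus-jumphost --machine-type f1-micro --zone us-east1-b
gcloud container clusters create nucleus-cluster --zone us-east1-b
gcloud container clusters get-credentials nucleus-cluster --zone us-east1-b
kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:2.0
kubectl expose deployment hello-app --type=LoadBalancer --port 8080
For the HTTP load balancer, repeat the GSP007 sequence: startup script, instance template, managed instance group, firewall rule, health check, backend service, URL map, target proxy, and global forwarding rule.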
=>
=> Security & Identity Fundamentals
=>
=> GSP064 Cloud IAM: Qwik Start
Sign in to GCP Console as the first user
Sign in to GCP Console as the second user
Create a bucket, add a txt file, rename it (the destination name must differ from the source):
gsutil mv -p gs://james4n6_memes/sample.txt gs://james4n6_memes/sample2.txt
delete user 2 role in IAM, check
gsutil ls gs://[YOUR_BUCKET_NAME]
gs://[YOUR_BUCKET_NAME]/sample.txt
Give user 2 Storage > Storage Object Viewer role
=> GSP190 IAM Custom Roles
Activate Cloud Shell
gcloud auth list
gcloud config list project
In the Cloud IAM world, permissions are represented in the form:
<service>.<resource>.<verb>
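For example, compute.instances.list grants permission to list Compute Engine instances. To see which permissions a predefined role bundles:
gcloud iam roles describe roles/compute.viewer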
DEVSHELL_PROJECT_ID=qwiklabs-gcp-04-2fdbed2742ad
gcloud iam list-testable-permissions //cloudresourcemanager.googleapis.com/projects/$DEVSHELL_PROJECT_ID
gcloud iam roles describe [ROLE_NAME]
gcloud iam list-grantable-roles
gcloud iam roles create
Create your role definition YAML file
vi role-definition.yaml
title: "Role Editor"
description: "Edit access for App Versions"
stage: "ALPHA"
includedPermissions:
- appengine.versions.create
- appengine.versions.delete
gcloud iam roles create editor --project $DEVSHELL_PROJECT_ID \
--file role-definition.yaml
Create a custom role using flags
gcloud iam roles create viewer --project $DEVSHELL_PROJECT_ID \
--title "Role Viewer" --description "Custom role description." \
--permissions compute.instances.get,compute.instances.list --stage ALPHA
gcloud iam roles list --project $DEVSHELL_PROJECT_ID
gcloud iam roles list
gcloud iam roles describe editor --project $DEVSHELL_PROJECT_ID
vi new-role-definition.yaml
Update with file:
gcloud iam roles update editor --project $DEVSHELL_PROJECT_ID \
--file new-role-definition.yaml
Update with flag:
gcloud iam roles update viewer --project $DEVSHELL_PROJECT_ID \
--add-permissions storage.buckets.get,storage.buckets.list
disable:
gcloud iam roles update viewer --project $DEVSHELL_PROJECT_ID \
--stage DISABLED
delete:
gcloud iam roles delete viewer --project $DEVSHELL_PROJECT_ID
undelete:
gcloud iam roles undelete viewer --project $DEVSHELL_PROJECT_ID
=> GSP199 Service Accounts and Roles: Fundamentals
A service account is a special Google account that belongs to your application or a virtual machine (VM) instead of an individual end user. Your application uses the service account to call the Google API of a service, so that the users aren't directly involved.
gcloud auth list
gcloud config list project
DEVSHELL_PROJECT_ID=qwiklabs-gcp-00-2dd8617a8e10
User-managed service accounts:
PROJECT_NUMBER-compute@developer.gserviceaccount.com
Google-managed service accounts:
PROJECT_NUMBER@cloudservices.gserviceaccount.com
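List every service account visible in the current project:
gcloud iam service-accounts list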
Creating and Managing Service Accounts
Creating a service account:
gcloud iam service-accounts create my-sa-123 --display-name "my service account"
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
--member serviceAccount:my-sa-123@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com --role roles/editor
IAM & Admin > IAM > Service accounts and click on + Create Service Account:
Service account name: bigquery-qwiklab
Role: BigQuery Data Viewer and BigQuery User
Compute Engine > VM Instances > Create:
Name bigquery-instance
Region us-central1(Iowa)
Zone us-central1-a
Machine Type 1 vCPU (n1-standard-1)
Boot Disk Debian GNU/Linux 9 (stretch)
Service account bigquery-qwiklab
SSH:
sudo apt-get update
sudo apt-get install virtualenv -y
virtualenv -p python3 venv
source venv/bin/activate
sudo apt-get install -y git python3-pip
pip install google-cloud-bigquery
pip install pandas
echo "
from google.auth import compute_engine
from google.cloud import bigquery
credentials = compute_engine.Credentials(
service_account_email='YOUR_SERVICE_ACCOUNT')
query = '''
SELECT
year,
COUNT(1) as num_babies
FROM
publicdata.samples.natality
WHERE
year > 2000
GROUP BY
year
'''
client = bigquery.Client(
project='YOUR_PROJECT_ID',
credentials=credentials)
print(client.query(query).to_dataframe())
" > query.py
sed -i -e "s/YOUR_PROJECT_ID/$(gcloud config get-value project)/g" query.py
cat query.py
sed -i -e "s/YOUR_SERVICE_ACCOUNT/bigquery-qwiklab@$(gcloud config get-value project).iam.gserviceaccount.com/g" query.py
cat query.py
python query.py
=> GSP698 Securing Google Cloud with CFT Scorecard
Setting up CFT Scorecard
Running a CFT Scorecard assessment
Adding new CFT Scorecard policy
CAI stands for Cloud Asset Inventory
gcloud auth list
gcloud config list project
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
CFT Scorecard has two dependencies:
Cloud Asset Inventory
Policy Library
Proceed to enable the Cloud Asset API in your project:
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
git clone https://github.com/forseti-security/policy-library.git
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
Collect the data using Cloud Asset Inventory (CAI)
# Export resource data
gcloud asset export \
--output-path=gs://$CAI_BUCKET_NAME/resource_inventory.json \
--content-type=resource \
--project=$GOOGLE_PROJECT
# Export IAM data
gcloud asset export \
--output-path=gs://$CAI_BUCKET_NAME/iam_inventory.json \
--content-type=iam-policy \
--project=$GOOGLE_PROJECT
Analyze the CAI data with CFT Scorecard
curl -o cft https://storage.googleapis.com/cft-cli/latest/cft-linux-amd64
# make executable
chmod +x cft
./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
Adding more constraints to CFT Scorecard
# Add a new policy to whitelist the IAM Owner Role
cat > policy-library/policies/constraints/iam_whitelist_owner.yaml << EOF
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPIAMAllowedBindingsConstraintV1
metadata:
  name: whitelist_owner
  annotations:
    description: List any users granted Owner
spec:
  severity: high
  match:
    target: ["organization/*"]
    exclude: []
  parameters:
    mode: whitelist
    assetType: cloudresourcemanager.googleapis.com/Project
    role: roles/owner
    members:
    - "serviceAccount:admiral@qwiklabs-services-prod.iam.gserviceaccount.com"
EOF
./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
export USER_ACCOUNT="$(gcloud config get-value core/account)"
export PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_PROJECT --format="get(projectNumber)")
# Add a new policy to whitelist the IAM Editor Role
cat > policy-library/policies/constraints/iam_identify_outside_editors.yaml << EOF
apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: GCPIAMAllowedBindingsConstraintV1
metadata:
  name: identify_outside_editors
  annotations:
    description: list any users outside the organization granted Editor
spec:
  severity: high
  match:
    target: ["organization/*"]
    exclude: []
  parameters:
    mode: whitelist
    assetType: cloudresourcemanager.googleapis.com/Project
    role: roles/editor
    members:
    - "user:$USER_ACCOUNT"
    - "serviceAccount:*$PROJECT_NUMBER*gserviceaccount.com"
    - "serviceAccount:$GOOGLE_PROJECT*gserviceaccount.com"
EOF
./cft scorecard --policy-path=policy-library/ --bucket=$CAI_BUCKET_NAME
=> GSP193 VPC Network Peering
Google Cloud Platform (GCP) Virtual Private Cloud (VPC) Network Peering allows private connectivity across two VPC networks regardless of whether or not they belong to the same project or the same organization.
VPC Network Peering allows you to build SaaS (Software-as-a-Service) ecosystems in GCP, making services available privately across different VPC networks within and across organizations, allowing workloads to communicate in private space.
VPC Network Peering is useful for:
Organizations with several network administrative domains.
Organizations that want to peer with other organizations.
VPC Network Peering gives you several advantages over using external IP addresses or VPNs to connect networks, including:
Network Latency: Private networking offers lower latency than public IP networking.
Network Security: Service owners do not need to have their services exposed to the public Internet and deal with its associated risks.
Network Cost: Networks that are peered can use internal IPs to communicate and save GCP egress bandwidth costs. Regular network pricing still applies to all traffic.
gcloud auth list
gcloud config list project
gcloud config set project <PROJECT_ID2>
Project-A, first cloud shell and create a custom network:
gcloud compute networks create network-a --subnet-mode custom
subnet:
gcloud compute networks subnets create network-a-central --network network-a \
--range 10.0.0.0/16 --region us-central1
VM:
gcloud compute instances create vm-a --zone us-central1-a --network network-a --subnet network-a-central
enable ssh and icmp:
gcloud compute firewall-rules create network-a-fw --network network-a --allow tcp:22,icmp
Project-B, first cloud shell and create a custom network:
gcloud compute networks create network-b --subnet-mode custom
gcloud compute networks subnets create network-b-central --network network-b \
--range 10.8.0.0/16 --region us-central1
gcloud compute instances create vm-b --zone us-central1-a --network network-b --subnet network-b-central
gcloud compute firewall-rules create network-b-fw --network network-b --allow tcp:22,icmp
VPC Network > VPC network peering > Create connection >
Name, Network, Peering VPC network, project ID, VPC network name, create.
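The same connection can be made with gcloud instead of the console (peering names here are placeholders; run one command from each project):
gcloud compute networks peerings create peer-ab --network network-a --peer-project <PROJECT_ID2> --peer-network network-b
gcloud compute networks peerings create peer-ba --network network-b --peer-project <PROJECT_ID1> --peer-network network-a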
VPC Network Peering becomes ACTIVE and routes are exchanged. As soon as the peering moves to an ACTIVE state, traffic flows are set up.
gcloud compute routes list --project qwiklabs-gcp-04-71623b00f5a9
Navigation Menu > Compute Engine > VM instances > Copy the INTERNAL_IP for vm-a.
open project B, vm-b and ssh into it,
ping -c 5 <INTERNAL_IP_OF_VM_A>
=> GSP499 User Authentication: Identity-Aware Proxy
Identity-Aware Proxy (IAP) is a Google Cloud Platform service that intercepts web requests sent to your application, authenticates the user making the request using the Google Identity Service, and only lets the requests through if they come from a user you authorize. In addition, it can modify the request headers to include information about the authenticated user.
gcloud auth list
gcloud config list project
git clone https://github.com/googlecodelabs/user-authentication-with-iap.git
cd user-authentication-with-iap
cd 1-HelloWorld
cat main.py
gcloud app deploy
[7] europe-west (supports standard and flexible)
gcloud app browse
Restrict Access with IAP
Security > IAP > CONFIGURE CONSENT SCREEN
https://qwiklabs-gcp-00-92ba674fc1e8.ew.r.appspot.com/
/_gcp_iap/clear_login_cookie
cd app2
gcloud app deploy
gcloud app browse
curl -X GET <your-url-here> -H "X-Goog-Authenticated-User-Email: totally fake email"
IAP on and off
JWT
gcloud app deploy
gcloud app browse
JSON Web Token (JWT) is a compact URL-safe means of representing claims to be transferred between two parties.
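To inspect a token's claims from the shell (decode only, no signature verification; $JWT stands in for the token value):
PAYLOAD=$(echo "$JWT" | cut -d. -f2 | tr '_-' '/+')
# base64url drops padding; restore it before decoding
case $(( ${#PAYLOAD} % 4 )) in 2) PAYLOAD="$PAYLOAD==";; 3) PAYLOAD="$PAYLOAD=";; esac
echo "$PAYLOAD" | base64 -d | jq .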
=> GSP079 Getting Started with Cloud KMS
gcloud auth list
gcloud config list project
Create your own Cloud Storage bucket.
BUCKET_NAME=james_enron_corpus
gsutil mb gs://${BUCKET_NAME}
gsutil cp gs://enron_emails/allen-p/inbox/1. .
tail 1.
gcloud services enable cloudkms.googleapis.com
In order to encrypt the data, you need to create a KeyRing and a CryptoKey. KeyRings are useful for grouping keys. Keys can be grouped by environment (like test, staging, and prod) or by some other conceptual grouping. For this lab, your KeyRing will be called test and your CryptoKey will be called qwiklab.
KEYRING_NAME=test CRYPTOKEY_NAME=qwiklab
gcloud kms keyrings create $KEYRING_NAME --location global
gcloud kms keys create $CRYPTOKEY_NAME --location global \
--keyring $KEYRING_NAME \
--purpose encryption
Navigation menu > IAM & Admin > Cryptographic keys > Go to Key Management:
Encrypt data:
PLAINTEXT=$(cat 1. | base64 -w0)
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
-d "{\"plaintext\":\"$PLAINTEXT\"}" \
-H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
-H "Content-Type: application/json"
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
-d "{\"plaintext\":\"$PLAINTEXT\"}" \
-H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
-H "Content-Type:application/json" \
| jq .ciphertext -r > 1.encrypted
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:decrypt" \
-d "{\"ciphertext\":\"$(cat 1.encrypted)\"}" \
-H "Authorization:Bearer $(gcloud auth application-default print-access-token)"\
-H "Content-Type:application/json" \
| jq .plaintext -r | base64 -d
gsutil cp 1.encrypted gs://${BUCKET_NAME}
Configure IAM Permissions
USER_EMAIL=$(gcloud auth list --limit=1 2>/dev/null | grep '@' | awk '{print $2}')
Assign that user the ability to manage KMS resources:
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
--location global \
--member user:$USER_EMAIL \
--role roles/cloudkms.admin
Without the cloudkms.cryptoKeyEncrypterDecrypter permission, the authorized user will not be able to use the keys to encrypt or decrypt data:
gcloud kms keyrings add-iam-policy-binding $KEYRING_NAME \
--location global \
--member user:$USER_EMAIL \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter
Back up data on the Command Line:
gsutil -m cp -r gs://enron_emails/allen-p .
MYDIR=allen-p
FILES=$(find $MYDIR -type f -not -name "*.encrypted")
for file in $FILES; do
PLAINTEXT=$(cat $file | base64 -w0)
curl -v "https://cloudkms.googleapis.com/v1/projects/$DEVSHELL_PROJECT_ID/locations/global/keyRings/$KEYRING_NAME/cryptoKeys/$CRYPTOKEY_NAME:encrypt" \
-d "{\"plaintext\":\"$PLAINTEXT\"}" \
-H "Authorization:Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type:application/json" \
| jq .ciphertext -r > $file.encrypted
done
gsutil -m cp allen-p/inbox/*.encrypted gs://${BUCKET_NAME}/allen-p/inbox
View Cloud Audit Logs
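The lab browses these in the console; a rough Cloud Shell equivalent (the filter is my assumption, matching KMS Admin Activity entries):
gcloud logging read 'protoPayload.serviceName="cloudkms.googleapis.com"' --limit 10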
=> GSP178 Setting up a Private Kubernetes Cluster
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud beta container clusters create private-cluster \
--enable-private-nodes \
--master-ipv4-cidr 172.16.0.16/28 \
--enable-ip-alias \
--create-subnetwork ""
gcloud compute networks subnets list --network default
gke-private-cluster-subnet-fbd51ea9
gcloud compute networks subnets describe gke-private-cluster-subnet-fbd51ea9 --region us-central1
gcloud compute instances create source-instance --zone us-central1-a --scopes 'https://www.googleapis.com/auth/cloud-platform'
35.188.97.197
Get the <External_IP> of the source-instance with:
gcloud compute instances describe source-instance --zone us-central1-a | grep natIP
gcloud container clusters update private-cluster \
--enable-master-authorized-networks \
--master-authorized-networks 35.188.97.197/32
gcloud compute ssh source-instance --zone us-central1-a
gcloud components install kubectl
gcloud container clusters get-credentials private-cluster --zone us-central1-a
kubectl get nodes --output yaml | grep -A4 addresses
kubectl get nodes --output wide
exit
gcloud container clusters delete private-cluster --zone us-central1-a
Creating a private cluster that uses a custom subnetwork:
gcloud compute networks subnets create my-subnet \
--network default \
--range 10.0.4.0/22 \
--enable-private-ip-google-access \
--region us-central1 \
--secondary-range my-svc-range=10.0.32.0/20,my-pod-range=10.4.0.0/14
Create a private cluster that uses your subnetwork:
gcloud beta container clusters create private-cluster2 \
--enable-private-nodes \
--enable-ip-alias \
--master-ipv4-cidr 172.16.0.32/28 \
--subnetwork my-subnet \
--services-secondary-range-name my-svc-range \
--cluster-secondary-range-name my-pod-range
Authorize your external address range, replacing [MY_EXTERNAL_RANGE] with the CIDR range of the external addresses from the previous output:
gcloud container clusters update private-cluster2 \
--enable-master-authorized-networks \
--master-authorized-networks [MY_EXTERNAL_RANGE]
gcloud compute ssh source-instance --zone us-central1-a
gcloud container clusters get-credentials private-cluster2 --zone us-central1-a
kubectl get nodes --output yaml | grep -A4 addresses
=>
=> Kubernetes in Google Cloud
=>
=> GSP055 Introduction to Docker
gcloud auth list
gcloud config list project
docker run hello-world
docker images
docker run hello-world
docker ps
docker ps -a
mkdir test && cd test
cat > Dockerfile <<EOF
# Use an official Node runtime as the parent image
FROM node:6
# Set the working directory in the container to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Make the container's port 80 available to the outside world
EXPOSE 80
# Run app.js using node when the container launches
CMD ["node", "app.js"]
EOF
cat > app.js <<EOF
const http = require('http');
const hostname = '0.0.0.0';
const port = 80;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World\n');
});
server.listen(port, hostname, () => {
console.log('Server running at http://%s:%s/', hostname, port);
});
process.on('SIGINT', function() {
console.log('Caught interrupt signal and will exit');
process.exit();
});
EOF
docker build -t node-app:0.1 .
docker images
docker run -p 4000:80 --name my-app node-app:0.1
curl http://localhost:4000
docker stop my-app && docker rm my-app
docker run -p 4000:80 --name my-app -d node-app:0.1
docker ps
docker logs [container_id]
cd test
docker build -t node-app:0.2 .
docker run -p 8080:80 --name my-app-2 -d node-app:0.2
docker ps
curl http://localhost:8080
curl http://localhost:4000
docker logs -f [container_id]
docker exec -it [container_id] bash
ls
exit
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' [container_id]
gcloud config list project
docker tag node-app:0.2 gcr.io/[project-id]/node-app:0.2
docker images
docker push gcr.io/[project-id]/node-app:0.2
(GCR is Google Cloud Platform's private Docker image registry offering. It works with Google Kubernetes Engine clusters and Google Compute Engine instances out of the box without setting up any authentication)
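If you push from a machine other than Cloud Shell, hook Docker up to your gcloud credentials first:
gcloud auth configure-docker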
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi node-app:0.2 gcr.io/[project-id]/node-app node-app:0.1
docker rmi node:6
docker rmi $(docker images -aq) # remove remaining images
docker images
docker pull gcr.io/[project-id]/node-app:0.2
docker run -p 4000:80 -d gcr.io/[project-id]/node-app:0.2
curl http://localhost:4000
=> GSP100 Kubernetes Engine: Qwik Start
gcloud auth list
gcloud config list project
gcloud config set compute/zone us-central1-a
gcloud container clusters create moshimoshi
gcloud container clusters get-credentials moshimoshi
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service
http://104.198.66.36:8080/
gcloud container clusters delete moshimoshi
=> GSP021 Orchestrating the Cloud with Kubernetes
gcloud config list project
GKE:
gcloud config set compute/zone us-central1-b
gcloud container clusters create io
sample code:
git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git
cd orchestrate-with-kubernetes/kubernetes
ls
quick demo:
kubectl create deployment nginx --image=nginx:1.10.0
kubectl get pods
kubectl expose deployment nginx --port 80 --type LoadBalancer
kubectl get services
curl http://<External IP>:80
Pods:
cat pods/monolith.yaml
kubectl create -f pods/monolith.yaml
kubectl get pods
kubectl describe pods monolith
In the 2nd terminal, run this command to set up port-forwarding:
kubectl port-forward monolith 10080:80
curl http://127.0.0.1:10080
curl http://127.0.0.1:10080/secure
curl -u user http://127.0.0.1:10080/login
TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
kubectl logs monolith
Open a 3rd terminal and use the -f flag to get a stream of the logs happening in real-time:
kubectl logs -f monolith
curl http://127.0.0.1:10080
kubectl exec monolith --stdin --tty -c monolith /bin/sh
ping -c 3 google.com
exit
Services:
cd ~/orchestrate-with-kubernetes/kubernetes
cat pods/secure-monolith.yaml
kubectl create secret generic tls-certs --from-file tls/
kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
kubectl create -f pods/secure-monolith.yaml
cat services/monolith.yaml
kubectl create -f services/monolith.yaml
gcloud compute firewall-rules create allow-monolith-nodeport \
--allow=tcp:31000
gcloud compute instances list
curl -k https://<EXTERNAL_IP>:31000
Add labels to pods:
kubectl get pods -l "app=monolith"
kubectl get pods -l "app=monolith,secure=enabled"
kubectl label pods secure-monolith 'secure=enabled'
kubectl get pods secure-monolith --show-labels
kubectl describe services monolith | grep Endpoints
gcloud compute instances list
curl -k https://<EXTERNAL_IP>:31000
Creating deployment:
cat deployments/auth.yaml
kubectl create -f deployments/auth.yaml
kubectl create -f services/auth.yaml
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml
kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
kubectl create -f deployments/frontend.yaml
kubectl create -f services/frontend.yaml
kubectl get services frontend
curl -k https://<EXTERNAL-IP>
=> GSP053 Managing Deployments Using Kubernetes Engine
=> GSP051 Continuous Delivery with Jenkins in Kubernetes Engine
=> GSP318 Kubernetes in Google Cloud: Challenge Lab
Creating Docker images on a host.
Running Docker containers on a host.
Storing Docker images in Google Container Registry (GCR).
Deploying GCR images on Kubernetes.
Pushing updates onto Kubernetes.
Automating deployments to Kubernetes using Jenkins.
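A compressed reminder of the command flow those tasks exercise (image name, tags, and deployment name are placeholders):
docker build -t gcr.io/[project-id]/my-app:v1 .
docker push gcr.io/[project-id]/my-app:v1
kubectl create deployment my-app --image=gcr.io/[project-id]/my-app:v1
kubectl set image deployment/my-app my-app=gcr.io/[project-id]/my-app:v2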
=>
=> Security in Google Cloud Platform Specialization
=>
=> 1 Google Cloud Platform Fundamentals: Core Infrastructure
- GCP, GCE, GCS, GKE, GAE, BQ, ML
=> 2 Managing Security in Google Cloud Platform
- Cloud ID, IAM, VPC sec
=> 3 Security Best Practices in Google Cloud
- Service accounts, IAM roles, GCE best practices
=> 4 Mitigating Security Vulnerabilities on Google Cloud Platform
Protecting against Distributed Denial of Service Attacks
- Cloud Armor (App, DDOS defense), DLP API, Security Command Center, Stackdriver Monitoring and Logging, Forseti
=>
=> Professional Cloud Security Engineer
=>
https://cloud.google.com/certification/guides/cloud-security-engineer
https://cloud.google.com/certification/practice-exam/cloud-security-engineer
https://medium.com/ci-t/how-to-pass-the-google-professional-cloud-security-engineer-certification-74160bf4d205
https://www.linkedin.com/pulse/google-cloud-professional-security-engineer-exam-study-mark-johnson/
1. Review the exam guide (https://cloud.google.com/certification/guides/cloud-security-engineer)
1. Configuring access within a cloud solution environment
1.1 Configuring Cloud Identity
1.2 Managing user accounts
1.3 Managing service accounts
1.4 Managing authentication
1.5 Managing and implementing authorization controls
1.6 Defining resource hierarchy
2. Configuring network security
2.1 Designing network security
2.2 Configuring network segmentation
2.3 Establishing private connectivity
3. Ensuring data protection
3.1 Preventing data loss with the DLP API
3.2 Managing encryption at rest
4. Managing operations within a cloud solution environment
4.1 Building and deploying infrastructure
4.2 Building and deploying applications
4.3 Monitoring for security events
5. Ensuring compliance
5.1 Comprehension of regulatory concerns
5.2 Comprehension of compute environment concerns
2. Training: Security in Google Cloud Platform
https://www.coursera.org/specializations/security-google-cloud-platform
https://github.com/Jymit/CheatSheet/issues/10#issuecomment-628061139 - Complete
3. Hands-on practice
Google Cloud Platform Free Tier
Security & Identity Fundamentals (https://github.com/Jymit/CheatSheet/issues/10#issuecomment-626743499) - Complete
4. Practice exam
https://cloud.google.com/certification/practice-exam/cloud-security-engineer
2 hours, $200.
Multiple choice (50 questions; ~70% = 35 correct to pass).
osdfir.blogspot.com
Google training