The Sticker App allows end users to browse images from Flickr, add them to a cart, and print them as laptop stickers. The app also shows trending stickers, reflecting the stickers most frequently viewed/printed by users, and lets users provide feedback on the app after they have chosen to print their stickers.
The Sticker App is composed of four microservices, most of which are implemented in Node.js, with the exception of one that is implemented in ASP.NET Core.
In addition to the technologies mentioned above, this app also uses:
You have 3 options for deploying this app:
Deploy locally using Docker Compose; each microservice and each of its required storage resources (e.g. MongoDB, MySQL, and Kafka) runs in its own Docker container on your local developer machine.
Deploy the app to Azure Container Services using Kubernetes and Helm. Specifically:
Deploy production and test versions of the app using a Jenkins CI/CD pipeline. Specifically:
(Optional) Configure App Insights by adding an App Insights resource. To do this, follow the Set up an App Insights resource section, which describes how to create this resource and how to retrieve the Instrumentation Key. Finally, update the AI_IKEY setting in the apigateway/debug.env file (a sketch of this file appears after these configuration steps). If you choose not to configure this, the app will still function, but no diagnostic logging will be collected in Azure.
(Optional) Configure AAD, which enables the end user to log in and complete the sticker checkout process. If you choose NOT to configure this, the app will only be partially functional: the end user will be unable to complete the sticker checkout process. However, the app will still launch and allow the user to browse stickers and add/view them in the cart.
Follow these steps to configure AAD:
a. Refer to the AAD Setup section below to create the required AAD resources and configure the app for email and Facebook authentication.
b. In the Azure Portal, for the B2C Tenant that you created in the previous step, update the application's Reply URL to: http://localhost:3000/users/auth/return.
c. Set the following values in the apigateway/debug.env file. These are retrieved via the Azure Portal: click on the B2C Tenant and, once it opens, click the Azure AD B2C Settings tile in the main section of the page, which opens the detailed settings:
As a result, the end user should now be able to click 'Log In' to sign in or sign up using email or Facebook, add stickers to the cart, and check out. Finally, the user can 'Log Out' of the app.
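For reference, here is a minimal sketch of the relevant apigateway/debug.env entry. Only AI_IKEY is named above; the AAD values go under whatever keys the file already defines and are not reproduced here:

# apigateway/debug.env (sketch)
AI_IKEY=<your-app-insights-instrumentation-key>
# AAD values from the Azure AD B2C Settings blade go under the file's existing keys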
$ docker-compose -f docker-compose.dev.yml up -d
Then open your browser to http://localhost:3000
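To verify the containers are up, tail their logs, or tear the stack down when you are done, the standard Docker Compose commands work against the same file:

$ docker-compose -f docker-compose.dev.yml ps
$ docker-compose -f docker-compose.dev.yml logs -f
$ docker-compose -f docker-compose.dev.yml down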
This repository includes a chart for Helm, the package manager for Kubernetes, to simplify deploying the application.
Ensure kubectl is configured for your cluster, and that the Helm client is on your path. See the Helm quickstart guide for instructions.
Build the app's React client. From the repository root:
$ docker-compose -f docker-compose.build-client.yml up
The chart expects these images, where your-registry.azurecr.io is your private registry:
| image name | source directory |
|---|---|
| your-registry.azurecr.io/stickerapp/apigateway:1.0 | apigateway |
| your-registry.azurecr.io/stickerapp/checkout:1.0 | checkoutService |
| your-registry.azurecr.io/stickerapp/session:1.0 | sessionService |
| your-registry.azurecr.io/stickerapp/stickers:1.0 | stickerService |
The chart's imageTag value is used for these images. It defaults to 1.0 and can be overridden. You can build and push the images from the command line, e.g. for the sticker service:
$ docker login your-registry.azurecr.io -u adminName -p password
$ cd stickerService
$ docker build -t your-registry.azurecr.io/stickerapp/stickers:1.0 .
$ docker push your-registry.azurecr.io/stickerapp/stickers:1.0
The user name and password for your registry can be found by going to the Azure Portal and selecting the Access keys blade for your registry.
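To build and push all four images in one pass, a small loop like this sketch works from the repository root, using the directory-to-image mapping from the table above:

# run from the repository root; log in to the registry first (docker login as shown above)
for pair in apigateway:apigateway checkoutService:checkout sessionService:session stickerService:stickers; do
  dir=${pair%%:*}
  name=${pair##*:}
  docker build -t your-registry.azurecr.io/stickerapp/$name:1.0 "$dir"
  docker push your-registry.azurecr.io/stickerapp/$name:1.0
done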
Install Tiller, Helm's server-side component:
$ helm init
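Once Tiller's pod is running, you can confirm that the Helm client and server can reach each other:

$ helm version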
Deploy an ingress controller, if your cluster doesn't have one already:
$ helm install -f nginx-ingress-values.yaml --namespace kube-system stable/nginx-ingress
nginx-ingress-values.yaml, in this repository's k8s directory, contains settings which override the nginx-ingress chart's defaults to disable SSL redirecting and use a more recent controller image.
The app requires a Kafka cluster. You can deploy a small one with Helm:
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install -n kafka --set Replicas=1 --set zookeeper.Servers=1 --set zookeeper.Storage="1Gi" incubator/kafka
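The kafkaBroker and zookeeperConnect chart values used later need the in-cluster DNS names and ports of these services. The exact service names depend on the release name you gave the Kafka chart, so check them with kubectl:

$ kubectl get svc | grep -E 'kafka|zookeeper'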
Open a shell in the k8s directory.
Generate the docker-registry secret Kubernetes will use to pull the app's images. The included script can do this:
$ node generate-dockercfg.js
The app is reachable through the ingress controller's external IP address. To find this, inspect the ingress controller's service in the Kubernetes UI, or use kubectl. This external IP address will be needed for configuring AAD in the next step. For example, for an ingress controller deployed as described above:
$ kubectl get svc -l app=nginx-ingress --namespace kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awesome-narwhal-nginx-ingress-controller 10.0.190.16 52.173.17.217 80:32493/TCP,443:31437/TCP 40m
Additional steps are required to configure AAD, which enables the end user to log in and complete the sticker checkout process. If you choose NOT to configure this, the end user will be unable to complete the sticker checkout process, but the app will launch fine and still allow browsing stickers and adding/viewing them in the cart.
Refer to the AAD Setup section below to create the required AAD resources and configure the app for email and Facebook authentication.
Set required values in values.yaml (you can provide these on the command line with --set instead, if you don't mind a very long command line):
| required value | description |
|---|---|
| azureActiveDirectory.clientId | client ID for your Azure AD app |
| azureActiveDirectory.clientSecret | secret for your Azure AD app |
| azureActiveDirectory.destroySessionUrl | URL used to end AAD session |
| azureActiveDirectory.redirectUrl | post-login redirect URL |
| azureActiveDirectory.tenant | Azure AD tenant |
| registry | Docker registry, e.g. your-registry.azurecr.io |
| dockercfg | docker-registry secret |
| kafkaBroker | DNS name and port of a Kafka broker |
| zookeeperConnect | DNS name and port of a ZooKeeper instance |
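If you prefer --set over editing values.yaml, the install command later in this section could instead look like the sketch below. Every angle-bracket placeholder, and the Kafka/ZooKeeper service names (which depend on the release name you gave the Kafka chart), are assumptions rather than values from this repository:

helm install stickerapp \
  --set azureActiveDirectory.clientId=<client-id> \
  --set azureActiveDirectory.clientSecret=<client-secret> \
  --set azureActiveDirectory.destroySessionUrl=<logout-url> \
  --set azureActiveDirectory.redirectUrl=http://<ingress-external-ip>/users/auth/return \
  --set azureActiveDirectory.tenant=<tenant>.onmicrosoft.com \
  --set registry=your-registry.azurecr.io \
  --set dockercfg=<base64-dockercfg> \
  --set kafkaBroker=<kafka-service>:9092 \
  --set zookeeperConnect=<zookeeper-service>:2181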
IMPORTANT:
Collect the chart's dependencies:
$ helm dependency update stickerapp
Install the chart:
$ helm install stickerapp
NAME: honest-deer
...
You can inspect the deployment with the Kubernetes UI, helm, or kubectl:
$ helm status honest-deer
LAST DEPLOYED: Mon Jun 5 15:00:38 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
...
$ kubectl get all -l release=honest-deer
NAME READY STATUS RESTARTS AGE
po/honest-deer-apigateway-1160103434-9q3vr 1/1 Running 0 7m
po/honest-deer-checkout-545275974-f2rt5 1/1 Running 0 7m
po/honest-deer-session-1173111989-x7x37 1/1 Running 0 7m
...
Like any Kubernetes app, you can control this one with kubectl. For example, scaling the apigateway deployment to add a second pod:
$ kubectl get deploy -l component=apigateway
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
honest-deer-apigateway 1 1 1 1 9m
$ kubectl scale --replicas=2 deploy/honest-deer-apigateway
deployment "honest-deer-apigateway" scaled
$ kubectl get deploy -l component=apigateway
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
honest-deer-apigateway 2 2 2 2 10m
The steps in this section describe how to use Jenkins to set up a CI pipeline for the Sticker App. Specifically, each time the pipeline runs, the following steps will be performed:
A few additional points to note:
Ensure that these resources have been created (if they haven't already):
In the below steps, you will connect to the Jenkins VM using two different mechanisms:
a. SSH into the machine so that you can run CLI commands. This requires that you specify a public key when you configure the Jenkins VM so that you can connect to it from your local dev box via bash, which in turn requires an SSH public key stored in the ~/.ssh folder on your dev box.
b. Open the Jenkins dashboard via the browser so that you can set up the pipeline script. This requires that you either forward port 8080 to your local machine or allow a remote connection to the Jenkins VM on port 8080.
A private Docker registry. Follow the Azure Container Registry getting started guide to create one.
A Kubernetes cluster.
To create the cluster, run the following command from a bash shell on your local dev box so that the public key and user name are configured as these instructions assume; this requires that you have a local SSH public key stored in the ~/.ssh folder on your dev box:
RESOURCE_GROUP=my-resource-group
DNS_PREFIX=some-unique-value
CLUSTER_NAME=any-acs-cluster-name
az acs create \
--orchestrator-type=kubernetes \
--resource-group $RESOURCE_GROUP \
--name=$CLUSTER_NAME \
--dns-prefix=$DNS_PREFIX \
--ssh-key-value ~/.ssh/id_rsa.pub \
--admin-username=azureuser \
--master-count=1 \
--agent-count=5 \
--agent-vm-size=Standard_D1_v2
Refer to the Azure Container Service walkthrough for further details.
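After the cluster is created, you can pull its kubectl credentials and confirm the agent nodes are ready (this assumes the same Azure CLI 2.0 az acs commands used above):

$ az acs kubernetes get-credentials --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME
$ kubectl get nodes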
Deploy an ingress controller to your cluster:
$ helm install -f nginx-ingress-values.yaml --namespace kube-system stable/nginx-ingress
nginx-ingress-values.yaml, in this repository's k8s directory, contains settings which override the nginx-ingress chart's defaults to disable SSL redirecting and use a more recent controller image.
The app requires a Kafka cluster. You can deploy a small one with Helm:
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install -n kafka --set Replicas=1 --set zookeeper.Servers=1 --set zookeeper.Storage="1Gi" incubator/kafka
Complete the following instructions once you have the Jenkins VM setup.
Install Docker and kubectl on the Jenkins VM.
Docker: SSH into the Jenkins VM from your local dev machine and run the following commands to give Jenkins access to the Docker daemon and verify it:
$ sudo chmod 777 /run/docker.sock
$ sudo docker info
Kubectl: SSH into the Jenkins VM from your local dev machine and run the following commands:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Next, from your local dev machine (not on the Jenkins VM), run the following commands to copy the kubectl config file to the Jenkins machine so that Jenkins jobs have access to the Kubernetes cluster.
export KUBE_MASTER=<your_cluster_master_fqdn>
export JENKINS_USER=<your_jenkins_user>
export JENKINS_SERVER=<your_jenkins_public_ip>
sudo ssh $JENKINS_USER@$JENKINS_SERVER sudo mkdir -m 777 /home/$JENKINS_USER/.kube/ \
&& sudo ssh $JENKINS_USER@$JENKINS_SERVER sudo mkdir /var/lib/jenkins/.kube/ \
&& sudo scp -3 -i ~/.ssh/id_rsa azureuser@$KUBE_MASTER:.kube/config $JENKINS_USER@$JENKINS_SERVER:~/.kube/config \
&& sudo ssh -i ~/.ssh/id_rsa $JENKINS_USER@$JENKINS_SERVER sudo cp /home/$JENKINS_USER/.kube/config /var/lib/jenkins/.kube/config
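To confirm the copy worked, you can run kubectl remotely on the Jenkins VM; it should list your cluster's nodes:

$ ssh $JENKINS_USER@$JENKINS_SERVER kubectl get nodes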
Helm: SSH into the Jenkins VM from your local dev machine and follow these instructions (https://docs.helm.sh/using_helm/#installing-helm) to install Helm and ensure that the Helm client is on your path. See the Helm quickstart guide for further details.
The steps below require that you navigate to the Jenkins dashboard via the browser.
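If you chose the port-forwarding option mentioned earlier, an SSH tunnel such as the one below makes the dashboard reachable at http://localhost:8080 (reusing the variables exported in the kubectl configuration step):

$ ssh -L 8080:localhost:8080 $JENKINS_USER@$JENKINS_SERVER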
Install required plug-ins if they are not already installed.
Add credentials required by the pipeline script. Go to Credentials->System, create the following credentials using the IDs outlined below:
Create the pipeline and add the script. To do this, under New Item, enter a name and choose to create Pipeline. In the pipeline's configuration, save the following changes:
Run the pipeline script. There are two ways to trigger the pipeline to run (a condensed sketch of the kind of commands the pipeline automates follows this list).
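For orientation only: the pipeline automates essentially the same docker build/push and Helm steps shown earlier in this document. The condensed sequence below is an illustration for a single service, not the repository's actual pipeline script; the image name, Helm release name, and use of Jenkins' BUILD_NUMBER as the image tag are all assumptions:

# illustrative only - not the actual pipeline script
docker build -t your-registry.azurecr.io/stickerapp/stickers:$BUILD_NUMBER stickerService
docker push your-registry.azurecr.io/stickerapp/stickers:$BUILD_NUMBER
helm upgrade --install stickerapp-test stickerapp --set imageTag=$BUILD_NUMBER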
The app's authentication service is implemented as part of the API Gateway and supports both basic email and Facebook authentication using Azure AD B2C.
From the end user's perspective, the app provides a login link that, when clicked, redirects to Azure's sign-in/sign-up page. This page allows the user to sign in using either an AAD email account or their Facebook account. Similarly, if the user doesn't have an account, they may choose to sign up. Lastly, AAD B2C also makes it easy to reset passwords as needed.
The API Gateway acts as the primary entry point into the server by providing a wrapper over all calls to the microservices' endpoints. The advantages of this approach are:
The gateway is responsible for ensuring that the user is authenticated before it calls into each microservice; this way, none of the microservices themselves need to worry about authenticating the user.
The microservices' endpoints are not exposed publicly; only the API Gateway is able to access these endpoints which helps make the server more secure.
The auth service is implemented using the Passport and passport-azure-ad npm packages.
To configure AAD, follow these steps - this is required in order to log in/log out of the app and to complete a sticker order:
Register the Sticker App with AAD B2C
In addition, refer to the 'Create an application' and 'Create your policies' sections that are included here
1.) To troubleshoot runtime issues:
2.) To troubleshoot deployment issues:
3.) Common deployment errors: a. "Unable to mount volumes for pod...: timeout expired waiting for volumes to attach/mount for pod...Error syncing pod, skipping..."
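For both runtime and deployment issues, a few generic kubectl commands are usually the fastest way to see what is happening; the pod name below is a placeholder:

$ kubectl get pods
$ kubectl describe pod <pod-name>
$ kubectl logs <pod-name>
$ kubectl get events --sort-by=.metadata.creationTimestamp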