OpenLiberty / guide-okd

(Deprecated) A guide on how to use Minishift to deploy microservices to an Origin Community Distribution of Kubernetes (OKD) cluster.
https://openliberty.io/guides/okd.html

// Copyright (c) 2020, 2023 IBM Corporation and others.
// Licensed under Creative Commons Attribution-NoDerivatives
// 4.0 International (CC BY-ND 4.0)
// https://creativecommons.org/licenses/by-nd/4.0/
//
// Contributors:
//     IBM Corporation
//
:projectid: okd
:page-layout: guide-multipane
:page-duration: 1 hour
:page-releasedate: 2020-01-17
:page-description: Explore how to deploy microservices on an OKD cluster using a virtual machine with Minishift.
:page-tags: ['Kubernetes', 'Docker', 'Cloud']
:page-permalink: /guides/{projectid}
:page-related-guides: ['cloud-openshift', 'istio-intro', 'microprofile-istio-retry-fallback', 'kubernetes-intro', 'containerize']
:common-includes: https://raw.githubusercontent.com/OpenLiberty/guides-common/prod
:source-highlighter: prettify
:page-seo-title: Deploying Java microservices to an OKD cluster using Minishift
:page-seo-description: A getting started tutorial and an example on how to deploy Java microservices to an Origin Community Distribution of Kubernetes (OKD) cluster using Minishift for development purposes.
:guide-author: Open Liberty
= Deploying microservices to an OKD cluster using Minishift

[.hidden]
NOTE: This repository contains the guide documentation source. To view the guide in published form, view it on the https://openliberty.io/guides/{projectid}.html[Open Liberty website^].

Explore how to use Minishift to deploy microservices to an Origin Community Distribution of Kubernetes (OKD) cluster.

// *****

== What you'll learn

You will learn how to deploy two simple microservices with Open Liberty to an Origin Community Distribution of Kubernetes (OKD) cluster that is running in Minishift.

=== What is Origin Community Distribution of Kubernetes (OKD)?

OKD, formerly known as OpenShift Origin, is the upstream open source project for all OpenShift products. OKD is a Kubernetes-based platform with added functionality. OKD streamlines the DevOps process by providing an intuitive development pipeline. It also provides integration with multiple tools to make the deployment and management of cloud applications easier.

To learn more about OKD, check out the https://www.okd.io/[official OKD page^]. To learn more about the different platforms that Red Hat OpenShift offers, check out the https://docs.openshift.com[official OpenShift documentation^]. If you would like to learn more about Kubernetes, check out the https://openliberty.io/guides/kubernetes-intro.html[Deploying microservices to Kubernetes^] guide.

Using Maven, you will build the system microservice that collects basic system properties from your system and the inventory microservice that will interact with the system microservice. Then, you will learn how to deploy both to the cluster and establish communication between them.

You will use Minishift, a tool that runs an OKD cluster on a local system. Minishift gives developers a quick and easy way to set up an OKD cluster for application development.

Minishift is based on OKD 3.11. To run OKD 4.1 or newer on your local system, you can use the CodeReady Containers tool instead. To learn how to use CodeReady Containers, check out the https://openliberty.io/guides/openshift-codeready-containers.html[Deploying microservices to OpenShift using CodeReady Containers^] guide.

// *****

== Additional prerequisites

The following tools need to be installed:

* Minishift
* Docker

To verify that Minishift is installed correctly, run the following command:

[role=command]

minishift version

The output is similar to:

[role="no_copy"]

minishift v1.34.1+c2ff9cb

To verify that Docker is installed correctly, run the following command:

[role=command]

docker version

The output is similar to:

[role="no_copy"]

Client: Docker Engine - Community
 Version:           19.03.5

// *****

//== Getting Started

[role=command]
include::{common-includes}/gitclone.adoc[]

// *****

== Starting Minishift

=== Deploying the cluster

Run the following command to start Minishift and create the OKD cluster:

[role=command]

minishift start

If the cluster started successfully, you see the following output:

[role="no_copy"]

Server Information ...
OpenShift server started.

The server is accessible via web console at: https://192.168.99.103:8443/console

=== Logging in to the cluster

To interact with the OpenShift cluster, you need to use the oc command. Minishift already includes the oc binary. To use the oc commands, run the following command to configure your PATH to include the binary:

[role=command]

minishift oc-env

The resulting output differs based on your OS and environment, but you get an output similar to the following:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.linux_section]
--

[role="no_copy"]

export PATH="/root/.minishift/cache/oc/v3.11.0/linux:$PATH"
# Run this command to configure your command-line session:
# eval $(minishift oc-env)


Run the appropriate command to configure your environment:

[role=command]

eval $(minishift oc-env)

--

[.tab_content.mac_section]
--

[role="no_copy"]

export PATH="/Users/guidesbot@ibm.com/.minishift/cache/oc/v3.11.0/darwin:$PATH"
# Run this command to configure your command-line session:
# eval $(minishift oc-env)


Run the appropriate command to configure your environment:

[role=command]

eval $(minishift oc-env)

--

[.tab_content.windows_section]
--

[role="no_copy"]

SET PATH=C:\Users\guides-bot\.minishift\cache\oc\v3.11.0\windows;%PATH%
REM Run this command to configure your command-line session:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

Run the appropriate command to configure your environment:

[role=command]

@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

--

You can run through the development cycle by using the OpenShift web console, which is available at the URL provided in the output of the minishift start command. You can also run the following command to open the web console:

[role=command]

minishift console

You can log in with the following credentials:

[role="no_copy"]

User:     developer
Password: [any value]

The web console provides a GUI alternative to the CLI tools that you can explore on your own. This guide will continue with the CLI tools.

You can confirm your credentials by running the oc whoami command. You will get developer as your output.
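
[role=command]

oc whoami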

Next, create a new OpenShift project by running the following command:

[role=command]

oc new-project [project-name]
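
For example, to match the my-project project name that appears in the sample output later in this guide, you can run:

[role=command]

oc new-project my-project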

You are now ready to build and deploy a microservice.

// *****

== Building the system microservice

// file 0
Dockerfile
[source, Text, linenums, role="code_column"]

include::finish/system/Dockerfile[]

A simple microservice named system will be packaged, containerized, and deployed onto the OKD cluster. The system microservice collects the JVM properties of the host machine.
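
The real Dockerfile is provided in the system directory and pulled in by the include above. As a rough sketch, a Dockerfile for an Open Liberty microservice typically looks something like the following example; the base image tag and paths here are assumptions for illustration and can differ from this guide's actual file:

[source, dockerfile]
----
FROM openliberty/open-liberty:kernel-java8-openj9-ubi

# Copy the server configuration and the packaged application
# into the locations that the Open Liberty image expects.
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
COPY --chown=1001:0 target/*.war /config/apps/
----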

Navigate to the start directory.

The source code of the system and inventory microservices is located at the system and inventory directories. Focus on the system microservice first, and you will learn about the inventory microservice later.

In the start directory, run the following command to package the system microservice:

[role=command]

mvn -pl system package

The mvn package command compiles, verifies, and builds the project. The resulting compiled code is packaged into a .war web archive, which you can find under the system/target directory. The archive contains the application that is needed to run the microservice on an Open Liberty server, and it is now ready to be injected into a Docker container for deployment.
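
To confirm that the archive was produced, you can list the contents of the target directory. The exact .war file name depends on the artifact ID and version that are defined in the project's pom.xml file:

[role=command]

ls system/target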

== Containerizing the system microservice

=== Reusing the Docker daemon

To simplify the local deployment process, you can reuse the built-in Minishift Docker daemon. Reusing the Minishift Docker daemon allows you to use the internal Docker registry, so you don't have to build a Docker registry on your machine. To reuse the Docker daemon, run the following command to point your command-line session to the Minishift Docker daemon:

[role=command]

minishift docker-env

The result of the command is a list of bash environment variable exports that configure your environment to reuse the Docker daemon inside the single Minishift VM instance. The commands differ based on your OS and environment, but you get an output similar to the following example:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.mac_section.linux_section]
--

[role=no_copy]

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.103:2376"
export DOCKER_CERT_PATH="/Users/guidesbot@ibm.com/.minishift/certs"
# Run this command to configure your command-line session:
# eval $(minishift docker-env)


Run the eval command to configure your environment:

[role=command]

eval $(minishift docker-env)

--

[.tab_content.windows_section]
--

[role=no_copy]

SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://9.26.69.218:2376
SET DOCKER_CERT_PATH=C:\Users\maihameed\.minishift\certs
REM Run this command to configure your command-line session:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Run the given @FOR command to configure your environment:

[role=command]

@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

--

=== Building the Docker image

include::{common-includes}/ol-kernel-docker-pull.adoc[]

Now that the environment is set up, ensure that you are in the start directory and run the following command to build the system Docker image:

[role=command]

docker build -t system system/

The command builds an image named system from the [hotspot file=0]Dockerfile provided in the system directory. To verify that the images are built, run the following command to list all local Docker images:

[role=command]

docker images

Your system image should appear in the list of all Docker images:

[role=no_copy]

REPOSITORY                          TAG      IMAGE ID       CREATED         SIZE
system                              latest   e8a8393e9364   2 minutes ago   399MB

=== Accessing the internal registry

To run the microservice on the OKD cluster, you need to push the microservice image into a container image registry. You will use the OpenShift integrated container image registry called OpenShift Container Registry (OCR). First, you must authenticate your Docker client to your OCR. Start by running the login command:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.mac_section.linux_section]
--

[role=command]

echo $(oc whoami -t) | docker login -u developer --password-stdin $(oc registry info)

--

[.tab_content.windows_section]
--

Because the Windows command prompt doesn’t support the command substitution that is displayed for Mac and Linux, run the following commands:

[role=command]

oc whoami
oc whoami -t
oc registry info

Replace the square brackets in the following docker login command with the results from the previous commands:

[role=command]

docker login -u [oc whoami] -p [oc whoami -t] [oc registry info]

--

Now you must tag and push your system microservice to the internal registry so that it is accessible for deployment. Run the following command to tag your microservice:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.mac_section.linux_section]
--

[role=command]

docker tag system $(oc registry info)/$(oc project -q)/system

--

[.tab_content.windows_section]
--

Run the following commands:
[role=command]

oc registry info
oc project -q

Replace the square brackets in the following docker tag command with the results from the previous commands:

[role=command]

docker tag system [oc registry info]/[oc project -q]/system

--

Your newly tagged image should appear in the list of all Docker images:

[role=no_copy]

REPOSITORY                          TAG      IMAGE ID       CREATED         SIZE
system                              latest   e8a8393e9364   2 minutes ago   399MB
172.30.1.1:5000/my-project/system   latest   e8a8393e9364   2 minutes ago   399MB

Now push your newly tagged image to the internal registry by running the following command:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.mac_section.linux_section]
--

[role=command]

docker push $(oc registry info)/$(oc project -q)/system

--

[.tab_content.windows_section]
--

Run the following commands:

[role=command]

oc registry info
oc project -q

Replace the square brackets in the following docker push command with the results from the previous commands:

[role=command]

docker push [oc registry info]/[oc project -q]/system

--

The microservice is now ready for deployment.

== Deploying the system microservice

Now that the system Docker image is built, deploy it using a resource configuration file. Since OKD is built on top of Kubernetes, it supports the same concepts and deployment strategies. The OpenShift oc CLI tool supports most of the same commands as the Kubernetes kubectl tool. To learn more about Kubernetes and resource configuration files, check out the https://openliberty.io/guides/kubernetes-intro.html[Deploying microservices to Kubernetes^] guide.

The provided [hotspot file=0]deploy.yaml configuration file outlines a [hotspot=deployment file=0]deployment resource that creates and deploys a container named [hotspot=container file=0]system-container. This container will run the Docker-formatted image provided in the [hotspot=image file=0]image field. The [hotspot=image file=0]image field should point to your newly pushed image.

// file 0
deploy.yaml
[source, yaml, linenums, role='code_column hide_tags=everythingButSystemDeployment']

include::finish/deploy.yaml[]
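
The include above pulls in the finished deploy.yaml file. As a minimal sketch, a deployment resource of this shape looks something like the following example, which assumes the resource names used in this guide and the default Open Liberty HTTP port of 9080:

[source, yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        # Point this field at the image in the internal registry,
        # as shown in the DOCKER REPO column of oc get imagestream.
        image: 172.30.1.1:5000/my-project/system
        ports:
        - containerPort: 9080
----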

Run the following command to view the image stream:

[role=command]

oc get imagestream

You should find your newly pushed image:

[role=no_copy]

NAME     DOCKER REPO                         TAGS     UPDATED
system   172.30.1.1:5000/my-project/system   latest   5 minutes ago

The OpenShift image stream displays all the Docker-formatted container images that are pushed to the internal registry. You can configure builds and deployments to trigger when an image is updated.
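
To inspect an image stream in more detail, including its tags and the images that they point to, you can describe it:

[role=command]

oc describe imagestream system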

[role="code_command hotspot file=0", subs="quotes"]

Update the deploy.yaml file in the start directory.

deploy.yaml

[role="edit_command_text"]
The system [hotspot=image file=0]image field specifies the name and tag of the container image that you want to use for the system container. Update the value of the system [hotspot=image file=0]image field to specify the image location found in the DOCKER REPO column from the output of the following command:

[role=command]

oc get imagestream

After you update the value of the system [hotspot=image file=0]image field, run the following command to apply the configuration file and create your OpenShift resource:

[role=command]

oc apply -f deploy.yaml

You get an output similar to the following example:

[role=no_copy]

deployment.apps/system-deployment created

Run the following command to view your pods:

[role=command]

oc get pods

Ensure that your system-deployment pod is Running:

[role=no_copy]

NAME                                 READY   STATUS    RESTARTS   AGE
system-deployment-768f95cf8f-fnjjj   1/1     Running   0          5m

Run the following command to get more details on your pod:

[role=command]

oc describe pod system-deployment

The pod description includes an events log, which is useful in debugging any issues that might arise. The log is formatted similar to the following example:

[role=no_copy]

Events:
  Type    Reason     Age   From                 Message
  ----    ------     ----  ----                 -------
  Normal  Scheduled  1d    default-scheduler    Successfully assigned my-project/system-deployment-768f95cf8f-fnjjj to localhost
  Normal  Pulling    1d    kubelet, localhost   pulling image "172.30.1.1:5000/my-project/system"
  Normal  Pulled     1d    kubelet, localhost   Successfully pulled image "172.30.1.1:5000/my-project/system"
  Normal  Created    1d    kubelet, localhost   Created container
  Normal  Started    1d    kubelet, localhost   Started container

The container is deployed successfully, but it's isolated and cannot be accessed for requests. A service needs to be created to expose your deployment so that you can make requests to your container. You also must expose the service by using a route so that external users can access the microservice through a hostname. Update your [hotspot file=1]deploy.yaml file to include service and route resources.

[role='code_command hotspot file=1', subs="quotes"]

Update the deploy.yaml file in the start directory.

deploy.yaml

[role="edit_command_text"]
Update the configuration file to include the [hotspot=systemService file=1]service and [hotspot=systemRoute file=1]route resources.

// file 1
deploy.yaml
[source, yaml, linenums, role='code_column hide_tags=inventoryResources']

include::finish/deploy.yaml[]
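
Again, the include pulls in the finished file. A sketch of what the added service and route resources might look like, assuming the resource names used in this guide and port 9080:

[source, yaml]
----
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: system-route
spec:
  to:
    kind: Service
    name: system-service
----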

To update your resources, run the following command:

[role=command]

oc apply -f deploy.yaml

Notice that the cluster only picks up changes, and doesn't tear down and rebuild the deployment if it hasn't changed:

[role=no_copy]

deployment.apps/system-deployment unchanged
service/system-service created
route/system-route created

You can view all of your routes by running the following command:

[role=command]

oc get routes

You get an output similar to the following example:

[role=no_copy]

NAME           HOST/PORT                                       PATH   SERVICES         PORT   TERMINATION   WILDCARD
system-route   system-route-my-project.192.168.99.103.nip.io          system-service          None

Access your microservice through the hostname provided in the output by going to the http://[hostname]/system/properties URL or by running the following command. Replace [hostname] with the hostname provided by the oc get routes command.

[role=command]

curl http://[hostname]/system/properties
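
You see your system's JVM properties as a JSON response, similar to the following abbreviated and reformatted example:

[role=no_copy]

{
  "os.name": "Linux",
  "user.name": "unknown",
  ...
}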

// *****

== Deploying the inventory microservice

// file 0
deploy.yaml
[source, yaml, linenums, role="code_column"]

include::finish/deploy.yaml[]

Now that the system microservice is running, you will package and deploy the inventory microservice, which adds the properties from the system microservice to the inventory. This process demonstrates how to establish communication between pods inside a cluster.

=== Building the microservice

In the start directory, run the following command to package the inventory microservice:

[role=command]

mvn -pl inventory package

=== Containerizing the microservice

Run the following command to use the inventory Dockerfile to create an image:

[role=command]

docker build -t inventory inventory/

Next, tag and push the image to the internal registry.

include::{common-includes}/os-tabs.adoc[]

[.tab_content.mac_section.linux_section]
--

Run the following command to tag your microservice:

[role=command]

docker tag inventory $(oc registry info)/$(oc project -q)/inventory

Now push your newly tagged image to the internal registry by running the following command:

[role=command]

docker push $(oc registry info)/$(oc project -q)/inventory

--

[.tab_content.windows_section]
--

Run the following commands:

[role=command]

oc registry info
oc project -q

Replace the square brackets in the following command with the results from the previous commands to tag your microservice:

[role=command]

docker tag inventory [oc registry info]/[oc project -q]/inventory

Run the following command to push your microservice, making sure to replace the square brackets:

[role=command]

docker push [oc registry info]/[oc project -q]/inventory

--

The microservice is now ready for deployment.

=== Deploying the microservice

You can use the same [hotspot file=0]deploy.yaml configuration file to deploy multiple microservices. Update the configuration file to include the deployment, service, and route resources for your inventory microservice.

[role="code_command hotspot file=0", subs="quotes"]

Update the deploy.yaml file in the start directory.

deploy.yaml

[role="edit_command_text"]
Update the configuration file to add the [hotspot=inventoryResources file=0]inventory resources. Make sure to update the inventory [hotspot=inventoryImage file=0]image field with the appropriate image link found in the DOCKER REPO column from the output of the following command:

[role=command]

oc get imagestream
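
The inventory resources mirror the system resources. A sketch of the added deployment, using hypothetical names that follow the same pattern as the system resources; the service and route are analogous:

[source, yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        # Use the DOCKER REPO value from the oc get imagestream output.
        image: 172.30.1.1:5000/my-project/inventory
        ports:
        - containerPort: 9080
----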

Now run the following command to allow the cluster to pick up the new changes:

[role=command]

oc apply -f deploy.yaml

Run the following command to get the hostname of the newly exposed inventory service:

[role=command]

oc get route inventory-route

You get an output similar to the following example:

[role=no_copy]

NAME              HOST/PORT                                    PATH   SERVICES            PORT   TERMINATION   WILDCARD
inventory-route   inventory-route-myproject.127.0.0.1.nip.io          inventory-service          None

Go to the http://[hostname]/inventory/systems URL or run the following curl command to view the current inventory. In the curl command, replace [hostname] with your inventory route hostname:

[role=command]

curl http://[hostname]/inventory/systems

You see a JSON response like the following example. Your JSON response might not be formatted. The sample output was formatted for readability:

[role=no_copy]

{
  "systems": [],
  "total": 0
}

Since this is a fresh deployment, there are no saved systems in the inventory. Go to the http://[hostname]/inventory/systems/system-service URL or run the following command to allow the inventory microservice to access the system microservice and save the system result in the inventory:

[role=command]

curl http://[hostname]/inventory/systems/system-service

You receive your JVM system properties as a response. Go to the following http://[hostname]/inventory/systems URL or run the following command to recheck the inventory:

[role=command]

curl http://[hostname]/inventory/systems

You see the following response:

[role=no_copy]

{
  "systems": [
    {
      "hostname": "system-service",
      "properties": {
        "os.name": "Linux",
        "user.name": "unknown"
      }
    }
  ],
  "total": 1
}

Notice that the system count incremented by 1 and that the response includes a few key fields retrieved from the system microservice's response.

== Testing the microservices

pom.xml
[source, xml, linenums, role='code_column']

include::finish/inventory/pom.xml[]

A few tests are included for you to test the basic functions of the microservices. If a test failure occurs, you might have introduced a bug into the code. Before you run the tests, wait for all pods to be in the ready state. The default parameters that are defined in the [hotspot file=0]pom.xml file are:

[cols="15, 100", options="header"]
|===
| Parameter | Description
| [hotspot=systemIP]system.ip | IP or hostname of the system-service Kubernetes Service
| [hotspot=inventoryIP]inventory.ip | IP or hostname of the inventory-service Kubernetes Service
|===

Use the following command to run the integration tests against your running cluster:

include::{common-includes}/os-tabs.adoc[]

[.tab_content.windows_section]
--

Run the following command, noting the values of the system and inventory route hostnames:

[role=command]

oc get routes

Substitute [system-route-hostname] and [inventory-route-hostname] with the appropriate values and run the following command:

[role=command]

mvn verify -Ddockerfile.skip=true -Dsystem.ip=[system-route-hostname] -Dinventory.ip=[inventory-route-hostname]

--

[.tab_content.mac_section.linux_section]
--

[role=command]

mvn verify -Ddockerfile.skip=true \
-Dsystem.ip=$(oc get route system-route -o=jsonpath='{.spec.host}') \
-Dinventory.ip=$(oc get route inventory-route -o=jsonpath='{.spec.host}')

--

If the tests pass, you see an output for each service similar to the following example:

[source, role="no_copy"]


-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

[source, role="no_copy"]


-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

== Tearing down the environment

When you no longer need your deployed microservices, you can use the same configuration file to delete them. Run the following command to delete your deployments, services, and routes:

[role=command]

oc delete -f deploy.yaml

To completely delete your Minishift VM, cluster, and all associated files, refer to the official https://docs.okd.io/latest/minishift/getting-started/uninstalling.html[Uninstalling Minishift^] documentation.

To revert to your default Docker settings, close your command-line session.

== Great work! You're done!

You just deployed two microservices running in Open Liberty to an OKD cluster using the oc tool.

// Multipane
include::{common-includes}/attribution.adoc[subs="attributes"]