
XTF

XTF is a framework designed to simplify testing in the OpenShift environment.

XTF is an open source project maintained on a best-effort basis by anyone who is interested in or uses it. There is no dedicated maintainer and no set time in which any given XTF issue will be fixed.

XTF Maven repository

The XTF repository moved to the JBoss public repository in early 2021; it was previously hosted on Bintray. Update your projects accordingly so that they depend on the latest XTF versions, i.e. adjust the repository configuration in your pom.xml by adding (if not already present) the following snippet:

...
<repository>
  <id>jboss-releases-repository</id>
  <name>JBoss Releases Repository</name>
  <url>https://repository.jboss.org/nexus/content/groups/public/</url>
  <snapshots>
     <enabled>false</enabled>
  </snapshots>
  <releases>
     <enabled>true</enabled>
  </releases>
</repository>

<repository>
  <id>jboss-snapshots-repository</id>
  <name>JBoss Snapshots Repository</name>
  <url>https://repository.jboss.org/nexus/content/repositories/snapshots</url>
  <snapshots>
     <enabled>true</enabled>
  </snapshots>
  <releases>
     <enabled>false</enabled>
  </releases>
</repository>
...

Modules

Core

Core concepts of the XTF framework used by other modules.

Configuration

While the framework itself doesn't require any configuration, it can simplify repetitive settings in tests. XTF can be set up in four ways, listed here from highest to lowest priority:

System properties
Environment variables
test.properties file, intended for user-specific setup
global-test.properties file, intended for setup shared across the project

The mapping between system properties and environment variables is done by lower-casing the environment variable name, replacing _ with . and prefixing the result with xtf.

Example: OPENSHIFT_MASTER_URL is mapped to xtf.openshift.master.url.
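For illustration, the mapping rule can be sketched roughly as follows (this is not XTF's actual implementation):

// Sketch of the env-variable-to-property mapping rule described above.
static String toXtfProperty(String envVariableName) {
    return "xtf." + envVariableName.toLowerCase().replace('_', '.');
}

// toXtfProperty("OPENSHIFT_MASTER_URL") returns "xtf.openshift.master.url"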

OpenShift

The OpenShift class is the entry point for communicating with OpenShift. It extends OpenShiftNamespaceClient from the Fabric8 client, as it is meant to be used within the one namespace where tests are executed.

The OpenShift class extends the upstream version with several shortcuts, e.g. retrieving any Pod or its log by DeploymentConfig name alone. This is useful in test cases where we know that only one pod is created by the DeploymentConfig, or where we don't care which one we get. The class also provides access to OpenShift specific Waiters.

Configuration:

Take a look at the OpenShiftConfig class to see the possible configuration properties. Setting some of them will allow you to instantiate an instance with OpenShift openShift = OpenShifts.master().
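A minimal usage sketch (the shortcut method names getAnyPod and getPodLog are assumptions for illustration; check the OpenShift class for the exact API):

OpenShift openShift = OpenShifts.master();

// Retrieve any pod created by the "my-app" DeploymentConfig and read its log
// (shortcut method names assumed for illustration).
Pod pod = openShift.getAnyPod("my-app");
String log = openShift.getPodLog(pod);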

Pull Secrets

There's a convenient method, OpenShift::setupPullSecret(), to set up pull secrets as recommended by the OpenShift documentation. The xtf.openshift.pullsecret property is checked by the ProjectCreator listener and by BuildManager to populate projects with the pull secret if provided. The pull secret is expected to be provided in JSON format.

Single registry

{"auths":{"registry.redhat.io":{"auth":"<TOKEN>"}}}

Multiple registries

{"auths":{"registry.redhat.io":{"auth":"<TOKEN>"},"quay.io":{"auth":"<TOKEN>"}}}

Waiters

Waiter is a concept for conditional waiting. It retrieves an object or state in the specified interval and checks for the specified success and failure conditions. When one of them is met, the waiter will quit. If neither is met within the timeout, then an exception is thrown.

XTF provides two different implementations (SimpleWaiter and SupplierWaiter) and several preconfigured instances. All the default parameters of the preconfigured Waiters can be overridden.

OpenShifts.master().waiters().isDcReady("my-deployment").waitFor();

Https.doesUrlReturnsOK("http://example.com").timeOut(TimeUnit.MINUTES, 10).waitFor();
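A sketch of building a custom waiter; the constructor and builder method names below are assumptions for illustration, so check SimpleWaiter for the exact signatures:

// Wait until a custom condition (isApplicationReady() is a placeholder) becomes true,
// checking every 10 seconds and giving up after 5 minutes.
new SimpleWaiter(() -> isApplicationReady(), "Waiting for the application to become ready")
        .timeout(TimeUnit.MINUTES, 5)
        .interval(TimeUnit.SECONDS, 10)
        .waitFor();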

BuildManager

BuildManager caches test builds in one namespace so that they can be reused. After the first time a specified ManagedBuild succeeds, only a reference is returned, as the build is already present in the namespace.

// Obtain the shared BuildManager instance (BuildManagers.get() is a factory method, no 'new' needed)
BuildManager bm = BuildManagers.get();
// Define a binary build from a builder image and the local application sources
ManagedBuild mb = new BinaryBuild("my-builder-image", Paths.get("/resources/apps/my-test-app"));
// Deploy the build; a reference is returned even when the build is already cached
ManagedBuildReference reference = bm.deploy(mb);

bm.hasBuildCompleted(mb).waitFor();

Image

A wrapper class for images specified by URL. Its purpose is to parse them or turn them into ImageStream objects.

Specifying Maven

In some images, Maven needs to be activated, for example on RHEL 7 via the script /opt/rh/rh-maven35/enable. This can be controlled by properties.

Not setting these properties might result in faulty results from ImageContent#mavenVersion().

Specifying images

Every image that is set in global-test.properties using xtf.{foo}.image can be accessed by using Images.get(foo).
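For example, assuming global-test.properties contains a (made-up) entry xtf.eap.image=registry.example.com/jboss/eap:latest, the image can be retrieved in a test as:

// Retrieve the image configured via xtf.eap.image; it can then be parsed
// or turned into an ImageStream object as described above.
Image eapImage = Images.get("eap");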

Products

Allows holding basic and custom properties related to a tested product image in a properties file. Example considering maintenance of a single image version:

xtf.foo.image=image.url/user/repo:tag
xtf.foo.version=1.0.3

XTF also supports maintaining several versions. In this case, add a "subId" to your properties and set xtf.foo.subid to activate the particular set of properties (in a pom.xml profile, for example). Most of the properties can be shared for a given product; the product-level image property, however, overrides the version-specific one.

Example considering maintenance of two image versions:

xtf.foo.image                               // Will override versions image property
xtf.foo.templates.repo=git.repo.url         // Will be used as default if not specified in version property
xtf.foo.v1.image=image.url/user/repoV1:tag1
xtf.foo.v1.version=1.0.3
xtf.foo.v2.image=image.url/user/repoV2:tag2
xtf.foo.v2.version=1.0.3

An instance with this metadata can be retrieved via Products.resolve("foo");

Using TestCaseContext to get name of currently running test case

If junit.jupiter.extensions.autodetection.enabled=true is set, the JUnit 5 extension cz.xtf.core.context.TestCaseContextExtension is automatically registered. It stores the name of the currently running test case in TestCaseContext before the test case's @BeforeAll methods are called.

The following code can then be used to retrieve the name of the currently running test case:

String testCase = TestCaseContext.getRunningTestCaseName();
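For instance, a minimal JUnit 5 sketch:

public class SmokeTest {

    @BeforeAll
    static void logTestCaseName() {
        // The extension has already stored the test case name at this point.
        String testCase = TestCaseContext.getRunningTestCaseName();
        System.out.println("Running test case: " + testCase);
    }
}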

Automatic creation of namespace(s)

XTF can automatically manage the creation of the testing namespace defined by the xtf.openshift.namespace property. This namespace is created before any test case is started.

This feature requires the XTF JUnit 5 extension cz.xtf.junit5.listeners.ProjectCreator to be enabled. This can be done by adding a cz.xtf.junit5.listeners.ProjectCreator line to the following files:

src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension
src/test/resources/META-INF/services/org.junit.platform.launcher.PostDiscoveryFilter
src/test/resources/META-INF/services/org.junit.platform.launcher.TestExecutionListener
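For example, src/test/resources/META-INF/services/org.junit.jupiter.api.extension.Extension would then contain the single line:

cz.xtf.junit5.listeners.ProjectCreator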

Run test cases in separate namespaces using xtf.openshift.namespace.per.testcase property

You can enable running each test case in a separate namespace by setting xtf.openshift.namespace.per.testcase=true.

Namespace names follow the pattern ${xtf.openshift.namespace}-TestCaseName. For example, for xtf.openshift.namespace=testnamespace and test case org.test.SmokeTest, the namespace will be testnamespace-SmokeTest.

You can limit the length of the created namespace with the xtf.openshift.namespace.per.testcase.length.limit property. By default it is 25 characters. If the limit would be exceeded, the test case name part is hashed so the namespace name stays within the limit, e.g. testnamespace-s623jd6332.
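For example, running the suite with both properties set (values chosen purely for illustration):

mvn clean install -Dxtf.openshift.namespace=testnamespace -Dxtf.openshift.namespace.per.testcase=true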

Warning - Limitations

When enabling this feature in your project, you may need to replace OpenShiftConfig.getNamespace() with NamespaceManager.getNamespace(). Check the methods' javadoc to understand the difference.

If you use this feature, the consuming test suite must follow these rules to avoid unexpected behaviour when using cz.xtf.core.openshift.OpenShift instances:

Service Logs Streaming (SLS)

This feature allows you to stream the services' output while the test is running; this way you can see immediately what is happening inside the cluster. This is of great help when debugging provisioning, specifically in cloud environments, which would otherwise require you to access your Pods.

Kubernetes/OpenShift implementation

The SLS OpenShift platform implementation relies upon the following fabric8 Kubernetes Client API features:

The expected behavior is to stream the output of all the containers that are started or terminated in the selected namespaces.

Usage

The SLS feature can be configured and enabled either via annotations or via properties. This behavior is provided by the ServiceLogsStreamingRunner JUnit 5 extension. There are two ways of enabling the SLS functionality, summarized in the following sections; please refer to the JUnit 5 submodule documentation to read about the extension implementation details.

The @ServiceLogsStreaming annotation (Developer perspective)

Usage is as simple as annotating your test with @ServiceLogsStreaming e.g.:

@ServiceLogsStreaming
@Slf4j
public class HelloWorldTest {
  // ...
}

The xtf.log.streaming.enabled and xtf.log.streaming.config properties (Developer/Automation perspective)

You can enable the SLS feature by setting the xtf.log.streaming.enabled property to true so that it applies to all the test classes being executed.

Conversely, if the above property is not set, you can set the xtf.log.streaming.config property in order to provide multiple SLS configurations which could map to different test classes.

The xtf.log.streaming.config property value is expected to be a comma (,) separated list of configuration items, each formatted as a semicolon (;) separated list of name/value pairs for the above mentioned attributes, where the name/value separator is the equals sign (=). A single configuration item represents a valid source of configuration for a single SLS activation and exposes the following information:

Usage examples

Given the above, enabling SLS for all test classes is possible by executing the following command:

mvn clean install -Dxtf.log.streaming.enabled=true

Similarly, enabling the feature for all test classes whose name ends with "Test" is as simple as executing something like the following command:

mvn clean install -Dxtf.log.streaming.config="target=.*Test"

which would differ if the logs should be streamed to an output file:

mvn clean install -Dxtf.log.streaming.config="target=.*Test;output=/home/myuser/sls-logs"

or in case you'd want to provide multiple configuration items to map different test classes, e.g.:

mvn clean install -Dxtf.log.streaming.config="target=TestClassA,target=TestClassB.*;output=/home/myuser/sls-logs;filter=.*my-app.*"

JUnit5

The JUnit5 module provides a number of extensions and listeners designed to ease OpenShift image test management. See JUnit5 for more information.

Helm

You can use the HelmBinary.execute() method to run Helm against your cluster. The following Helm-related properties are introduced:

Property name | Type | Description | Default value
xtf.helm.clients.url | String | URL from which the Helm client version specified by xtf.helm.client.version is downloaded | https://mirror.openshift.com/pub/openshift-v4/clients/helm
xtf.helm.client.version | String | Version of the Helm client to be downloaded (from [xtf.helm.clients.url]/[xtf.helm.client.version]) | latest
xtf.helm.binary.path | String | Path to an existing Helm client binary; if absent, the binary is downloaded using the combination of xtf.helm.clients.url and xtf.helm.client.version | -
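For example, to pin the Helm client version used by the tests (the version value below is chosen purely for illustration):

mvn clean install -Dxtf.helm.client.version=3.11.1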

Releasing XTF

Have a look at the release documentation to learn about the process for releasing XTF to the community.