This repository contains Dockerfiles of AET images and an example Docker Swarm manifest that enables setting up a simple AET instance. You may find released versions of the AET Docker images at Docker Hub.
The following section describes how to run AET using Docker Swarm. An alternative is to install AET using Helm. See the AET Helm chart repository for more details.
Make sure you have a running Docker Swarm instance with at least `4 vCPU` and `8 GB of memory` available. Read more in Prerequisites.
Follow these instructions to set up a local AET instance:
Download [example-aet-swarm.zip](https://github.com/malaskowski/aet-docker/releases/latest/download/example-aet-swarm.zip) and unzip the files to the folder from which the Docker stack will be deployed (from now on we will call it `AET_ROOT`).

> You may run the following script to automate this step:
> ```bash
> curl -sS `curl -Ls -o /dev/null -w %{url_effective} https://github.com/malaskowski/aet-docker/releases/latest/download/example-aet-swarm.zip` > aet-swarm.zip \
> && unzip -q aet-swarm.zip && mv example-aet-swarm/* . \
> && rm -d example-aet-swarm && rm aet-swarm.zip
> ```
>
> Contents of the `AET_ROOT` directory should look like:
> ```
> ├── aet-swarm.yml
> ├── bundles
> │   └── aet-lighthouse-extension.jar
> ├── configs
> │   ├── com.cognifide.aet.cleaner.CleanerScheduler-main.cfg
> │   ├── com.cognifide.aet.proxy.RestProxyManager.cfg
> │   ├── com.cognifide.aet.queues.DefaultJmsConnection.cfg
> │   ├── com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
> │   ├── com.cognifide.aet.runner.MessagesManager.cfg
> │   ├── com.cognifide.aet.runner.RunnerConfiguration.cfg
> │   ├── com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
> │   ├── com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
> │   └── com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
> ├── features
> │   └── healthcheck-features.xml
> ├── secrets
> │   └── KARAF_EXAMPLE_SECRET
> └── report
> ```
>
> If you are using docker-machine (otherwise ignore this point), you should change the `aet-swarm.yml` `volumes` section for the `karaf` service to:
> ```yaml
> volumes:
>   - /osgi-configs/configs:/aet/configs # when using docker-machine, use mounted folder
> ```
>
> You can find older versions in the [release](https://github.com/malaskowski/aet-docker/releases) section.
From the `AET_ROOT` directory run:

```bash
docker stack deploy -c aet-swarm.yml aet
```

Note that you can always stop the instance by running `docker stack rm aet` without losing the data (volumes).
> When it is ready, you should see the `HEALTHY` information in the [Karaf health check](http://localhost:8181/health-check).
>
> You may also check the status of Karaf by executing
>
> ```bash
> docker ps --format "table {{.Image}}\t{{.Status}}" --filter expose=8181/tcp
> ```
>
> When you see status `healthy` it means Karaf is running correctly:
>
> ```
> IMAGE                         STATUS
> malaskowski/aet_karaf:1.0.0   Up 3 minutes (healthy)
> ```
Simply run:

```bash
docker run --rm malaskowski/aet_client
```
You should see output similar to:

```
Suite started with correlation id: example-example-example-1611937786395
[16:29:46.578]: COMPARED: [success: 0, total: 0] ::: COLLECTED: [success: 0, total: 1]
Suite processing finished
Report url:
http://localhost/report.html?company=example&project=example&correlationId=example-example-example-1611937786395
```
Open the URL, which will show your first AET report! Find out more about the report in the AET Docs.
Read more on how to run your custom suite in the Running AET Suite section.
User Documentation
Hosts Apache ActiveMQ, which is used as the communication bus by the AET components.
Hosts the BrowserMob proxy that is used by AET to collect status codes and inject headers into requests.
Hosts the Apache Karaf OSGi application container. It contains all AET modules (bundles): Runner, Workers, Web-API, Datastorage, Executor and Cleaner, and runs them within the OSGi context with all their required dependencies (no internet access required to provision).
The AET application core is located in the `/aet/core` directory. All custom AET extensions are kept in the `/aet/custom` directory.
Before the Karaf service starts, Docker secrets are exported to environment variables. Read more in the secrets section.
Runs the Apache server that hosts the AET Report. The AET report application is placed under `/usr/local/apache2/htdocs`. Defines a very basic `VirtualHost` (see aet.conf).
The AET Bash client embedded into a Docker image with all its dependencies (`jq`, `curl`, `xmllint`).
To see the details of what the sample AET Docker Swarm instance contains, read the example-aet-swarm readme.
Notice: this instruction guides you through setting up an AET instance using a single-node Swarm cluster. This setup is not recommended for production use!
A single-node Swarm cluster can be created with `docker swarm init`. It needs at least `4 vCPU` and `8 GB of memory` available. Read more in the Minimum requirements section.

To run the example AET instance, make sure that the machine you run it on has at least the following enabled:

- `4 vCPU`
- `8 GB of memory`
How to modify Docker resources:
Thanks to the mounted OSGi configs you may now configure the instance via the `AET_ROOT/configs` configuration files.
`com.cognifide.aet.cleaner.CleanerScheduler-main.cfg` Read more here.
`com.cognifide.aet.proxy.RestProxyManager.cfg` Configures the Proxy Server address. AET uses the proxy for some features such as collecting status codes or modifying request headers. Read more here.
`com.cognifide.aet.queues.DefaultJmsConnection.cfg` Configures the JMS Server connection.
`com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg` Configures the address for the Reports module. The `reportDomain` property should point to the external address of the AET Reports service.
`com.cognifide.aet.runner.MessagesManager.cfg` Configures the JMX endpoint of the JMS Server for managing messages via API.
`com.cognifide.aet.runner.RunnerConfiguration.cfg` Configures the AET Runner.
`com.cognifide.aet.vs.mongodb.MongoDBClient.cfg` Configures the database connection. Additionally, the `allowAutoCreate` setting allows AET to create new databases (no need to create them manually first, including indexes).
`com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg` Configures the Selenium Grid Hub address. Additionally enables configuring capabilities via `chromeOptions`.
`com.cognifide.aet.worker.listeners.WorkersListenersService.cfg` Configures the number of AET Workers. Use these properties to scale your AET instance's throughput up and down. Read more below.
AET instance speed depends directly on the number of browsers in the system and their configuration.
Let's define `TOTAL_NUMBER_OF_BROWSERS`, which is the number of Selenium Grid node instances multiplied by the `NODE_MAX_SESSION` set for each node. In this default configuration, we have 6 Selenium Node replicas with a single browser instance available on each node:
```yaml
chrome:
  ...
  environment:
    ...
    - NODE_MAX_SESSION=1
    ...
  deploy:
    replicas: 6
    ...
```
So the `TOTAL_NUMBER_OF_BROWSERS` is `6` (6 replicas x 1 session).
That number should be set in the following configs:

- `maxMessagesInCollectorQueue` in `com.cognifide.aet.runner.RunnerConfiguration.cfg`
- `collectorInstancesNo` in `com.cognifide.aet.worker.listeners.WorkersListenersService.cfg`
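For illustration, a minimal sketch of those two entries for the default setup of 6 browsers (assuming the plain `key=value` format these OSGi config files use):

```
# configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
maxMessagesInCollectorQueue=6

# configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
collectorInstancesNo=6
```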
To read secrets from `/run/secrets/` on Karaf startup, set the environment variable `KARAF_SECRETS_ON_STARTUP=true`. This enables scanning that directory for secrets matching the `KARAF_*` pattern and exporting them as environment variables.
See the Karaf entrypoint for details.
E.g. if the file `/run/secrets/KARAF_MY_SECRET` is found, its content will be exported to the `MY_SECRET` environment variable.
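As an illustration, a hedged sketch of how such a secret could be wired up in `aet-swarm.yml` (the secret name comes from the example directory tree above; the exact wiring in your manifest may differ):

```yaml
secrets:
  KARAF_EXAMPLE_SECRET:
    file: ./secrets/KARAF_EXAMPLE_SECRET   # local secret file from AET_ROOT/secrets

services:
  karaf:
    environment:
      - KARAF_SECRETS_ON_STARTUP=true      # scan /run/secrets/ for KARAF_* files on startup
    secrets:
      - KARAF_EXAMPLE_SECRET               # exported as EXAMPLE_SECRET inside Karaf
```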
You may update configuration files directly from your host (unless you use docker-machine; see the workaround below). Karaf should automatically notice changes in the config files.
To update the instance to a newer version, update `aet-swarm.yml` and/or the configuration files in `AET_ROOT` and run:

```bash
docker stack deploy -c aet-swarm.yml aet
```
docker-machine config changes detection workaround
Please note that when you are using docker-machine and Docker Tools, Karaf does not automatically detect changes in the config files. You will need to restart the Karaf service after applying changes to the configuration files (e.g. by removing the `aet_karaf` service and running stack deploy again).
There are a couple of ways to start an AET Suite.
You may use an image that embeds the AET Bash client together with its dependencies by running:
```bash
docker run --rm malaskowski/aet_client
```
This will run a sample AET Suite. You should see the results in less than 30s.
To run your custom suite, let's say `my-suite.xml`, located in the current directory, you need to bind mount it as a volume:

```bash
docker run --rm -v "$(pwd)/my-suite.xml:/aet/suite/my-suite.xml" malaskowski/aet_client http://host.docker.internal:8181 /aet/suite/my-suite.xml
```
> The last 2 arguments are AET Bash client arguments:
> - `http://host.docker.internal:8181` URL of the AET instance,
> - `/aet/suite/my-suite.xml` path to the suite XML file inside the container.
>
> Notice that we are using `host.docker.internal:8181` here as the address of the AET instance - that works only for Docker for Mac/Win with a local AET setup (this is also the default value for this property). In other cases, use the AET server's IP/domain.
>
> One more thing you may want to do is to preserve the `redirect.html` and `xUnit.xml` files after the AET Client container ends its execution. Simply bind mount another volume, e.g.:
>
> `docker run --rm -v "$(pwd)/my-suite.xml:/aet/suite/my-suite.xml" -v "$(pwd)/report:/aet/report" malaskowski/aet_client http://host.docker.internal:8181 /aet/suite/my-suite.xml`
>
> The results will be saved to the `report` directory:
>
> ```
> .
> ├── my-suite.xml
> ├── report
> │   ├── redirect.html
> │   └── xUnit.xml
> ```
To run the AET Suite simply set `endpointDomain` to the AET Karaf IP with port `8181`, e.g.:

```bash
./aet.sh http://localhost:8181
```

or

```bash
mvn aet:run -DendpointDomain=http://localhost:8181
```
Read more about running the AET suite here.
- You will modify `aet-swarm.yml` and the config files over time! Use a version control system (e.g. Git) to keep track of changes to the `AET_ROOT` contents.
- Back up the database volume (`/data/db`) regularly!
- ActiveMQ console (`admin/admin`)
- Karaf console (`karaf/karaf`)
- Reports are available at `http://localhost/report.html?params...`
Note that if you are using Docker Tools, your docker-machine IP will be used instead of `localhost`.
If you want to see what's deployed on your instance, you may use `dockersamples/visualizer` by running:
```bash
docker service create \
  --name=viz \
  --publish=8090:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
```
Then open `http://localhost:8090`. Note that if you are using Docker Tools, your docker-machine IP will be used instead of `localhost`.
To debug bundles on Karaf, set the environment variable `KARAF_DEBUG=true` and expose port `5005` on the `karaf` service.
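A minimal sketch of that change in `aet-swarm.yml` (the host-side port mapping is an assumption; adjust it to your setup):

```yaml
karaf:
  environment:
    - KARAF_DEBUG=true   # start the JVM with remote debugging enabled
  ports:
    - "5005:5005"        # expose the debug port so you can attach your IDE
```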
You may preview AET logs with `docker service logs aet_karaf -f`.
Make sure you have installed all prerequisites for the script client.
Set the `mongoURI` property in `configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg` to point to your MongoDB instance URI.
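For example (the URI value is hypothetical; only the `mongoURI` property name comes from the config above):

```
# configs/com.cognifide.aet.vs.mongodb.MongoDBClient.cfg
mongoURI=mongodb://user:password@your-mongo-host:27017
```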
After you set up the external Selenium Grid, update the `seleniumGridUrl` property in `configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg` to the Grid address.
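For example (the address is hypothetical; `/wd/hub` is the standard Selenium Grid hub endpoint):

```
# configs/com.cognifide.aet.worker.drivers.chrome.ChromeWebDriverFactory.cfg
seleniumGridUrl=http://your-grid-host:4444/wd/hub
```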
Set the `report-domain` property in `com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg` to point to the domain.
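For example (the domain is hypothetical; the property name comes from the text above):

```
# configs/com.cognifide.aet.rest.helpers.ReportConfigurationManager.cfg
report-domain=http://your-report-domain
```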
The AET Web API is hosted by the AET Karaf instance. In order to avoid CORS errors from the Report Application, the AET Web API is exposed by the AET Report Apache server (`ProxyPass`). By default it is set to work with Docker cluster managers such as Swarm or Kubernetes and points to `http://karaf:8181/api`. Use the `AET_WEB_API` environment variable to change the URL of the AET Web API.
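As a hedged sketch, such a `ProxyPass` setup in an Apache config typically looks like the following (the actual directives in aet.conf may differ):

```apache
# Expose the AET Web API through the Report's Apache server to avoid CORS
ProxyPass        /api http://karaf:8181/api
ProxyPassReverse /api http://karaf:8181/api
```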
Notice: those changes will impact your machine's resources; be sure to extend the number of CPUs and memory if you scale up the number of browsers.
- Spawn more browsers by increasing the number of Selenium Grid nodes or adding sessions to existing nodes. Calculate the new `TOTAL_NUMBER_OF_BROWSERS` (see the sketch after this list).
- Set `maxMessagesInCollectorQueue` in `configs/com.cognifide.aet.runner.RunnerConfiguration.cfg` to the new `TOTAL_NUMBER_OF_BROWSERS`.
- Set `collectorInstancesNo` in `configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg` to the new `TOTAL_NUMBER_OF_BROWSERS`.
- Update the instance (see how to do it).
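For illustration, a minimal sketch of scaling up to 12 browsers (6 replicas x `NODE_MAX_SESSION=2`; the values are assumptions, the file names come from the list above):

```yaml
# aet-swarm.yml - double the sessions available on each Selenium node
chrome:
  environment:
    - NODE_MAX_SESSION=2
  deploy:
    replicas: 6
```

```
# configs/com.cognifide.aet.runner.RunnerConfiguration.cfg
maxMessagesInCollectorQueue=12

# configs/com.cognifide.aet.worker.listeners.WorkersListenersService.cfg
collectorInstancesNo=12
```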
An external Selenium Grid node instance should have Chrome with a matching `chromedriver` and the `selenium-server-standalone.jar` available (both paths are used in the command below).
Check the address of the machine where the AET stack is running. By default, the Selenium Grid Hub should be available on port `4444`. Use this IP address when you run the node with the following command (replace `{SGRID_IP}` with this IP address):
```bash
java -Dwebdriver.chrome.driver="<path/to/chromedriver>" -jar <path/to/selenium-server-standalone.jar> -role node -hub http://{SGRID_IP}:4444/grid/register -browser "browserName=chrome,maxInstances=10" -maxSession 10
```
You should see a message that the node has joined the Selenium Grid. Check it via the Selenium Grid console: `http://{SGRID_IP}:4444/grid/console`.
Read more about setting up your own Grid here:
Yes, the AET system is a group of containers that together form an instance. You need a way to organize them and make them visible to each other in order to have a functional AET instance. This repository contains an example instance setup with Docker Swarm, which is the most basic container cluster manager that comes OOTB with Docker.
For more advanced setups of an AET instance I'd recommend looking at Kubernetes or OpenShift (including services provided by cloud vendors). In that case you may find the AET Helm chart helpful.
Run `build.sh {tag}`.
You should see the following images:

- `malaskowski/aet_report:{tag}`
- `malaskowski/aet_karaf:{tag}`
- `malaskowski/aet_browsermob:{tag}`
- `malaskowski/aet_activemq:{tag}`
To be able to easily deploy AET artifacts on your Docker instance, follow these steps:
In `aet-swarm.yml`, under the `karaf` and `report` services, there are volumes defined:
```yaml
karaf:
  ...
  volumes:
    - ./configs:/aet/custom/configs
    - ./bundles:/aet/custom/bundles
    - ./features:/aet/custom/features
  ...
report:
  ...
  # volumes: <- volumes not active by default, to develop the report, uncomment it before deploying
  #   - ./report:/usr/local/apache2/htdocs
```
Put your custom bundles into the `bundles` directory and features into the `features` directory (the `configs` directory already contains the default configs); report files go into the `report` directory.

To develop the AET application core, add additional volumes to the `karaf` service:
```yaml
karaf:
  ...
  volumes:
    ...
    - ./core-configs:/aet/core/configs
    - ./core-bundles:/aet/core/bundles
    - ./core-features:/aet/core/features
```
and place the proper AET artifacts into the corresponding `core-` directories.
If you use the build command with the `-Pzip` parameter, all needed artifacts will be placed in `YOUR_AET_REPOSITORY/zip/target/packages-X.X.X-SNAPSHOT/`. You only need to unpack the needed zip archives into the proper directories described in step 3.
Then run `docker stack deploy -c aet-swarm.yml aet`.