A benchmark of message queues with data replication and at-least-once delivery guarantees.
Source code for the mqperf article at SoftwareMill's blog: Evaluating persistent, replicated message queues
Tests have been run with the following prerequisites:

* Python (e.g. via pyenv)
* `pip install 'ansible==2.9.5'`
* `pip install boto3`
Message queues and test servers are automatically provisioned using Ansible on AWS. You will need Ansible and Boto installed, as well as the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables present for things to work properly. See Creating AWS access key for details.
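For example, the credentials can be exported in the shell before running any playbook (the values below are the placeholder credentials from the AWS documentation - substitute your own IAM access key):

```shell
# Placeholder values from the AWS docs - replace with your own credentials
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY
```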
Please consider the above when configuring the message size parameter in the test configuration: `"msg_size": 100`. If a message is too short, the majority of its content will be the timestamp (TS) information. For that reason, we suggest configuring the message length at 50+ characters.
Test configurations are located under `ansible/tests`. Each configuration has a number of parameters that may influence the test execution and its results.
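For illustration, a minimal test configuration might look like the following sketch; `msg_size` is the parameter discussed above, and any further keys a given configuration needs (queue type, node counts, etc.) are not shown here:

```json
{
  "msg_size": 100
}
```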
Note: all commands should be run in the `ansible` directory.

```shell
ansible-playbook install_and_setup_YourQueueName.yml
```

Note: since AWS SQS is a serverless offering, you don't need to set up anything for it. For SQS, you can skip this step.
Note: you can select the EC2 instance type for your tests by setting `ec2_instance_type` in the `group_vars/all.yml` file.
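For example, in `group_vars/all.yml` (the instance type shown is just an illustrative value):

```yaml
# group_vars/all.yml (excerpt)
ec2_instance_type: r5.2xlarge
```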
```shell
ansible-playbook provision_mqperf_nodes.yml
```
Note: you can adjust the number of these EC2 instances for your own tests.
WARNING: after each code change, you'll need to remove the fat-jars from the `target/scala-2.12` directory and re-run `provision_mqperf_nodes.yml`.
```shell
ansible-playbook install_and_setup_prometheus.yml
```

WARNING: this must be done each time after provisioning new sender/receiver nodes (previous step) so that Prometheus is properly configured to scrape the new servers for metrics.
Metrics are gathered using Prometheus and visualized using Grafana.

Accessing the monitoring dashboard:

* open `IP:3000/dashboards` in your browser
* log in with the `admin`/`pass` credentials
* open the MQPerf Dashboard
To run the tests, choose a configuration from the `tests` directory, set its name as `test_name` in the `run_tests.yml` file, and run:

```shell
ansible-playbook run_tests.yml
```
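As a sketch, the variable in `run_tests.yml` could look like this (the surrounding playbook layout and the test name shown are illustrative assumptions, not the exact contents of the file):

```yaml
# run_tests.yml (hypothetical excerpt)
- hosts: all
  vars:
    test_name: kafka-test   # a configuration from the tests directory
```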
There are a few commands dedicated to cleaning up the cloud resources after the test execution.
Stopping sender and receiver processing:

```shell
ansible-playbook stop.yml
```

Terminating EC2 instances:

```shell
ansible-playbook shutdown_ec2_instances.yml
```

Removing all MQPerf-related resources on AWS:

```shell
ansible-playbook remove_aws_resources.yml
```

Checking receiver/sender status:

```shell
ansible-playbook check_status.yml
```

Running sender nodes only:

```shell
ansible-playbook sender_only.yml
```

Running receiver nodes only:

```shell
ansible-playbook receiver_only.yml
```
Before running the tests, create the Kafka topics by running `ansible-playbook kafka_create_topic.yml`.
Redpanda requires the xfs filesystem; to configure it, set `storage_fs_type: xfs` in the `all.yml` file.

Before running the tests, create the Redpanda topics by running `ansible-playbook redpanda_create_topic.yml`.

The default partition number in the topic creation script is 64; if you need to adjust it, update the `--partitions 64` param in the `redpanda_create_topic.yml` script.
Before running the tests, create the required streams and consumer groups by running `ansible-playbook redistreams_create_streams.yml`.

This script creates streams named stream0, stream1, ..., stream100. If you need more streams, please edit the loop counter.

If you'd like to rerun tests without redeploying the cluster, use `ansible-playbook redistreams_trim_streams.yml` to flush the streams.

To manipulate the stream count, use the `streamCount` property in the test JSON.
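For example (a fragment of the test JSON; the value is illustrative):

```json
{
  "streamCount": 50
}
```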
Note: the cluster create command (last step) sometimes fails randomly; it's sometimes easier to run it directly from EC2.

The ack property is set on the Bookkeeper level via the CLI, REST, or a startup parameter. Go to the docs for more details. Currently, this is not implemented, hence the `mq.ack` attribute is ignored.
```shell
ansible-playbook install_and_setup_rabbitmq.yml -e erlang_cookie=1234
```

The management console is available with the `guest`/`guest` credentials (the ssh user is `centos`). Queues whose names start with `ha.` will be mirrored (`admin`/`admin` credentials).
See the `ActiveMq.scala` implementation.

Open `http://<AWS_EC2_PUBLIC_IP>:8161/jolokia/list` to verify that Jolokia works - plain JSON content should be visible.

Then query `http://<AWS_EC2_PUBLIC_IP>:8161/jolokia/read/org.apache.activemq.artemis:address="mq",broker="<BROKER_NAME>",component=addresses`, where `org.apache.activemq.artemis:address="mq",broker="<BROKER_NAME>",component=addresses` is the key (the `"` signs are obligatory). To know other keys, refer to the previous step. `<BROKER_NAME>` typically resolves to the AWS_EC2_PRIVATEIP with `.` replaced with ``.

EventStore: see the `EventStoreMq` implementation.

To build the oracleaq module, first install the required dependencies available in your Oracle DB installation.
To install a dependency in your local repository, create a `build.sbt` file:
```scala
organization := "com.oracle"
name := "ojdbc6"
version := "1.0.0"
scalaVersion := "2.11.6"
packageBin in Compile := file(s"${name.value}.jar")
```
Now you can publish the file:

```shell
$ sbt publishLocal
```

It should then be available in `~/.ivy2/local/com.oracle/`.
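Once published locally, the oracleaq module can depend on the artifact using the coordinates above (a sketch of the sbt dependency line):

```scala
libraryDependencies += "com.oracle" % "ojdbc6" % "1.0.0"
```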
Zookeeper installation contains an ugly workaround for a bug in Cloudera's RPM repositories
(http://community.cloudera.com/t5/Cloudera-Manager-Installation/cloudera-manager-installer-fails-on-centos-7-3-vanilla/td-p/55086/highlight/true).
See `ansible/roles/zookeeper/tasks/main.yml`. This should be removed in the future when the bug is fixed by Cloudera.
`cd` to `ansible/` (where `ansible.cfg` is located) and try to run the playbook from this location.

To run locally, execute the Sender and Receiver classes with the following system property and environment variables:

```
-Dconfig.file=/tmp/test-config.json
RUN_ID=1;HOST_ID=1
```