Using this repo, you can automate the creation of MongoDB replica sets (optionally with sharding enabled) deployed inside a Docker Swarm, using Vagrant (VirtualBox) and Ansible to bootstrap the following High-Availability architecture of VMs, ready for production use:
cd: Continuous Delivery VM, via which software (Docker, Docker Swarm, Mongo, the app, etc.) is installed on the other VMs
manager-01: Primary docker manager node
manager-02: Secondary docker manager node
data-01: First data node - member of the datars replica set (optionally sharded)
data-02: Second data node - member of the datars replica set (optionally sharded)
data-03: Third data node - member of the datars replica set (optionally sharded)
config-01: First config node - configrs replica set
config-02: Second config node - configrs replica set
config-03: Third config node - configrs replica set
mongo-01: First mongo node (Router)
mongo-02: Second mongo node
VirtualBox installed
Vagrant installed
The following ports must be available on the Host where you deploy these VMs:
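Before provisioning, you can check that none of the required ports is already taken; a minimal sketch (the port numbers below are illustrative placeholders — substitute the ports your deployment actually uses):

```shell
# Sketch: verify that required host ports are free before `vagrant up`.
# The port list here is illustrative -- substitute your deployment's ports.
ports="8091 27017"
busy=""
for p in $ports; do
  # ss -ltn lists listening TCP sockets; a hit means the port is taken
  if ss -ltn 2>/dev/null | grep -q ":$p\b"; then
    busy="$busy $p"
  fi
done
if [ -n "$busy" ]; then
  echo "ports already in use:$busy"
else
  echo "all checked ports are free"
fi
```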
In addition, you need to create the Shared folders (between the Physical Host and your VMs) that will be used as Docker Volumes:
/data/db-01
- data-01 Volume
/data/db-02
- data-02 Volume
/data/db-03
- data-03 Volume
/data/config-01
- config-01 Volume
/data/config-02
- config-02 Volume
/data/config-03
- config-03 Volume
e.g. mkdir db-01 db-02 db-03 config-01 config-02 config-03 && chmod 666 db-* config-*
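The one-liner above can be expanded into a small script; a sketch that stages the folders under a configurable root (`DATA_ROOT` is our own variable name, defaulting to ./data here — point it at /data on the real host):

```shell
# Sketch: create the shared folders used as Docker volumes.
# DATA_ROOT is an illustrative variable; set it to /data on the real host.
DATA_ROOT="${DATA_ROOT:-./data}"
mkdir -p "$DATA_ROOT"/db-01 "$DATA_ROOT"/db-02 "$DATA_ROOT"/db-03 \
         "$DATA_ROOT"/config-01 "$DATA_ROOT"/config-02 "$DATA_ROOT"/config-03
# 666 matches the one-liner above; note that directories usually also need
# the execute bit (e.g. 777) for the container user to traverse them
chmod 666 "$DATA_ROOT"/db-* "$DATA_ROOT"/config-*
echo "created:" "$DATA_ROOT"/db-* "$DATA_ROOT"/config-*
```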
vagrant up
-> Start all VMs and install Docker on them
vagrant ssh cd
-> log in (ssh) to the cd VM, then run the Docker playbook:
ansible-playbook /vagrant/ansible/docker.yml -i /vagrant/ansible/hosts/cluster
vagrant ssh cd
-> log in (ssh) to the cd VM [skip this step if you are still logged in to the cd VM], then run the Swarm playbook to bootstrap the Docker Swarm setup:
ansible-playbook /vagrant/ansible/swarm.yml -i /vagrant/ansible/hosts/cluster
vagrant ssh cd
-> log in (ssh) to the cd VM [skip this step if you are still logged in to the cd VM], then run the Mongo playbook:
ansible-playbook /vagrant/ansible/mongo.yml -i /vagrant/ansible/hosts/cluster
To test writes and reads against the Mongo cluster, you can use the deployed Mongo Log API NodeJS app from the Docker repository gjovanov/mongo-log-api, also available in the /app folder.
This API app is deployed in the cluster on VMs mongo-01 and mongo-02 via the same docker stack file: /ansible/roles/mongo-swarm/docker_stack.yml
curl -d '{ }' -H "Content-Type: application/json" -X POST http://localhost:8091/api/log/write
curl "http://localhost:8091/api/log/read?size=20"
const mongoose = require('mongoose')

const db = mongoose.createConnection('mongodb://m-01:27017,m-02:27017/log', {
  family: 4, // important: the default is IPv6, which slows things down drastically (DNS lookup)
  useNewUrlParser: true
})
mongo "mongodb://mongo-01:27017,mongo-02:27017/log"
In case you have an old version of this repo, where the Mongo and Mongos networks were created with the subnet mask:
255.255.255.0
you will need to fix this manually.
To check if you have old subnet masks, run:
ifconfig
e.g. the output can look like this:
vboxnet0 Link encap:Ethernet HWaddr 0a:00:27:00:00:00
inet addr:10.100.198.1 Bcast:10.100.255.255 Mask:255.255.255.0
inet6 addr: fe80::800:27ff:fe00:0/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:331 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:27054 (27.0 KB)
To fix it you will need to run:
vboxmanage hostonlyif ipconfig vboxnet0 --ip 10.100.198.1 --netmask 255.255.0.0
You will need to run this for every vboxnet network with mask 255.255.255.0.
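Detecting the affected interfaces and generating the fix commands can be combined; a sketch that parses the classic ifconfig output shown above (a captured sample stands in for the live output here — on the real host, replace it with the actual `ifconfig` output):

```shell
# Sketch: print the vboxmanage fix for every vboxnet interface still on the
# old /24 mask. A captured ifconfig sample stands in for the live output.
ifconfig_out='vboxnet0 Link encap:Ethernet HWaddr 0a:00:27:00:00:00
          inet addr:10.100.198.1 Bcast:10.100.255.255 Mask:255.255.255.0'
fixes=$(printf '%s\n' "$ifconfig_out" | awk '
  /^vboxnet/ { ifc = $1 }              # remember the interface name
  /Mask:255\.255\.255\.0/ {            # old /24 mask found on this interface
    split($2, a, ":")                  # $2 is "addr:<ip>" -> a[2] is the IP
    printf "vboxmanage hostonlyif ipconfig %s --ip %s --netmask 255.255.0.0\n", ifc, a[2]
  }')
echo "$fixes"
```

Review the printed commands, then pipe them to `sh` to apply the fix on each interface.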