This playbook installs all the services of a CUL (PostgreSQL, OPV services, Celery and Flower). To use it you have to:
If you want to deploy a development environment, you can create containers with LXC.
The deployLXCdevContainers.sh script allows you to launch and configure two LXC containers (one master and one worker).
This script also generates the hosts file, so you can skip the next part!
./deployLXCdevContainers.sh
After that, you just have to run Ansible!
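If you want to check that both containers are up, the standard LXC tooling can list them along with their state and IP addresses (assuming the classic lxc-* userspace tools are installed):
lxc-ls --fancy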
The hosts file contains the IP address and name of the master and of all its workers. As we do not want to maintain a DNS server on the CUL, we populate the /etc/hosts file of each host with Ansible, so you have to fill in the hosts file as follows:
[ROLE]
HOSTNAME ansible_host=HOST_IP
There are two possible roles: Master and Worker.
The first entry must be the actual hostname of the host!
For example, imagine that we have three hosts: opv1, opv2 and opv3. We want opv1 to be the master and the others to be workers, so we fill in the hosts file as follows:
[Master]
opv1 ansible_host=192.168.1.2
[Worker]
opv2 ansible_host=192.168.1.3
opv3 ansible_host=192.168.1.4
For a complete example, you can look at the hosts_example file.
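Once the hosts file is filled in, you can check that Ansible can reach every host. This is a generic connectivity check, assuming the inventory file is named hosts and SSH access to the machines already works:
ansible all -i hosts -m ping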
This file is used to specify some settings, such as the database passwords for Airflow and OPV.
########################
## Main configuration ##
########################
# The id of the CUL (mallette is the old name of the CUL)
idMallette: 42
# The OPV_Master
OPVMaster: OPV_Master
##############################
## PostgreSQL configuration ##
##############################
# The password for opv database
postgresPasswdOPV: Testopv42
##########################
## Celery configuration ##
##########################
# The number of concurrent Celery worker processes
celery_concurency: 4
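Celery's default concurrency is the number of CPU cores of the machine, so the core count of your workers is a sensible starting point for celery_concurency. You can check it with:
nproc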
This playbook will install all the .deb packages and everything else that needs Internet access:
./launch_install.sh
This playbook performs the configuration step:
./launch_configuration.sh
To run the install and configuration steps in one go, you can use:
./launch.sh
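Once the playbooks have finished, you can do a quick sanity check that PostgreSQL is running on the master. This assumes a Debian-style systemd unit named postgresql; adjust the unit name to your distribution:
ssh root@opv_master "systemctl is-active postgresql"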
Connect to the master:
ssh root@opv_master
On the master, switch to the opv user and launch the processing of a campaign:
su - opv
opv-celery-campaign ID_CAMPAIGN ID_MALLETTE
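For example, with the idMallette of 42 set in the configuration above and a hypothetical campaign with id 1, the call would look like:
opv-celery-campaign 1 42
Flower, mentioned at the top of this page, provides a web UI for monitoring the Celery tasks. Assuming it runs on the master on its default port (5555), you can follow the tasks of the campaign at:
http://opv_master:5555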