Indaba puts your stakeholder and expert network at your fingertips. It converts their knowledge into data that you can analyze, publish, and use to make decisions.
Indaba has two seed scripts, `seed.js` and `test-seed.js`. The first creates only the superuser and the system messaging user, both on Indaba and on the Auth service. The second creates a slew of test users and a test organization, again on both services.

You will need a completed `.env` to run the scripts, including the `AUTH_SERVICE_SEED_ADMIN_*` set. Run `seed.js` first to create the superuser, then run `test-seed.js` to populate the test users, if needed.
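A minimal sketch of the run order, assuming the scripts are invoked directly with `node`; the exact variable names under the `AUTH_SERVICE_SEED_ADMIN_*` prefix below are hypothetical, so check your Auth service configuration for the real set:

```
# .env (hypothetical names -- verify the real AUTH_SERVICE_SEED_ADMIN_* set)
AUTH_SERVICE_SEED_ADMIN_USERNAME=admin
AUTH_SERVICE_SEED_ADMIN_PASSWORD=change-me
```

```
node seed.js       # superuser and system messaging user, on Indaba and the Auth service
node test-seed.js  # optional: test users and a test organization
```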
If a stale cache causes problems when rebuilding images, use the `--no-cache` option.

Start a PostgreSQL container:

```
docker run --name indaba-postgres -it -e POSTGRES_PASSWORD=indaba -p 5432:5432 -v /Amida/greyscale/backend/db_setup:/shared-volume -d postgres:9.6.5-alpine
```
What each flag does:

- `--name indaba-postgres` — `--name <new container name that will be seen by running 'docker ps'>`
- `-ti` — "terminal interactive" mode; properly formats output when you are using an interactive shell inside the container
- `-e POSTGRES_PASSWORD=indaba` — `-e <environment variable assignment>`; the same as running `export POSTGRES_PASSWORD=indaba` inside the container
- `-p 5432:5432` — `-p <port on the local machine>:<port inside the container>`
- `-v /Amida/greyscale/backend/db_setup:/shared-volume` — `-v <ABSOLUTE path to existing directory on local machine>:<ABSOLUTE path to new directory in container>` (a quick way to verify the mount is shown after this list)
- `-d` — run in detached mode. To attach, run `docker attach <name, as seen by 'docker ps'>`; to detach, press `ctrl-p`, `ctrl-q`
- `postgres:9.6.5-alpine` — `<docker image as seen by running 'docker ps'>:<docker tag as seen by running 'docker ps'>`. (A previous version of this README did not lock the tag, so it is probable that different installations are running different versions of PostgreSQL.)
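To confirm the volume mount worked (a quick sanity check; the listing depends on what is in your `db_setup` directory):

```
docker exec indaba-postgres ls /shared-volume
```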
Running the command for the first time should produce a screen similar to the image below. (Note: this image is missing the `-v` and `-ti` flags.)

When running after the Docker image is already present on the Docker server, a result similar to the image below should appear. (Note: this image is missing the `-v` flag.)
If you are running low on disk space or other system resources, run `docker ps -a` (the `-a` means "list all containers, even stopped containers") and check that you don't have a large number of unneeded or unused Docker containers. If you do, run `docker rm <name or container id>` for each container. If NONE of the containers are useful, run `docker rm $(docker ps -a -q)`. Please note, that command will give an error message for every container that is still running. To stop running containers, run `docker stop $(docker ps -a -q)`; you may then run `docker rm $(docker ps -a -q)` to remove all containers.
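Taken together, a full cleanup of every container looks like this:

```
docker ps -a                    # list all containers, including stopped ones
docker stop $(docker ps -a -q)  # stop any that are still running
docker rm $(docker ps -a -q)    # remove them all
```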
Confirm the `indaba-postgres` container is running:

```
docker ps
```

Open a shell inside the container:

```
docker exec -ti indaba-postgres bash
```

Create the database user and the database:

```
createuser -h localhost -U postgres -W -P -s indabauser
createdb -h localhost -U indabauser indaba
```

Use `psql` to restore an indaba database (from `/greyscale/backend/db_setup`):

```
psql -h localhost -U indabauser indaba < schema.indaba.sql
psql -h localhost -U indabauser indaba < /shared-volume/data.indaba.sql
```
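To confirm the restore worked, you can list the tables (a quick sanity check; the table names depend on the schema):

```
psql -h localhost -U indabauser indaba -c '\dt'
```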
Press `ctrl-d` to exit the bash shell in the container and return to the local machine.
Most people will not want to re-run the above commands every time they start a new docker container.
Stop the docker container:

```
docker stop indaba-postgres
```

Commit the container as a new Docker image. Remember, `docker ps -q -l` gets the ID of the last stopped container. If the indaba-postgres container was not the last stopped container, or if you are not sure that it was, you can get a list of all stopped containers by running `docker ps -a`:

```
docker commit $(docker ps -q -l) indaba-postgres
```

Start a new Docker container based on the new image:

```
docker run -d indaba-postgres
```
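You can confirm the committed image exists before starting a container from it:

```
docker images | grep indaba-postgres
```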
Find the container's IP address:

```
docker inspect indaba-postgres | grep IPAddress
```

Set the environment variables, then bring up the services:

```
export AUTH_SALT='nMsDo)_1fh' && export RDS_USERNAME=indabauser && export RDS_PASSWORD=indabapassword && export RDS_HOSTNAME=<ip address above> && export INDABA_PG_DB=indaba && export INDABA_ENV=dev
docker-compose up -d
```

Confirm everything is running with `docker ps`.
At this point, you will have a functioning Indaba backend with accompanying services. You can now run a local Indaba client for testing against the backend and services. When running the client, make sure to set the following vars in its `.env`:
If you need to free up space after development, remove stopped containers with `docker rm $(docker ps -aq)` and unused images with `docker rmi $(docker images -q)`.
A list of the full environment variable settings is below. They can either be set manually in the shell or included in the `.env` file. Defaults are indicated in parentheses.
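As a sketch, the variables from the `docker-compose` example above could live in `.env` like this (the hostname value is a placeholder; use the IP from `docker inspect indaba-postgres`):

```
AUTH_SALT='nMsDo)_1fh'
RDS_USERNAME=indabauser
RDS_PASSWORD=indabapassword
RDS_HOSTNAME=172.17.0.2
INDABA_PG_DB=indaba
INDABA_ENV=dev
```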
See the paper write-up for instructions on how to deploy with Kubernetes on AWS using Kops. The `kubernetes.yml` file included in the project root directory contains the deployment definition for this project.
The `.pgpass` file is needed by the `Dockerfile` to run the `seed.js` script upon startup. This is only necessary when seeding password-protected databases. You will need to ensure that the file is configured with the correct database parameters, in this format: `hostname:port:database:username:password`.
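For example, with the database settings used earlier in this README (adjust the hostname for your deployment):

```
localhost:5432:indaba:indabauser:indabapassword
```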
If using AWS ElastiCache and RDS to deploy Postgres, make sure the instance is configured with the appropriate security groups to allow traffic from the cluster's instances. The paper doc referenced above describes how this can be done.
NOTE: Container Engine SQL support in Google Cloud is bad right now and will probably change. For this reason, we do not give DB setup instructions here. You may attempt to use Google Cloud SQL, or use a Postgres container as shown above.
Configure `gcloud` defaults:

```
gcloud config set project PROJECT_ID
gcloud config set compute/zone ZONE_ID
```

Launch a cluster:

```
gcloud container clusters create greyscale-cluster --num-nodes=3
# confirm the running nodes
gcloud compute instances list
```
Set the appropriate environment variables in `.env`.
Use `kompose` to convert the `docker-compose.yml` into Kubernetes `.yaml` files:

```
# in the project root dir
kompose convert
```
Use `kubectl` to deploy the services:

```
# you may need to authenticate first
gcloud auth application-default login

# create the pods
kubectl create -f indaba-backend-service.yaml,indaba-backend-deployment.yaml

# to verify in the kubernetes dashboard:
kubectl proxy
# then navigate to localhost:8001/ui
```
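You can also check the deployment from the command line:

```
kubectl get pods
kubectl get services
```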
Clean up the cluster when you are finished:

```
gcloud container clusters delete greyscale-cluster
```

To also remove local containers and images:

```
docker rm $(docker ps -aq)
docker rmi $(docker images -q)
```
Contributors are welcome. See the issues at https://github.com/amida-tech/greyscale/issues.
Licensed under Apache 2.0