We can think of Postoffice as a real post office: you send messages to a topic, and publishers deliver them to anyone interested in that topic. If a receiver is not available, Postoffice will try to deliver the message later.

Postoffice uses a pub/sub approach, so instead of handling receivers' addresses it uses topics, to which receivers must subscribe through publishers.
Each publisher is isolated from the others and handles its own pending messages.
This project started as a solution to buffer messages for apps deployed on-premise that could suffer connectivity issues. It then evolved to also offer a pub/sub mechanism.
This is not designed to be realtime. Postoffice uses GenStage to process pending messages, creating a process tree for each publisher. It works like an ETL and is refreshed every 10 seconds.
A health check endpoint is available at `/api/health`.
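As a quick smoke test, you can hit it with curl (a sketch, assuming Postoffice is running locally on its default port 4000):

```
$ curl http://localhost:4000/api/health
```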
We expose an API that lets projects create the structures they need to work: topics, publishers and messages.
For both topics and publishers, if the resource already exists we return `409 Conflict`. If any other validation error occurs, we return `400 Bad Request`.
Here is a sample request to create a topic. All fields are required:

```
POST /api/topics
{
  "name": "example-topic",
  "origin_host": "sender_service.com"
}
```
Attributes:
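For reference, the request above could be issued with curl as follows (a sketch; host and port assume a local instance):

```
$ curl -X POST http://localhost:4000/api/topics \
    -H "Content-Type: application/json" \
    -d '{"name": "example-topic", "origin_host": "sender_service.com"}'
# Repeating the same request returns 409 Conflict, since the topic already exists.
```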
Publisher creation example. The only non-required field is `from_now`:
```
POST /api/publishers
{
  "active": true,
  "topic": "example-topic",
  "target": "http://myservice.com/examples",
  "type": "http/pubsub",
  "from_now": true
}
```
Attributes:
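A matching curl call might look like this (a sketch; `"http"` is used here as one of the two values suggested by `http/pubsub` above, and the target URL is only illustrative):

```
$ curl -X POST http://localhost:4000/api/publishers \
    -H "Content-Type: application/json" \
    -d '{
          "active": true,
          "topic": "example-topic",
          "target": "http://myservice.com/examples",
          "type": "http",
          "from_now": true
        }'
```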
Message creation example. All fields are required:
```
POST /api/messages
{
  "topic": "example-topic",
  "payload": {},
  "attributes": {}
}
```
Attributes:
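And a curl sketch for publishing a message (the payload shown is an arbitrary example):

```
$ curl -X POST http://localhost:4000/api/messages \
    -H "Content-Type: application/json" \
    -d '{"topic": "example-topic", "payload": {"id": 1}, "attributes": {}}'
```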
To start your Phoenix server:

Install Elixir (here via Homebrew):

```
brew update
brew install elixir
```
Create the following environment variables in order to start the application:

- `GOOGLE_APPLICATION_CREDENTIALS` with the absolute path to the Pub/Sub credentials file.
- `GCLOUD_PUBSUB_PROJECT_ID` with the project_id used.
- `MAX_BULK_MESSAGES` with the maximum number of messages that Postoffice is able to consume.

Note: there are some dummy credentials that can be used for local development:
```
$ export GOOGLE_APPLICATION_CREDENTIALS=`pwd`/config/dummy-credentials.json
$ export GCLOUD_PUBSUB_PROJECT_ID=fake
$ export MAX_BULK_MESSAGES=10
```
Install Hex and the Phoenix project generator:

```
mix local.hex
mix archive.install hex phx_new 1.4.11
```
- Install dependencies with `mix deps.get`
- Run `docker-compose -f docker/docker-compose.yml up -d` to start a new Postgres database
- Create and migrate your database with `mix ecto.setup`
- Execute `npm install` inside `assets/`
- Start the Phoenix endpoint with `mix phx.server`

Now you can visit `localhost:4000` from your browser or run tests with `mix test`.
To start the Postoffice bundle with Docker:

- `make build`
- `make env-start` [1]

Now you can visit `localhost:4001` from your browser. Run the tests with `make test` and view the logs with `make view-logs`.
[1] While `make env-start` is running, you can execute `make view-logs` in another terminal to see what is happening.
- `GOOGLE_APPLICATION_CREDENTIALS` is the path to the Google Cloud Platform service account JSON file.
- `GCLOUD_PUBSUB_PROJECT_ID` is the Google Cloud Platform project where your Pub/Sub topics/subscriptions are located.
- `MAX_BULK_MESSAGES` is the maximum number of messages that Postoffice is able to insert in bulk.
- `CLEAN_MESSAGES_THRESHOLD` defines how long (in seconds) historical data is kept in the `sent_messages` table.
- `CLEAN_MESSAGES_CRONTAB` defines when the Oban cron job that cleans historical data from the `sent_messages` table should run. It must be a valid crontab expression.
- `CLUSTER_NAME` defines the cluster name, used to identify the source of historical data in Pub/Sub when multiple clusters are running.

Postoffice has been developed to run as a cluster. We use libcluster under the hood to form the cluster; take a look at its documentation in case you want to tune its settings.
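As an illustration, a local configuration might look like this (all values below are placeholders; the crontab entry, for instance, runs the cleanup daily at 03:00):

```
$ export GOOGLE_APPLICATION_CREDENTIALS=/etc/postoffice/service-account.json
$ export GCLOUD_PUBSUB_PROJECT_ID=my-gcp-project
$ export MAX_BULK_MESSAGES=10
$ export CLEAN_MESSAGES_THRESHOLD=86400   # keep one day of sent_messages history
$ export CLEAN_MESSAGES_CRONTAB="0 3 * * *"
$ export CLUSTER_NAME=postoffice-prod
```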
Some desired features have been delayed until the first release:

- Configure `max_demand` through environment variables.