Currently, consumers interact with the log via a JDBC connection to the Postgres database. Each consumer opens a persistent connection to the database per tenant/org-id; this is required for the LISTEN/NOTIFY setup to work.
The more tenants we have, the more connections a particular consumer needs to open if it is interested in reacting to all instances (e.g. the ElasticSearch and CartoDB consumers). This setup has scalability issues.
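To make the scaling problem concrete: with one persistent LISTEN connection per tenant per consumer, the connection count grows multiplicatively. A minimal sketch (the tenant and consumer counts below are illustrative, not measured):

```python
# Each consumer that wants events from every tenant must hold one
# persistent LISTEN connection per tenant, so the total number of
# Postgres connections grows multiplicatively.
def total_connections(num_tenants: int, num_global_consumers: int) -> int:
    return num_tenants * num_global_consumers

# e.g. 500 tenants watched by 3 "all instances" consumers
# (ElasticSearch, CartoDB, ...) already needs 1500 persistent
# Postgres connections.
print(total_connections(500, 3))  # 1500
```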
The preliminary idea is that consumers interact via an HTTP endpoint (e.g. GET /events), with some filtering available to fetch only the interesting events, e.g.:
offset - the offset from which to start sending events
limit - the maximum number of events per request (the server will enforce an internal maximum by default)
eventType - a list of event types the consumer is interested in
A consumer can opt in to be notified when new data is available via another channel (e.g. WebSockets or SSE)
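A consumer against such an endpoint would poll with the parameters above, advancing its offset as it goes. A minimal sketch of building the request URL (the base URL, helper name, and event-type values are hypothetical; the parameter names mirror the proposal above):

```python
from urllib.parse import urlencode

# Hypothetical helper building a polling URL for the proposed
# GET /events endpoint. "offset", "limit" and "eventType" follow
# the filtering parameters described above; the event-type names
# themselves are made up for illustration.
def build_events_url(base_url: str, offset: int, limit: int,
                     event_types: list[str]) -> str:
    params = [("offset", offset), ("limit", limit)]
    # eventType is repeated once per interesting event type
    params += [("eventType", e) for e in event_types]
    return f"{base_url}/events?{urlencode(params)}"

url = build_events_url("https://example.org", 1024, 100,
                       ["row_inserted", "row_deleted"])
print(url)
```

A consumer that opted in to WebSockets or SSE would use those notifications only as a wake-up signal, then fetch the actual events through the same GET /events call starting at its last known offset.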