Background context
Our data streaming pipeline identifies data change events triggered by any application in the Reapit suite of products. Each event triggers an ETL operation involving our platform services to populate a secondary data store in the schema of our APIs. This process works well for continual CDC events as they occur, but it is inefficient for loading a new customer data set into the platform; a dedicated bulk load procedure would be better.
Specification
Build new Lambda service code to bulk load data into our secondary data store.
More information TBC.
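As a starting point while the full specification is TBC, the bulk load Lambda might look something like the sketch below. This is only an illustration under stated assumptions: the event shape (`event["records"]`), the batch size, and the absence of a real write to the secondary data store are all hypothetical placeholders, not part of the agreed design.

```python
from typing import Iterable, List

# Hypothetical batch size; the real limit depends on the secondary data store.
BATCH_SIZE = 25


def chunk(records: List[dict], size: int = BATCH_SIZE) -> Iterable[List[dict]]:
    """Split a customer's records into write-sized batches."""
    for i in range(0, len(records), size):
        yield records[i:i + size]


def handler(event, context=None):
    """Assumed Lambda entry point for the bulk load.

    `event["records"]` is a placeholder payload shape; the real input
    (e.g. an S3 location or a customer identifier) is TBC.
    """
    records = event.get("records", [])
    batches = list(chunk(records))
    # In the real service each batch would be written to the secondary
    # data store via the platform services; here we only report counts.
    return {"batches": len(batches), "records": len(records)}
```

Batching the writes (rather than replaying records one at a time through the CDC path) is the main efficiency gain a dedicated load procedure would provide.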