This introduces initial support for KafkaSql snapshotting.
It follows the approach below:
A call to the endpoint /admin/config/triggerSnapshot is made. For KafkaSql, this sends a snapshot marker message with a snapshot id to the journal topic, marking the point at which the snapshot was triggered.
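Conceptually, producing the marker looks something like the sketch below. This is not the actual implementation; the topic name (`kafkasql-journal`), the marker payload, and the key layout are assumptions used only for illustration.

```java
// Minimal sketch, assuming the snapshot id is used as the record key and a
// constant value marks the record as a snapshot marker.
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SnapshotTrigger {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        String snapshotId = UUID.randomUUID().toString();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The snapshot id travels with the marker so the consumer can later
            // correlate the journal position with the snapshot record.
            ProducerRecord<String, String> marker =
                    new ProducerRecord<>("kafkasql-journal", snapshotId, "SNAPSHOT_MARKER");
            producer.send(marker);
            producer.flush();
        }
    }
}
```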
The consumer thread reads the marker message, creates an SQL dump of the H2 database, stores it in the configured location, and sends a message to a snapshots topic using the snapshotId above as the message key. This topic is used to keep track of all the snapshots.
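A rough sketch of that consumer-side handling is below; the JDBC URL, snapshots topic name (`kafkasql-snapshots`), and method names are assumptions. It relies on H2's `SCRIPT TO` command to write the dump.

```java
// Sketch: on seeing a snapshot marker, dump the in-memory H2 database and
// record the dump location in the snapshots topic, keyed by the snapshot id.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SnapshotCreator {

    public void onSnapshotMarker(String snapshotId, String snapshotsLocation) throws Exception {
        String dumpPath = snapshotsLocation + "/" + snapshotId + ".sql";

        // Dump the internal H2 database to a plain SQL script.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:registry");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SCRIPT TO '" + dumpPath + "'");
        }

        // Publish the dump location to the snapshots topic so future startups
        // can find the most recent snapshot.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("kafkasql-snapshots", snapshotId, dumpPath));
            producer.flush();
        }
    }
}
```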
At application startup, the snapshots topic is consumed looking for the most recent snapshot. If one is present, the application loads the SQL dump from the location referenced in the message and restores the internal database from it, without running any of the usual SQL database initialization. This is important for upgrade procedures.
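The startup flow could look roughly like the following sketch, again with assumed names: read the snapshots topic from the beginning, keep the most recent record, and restore the H2 database from the referenced dump with `RUNSCRIPT`, bypassing the normal schema initialization.

```java
// Sketch: find the latest snapshot record and restore the database from it.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SnapshotRestorer {

    public String restoreLatestSnapshot() throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "registry-snapshot-restore");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        String latestSnapshotId = null;
        String latestDumpPath = null;
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("kafkasql-snapshots"));
            ConsumerRecords<String, String> records;
            // Drain the (small) snapshots topic; the last record wins.
            while (!(records = consumer.poll(Duration.ofSeconds(2))).isEmpty()) {
                for (ConsumerRecord<String, String> record : records) {
                    latestSnapshotId = record.key();
                    latestDumpPath = record.value();
                }
            }
        }

        if (latestDumpPath != null) {
            // Restore directly from the dump, skipping normal initialization.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:registry");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("RUNSCRIPT FROM '" + latestDumpPath + "'");
            }
        }
        // The restored snapshot id tells the journal consumer which marker to
        // skip to when it starts reading.
        return latestSnapshotId;
    }
}
```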
Once the database has been restored, the consumer thread starts consuming the messages in the journal topic, skipping everything until the corresponding snapshot marker message is found.
The rest of the messages on top of the snapshot are processed as normal and dispatched to the SQL storage as appropriate.
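The skip-and-replay logic might look like this sketch (class and method names are hypothetical, and it assumes the marker's key equals the snapshot id, as in the earlier sketch): skip journal messages up to and including the matching marker, then dispatch the rest as usual.

```java
// Sketch: replay the journal after a restore, skipping messages already
// reflected in the restored snapshot.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class JournalReplayer {

    public void replay(String restoredSnapshotId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "registry-journal");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        boolean markerSeen = (restoredSnapshotId == null); // nothing to skip if no snapshot
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("kafkasql-journal"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    if (!markerSeen) {
                        // Skip journal messages already captured by the snapshot.
                        if (restoredSnapshotId.equals(record.key())) {
                            markerSeen = true;
                        }
                        continue;
                    }
                    applyToSqlStorage(record); // dispatch to the SQL storage as normal
                }
            }
        }
    }

    private void applyToSqlStorage(ConsumerRecord<String, String> record) {
        // Placeholder for the normal journal-message dispatch.
    }
}
```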
One note:
The snapshot is currently triggered using an endpoint under the admin path that returns no response body, just a 200 when it succeeds.
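For example, a client could trigger it roughly like this; the host/port are placeholders and the HTTP method is an assumption, only the path comes from the description above.

```java
// Hedged example: call the trigger endpoint and check for a 200.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TriggerSnapshot {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/admin/config/triggerSnapshot"))
                .POST(HttpRequest.BodyPublishers.noBody()) // method assumed to be POST
                .build();
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Status: " + response.statusCode()); // expect 200, no body
    }
}
```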
PS: Don't be scared by the 22k lines changed; 22,007 of them are just the snapshot I added for the unit test.