Closed — Roiocam closed this pull request 10 months ago.
After several months, I finally have the time to complete this merge. I've thoroughly reviewed the migration's changes, and here are my considerations.
Regarding the `event_tag` table, there are two essential components:

- `ordering`: used to sequence events in a specific stream.
- a foreign key: used to load event payloads lazily (lazy event loading means joining the table and querying).
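The lazy-load path above can be sketched with a minimal, hypothetical schema. This is a plain sqlite3 illustration of "join table and query", not the plugin's actual Slick DAO code; the table and column names only mirror the ones discussed here:

```python
import sqlite3

# Illustrative stand-ins for event_journal and event_tag (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event_journal (
    ordering        INTEGER PRIMARY KEY AUTOINCREMENT,
    persistence_id  TEXT    NOT NULL,
    sequence_number INTEGER NOT NULL,
    event_payload   BLOB    NOT NULL,
    UNIQUE (persistence_id, sequence_number)
);
CREATE TABLE event_tag (
    event_id INTEGER NOT NULL REFERENCES event_journal (ordering),
    tag      TEXT    NOT NULL
);
""")
conn.execute(
    "INSERT INTO event_journal (persistence_id, sequence_number, event_payload) VALUES (?, ?, ?)",
    ("p-1", 1, b"created"))
conn.execute("INSERT INTO event_tag (event_id, tag) VALUES (1, 'user')")

# The tag row only holds a foreign key, so fetching a tagged event's
# payload requires a join back to the journal table.
row = conn.execute("""
    SELECT j.persistence_id, j.sequence_number, j.event_payload
    FROM event_tag t
    JOIN event_journal j ON j.ordering = t.event_id
    WHERE t.tag = 'user'
    ORDER BY j.ordering
""").fetchone()
print(row)  # ('p-1', 1, b'created')
```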
In the previous approach, we used the `event_id` as the foreign key. However, this incurred some cost, since we had to wait for all events to be inserted before retrieving their generated IDs. Moreover, the Slick plugin had performance issues with `InsertAndReturn`, which led to individual inserts instead of batch inserts.

On the other hand, we can employ the primary key of the `event_journal` table (`PERSISTENCE_ID`, `SEQUENCE_NUMBER`) instead of the `ordering` column as the foreign key. This modification removes the need for `InsertAndReturn`.
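The shape of the cost difference can be sketched in plain sqlite3 (hypothetical tables; this mirrors the structure of the problem, not Slick's actual code path): with a generated-id foreign key, every journal insert must round-trip to learn its id before the tag row can be written, whereas a natural-key foreign key is known up front and allows batch inserts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event_journal (
    ordering        INTEGER PRIMARY KEY AUTOINCREMENT,
    persistence_id  TEXT    NOT NULL,
    sequence_number INTEGER NOT NULL
);
CREATE TABLE event_tag_old (event_id INTEGER NOT NULL, tag TEXT NOT NULL);
CREATE TABLE event_tag_new (
    persistence_id  TEXT    NOT NULL,
    sequence_number INTEGER NOT NULL,
    tag             TEXT    NOT NULL
);
""")

events = [("p-1", n) for n in range(1, 4)]

# Old way: each insert must return its generated id (the analogue of
# InsertAndReturn) before the tag row can reference it, one row at a time.
for pid, seq in events:
    cur = conn.execute(
        "INSERT INTO event_journal (persistence_id, sequence_number) VALUES (?, ?)",
        (pid, seq))
    conn.execute("INSERT INTO event_tag_old (event_id, tag) VALUES (?, 'user')",
                 (cur.lastrowid,))

# New way: tag rows are keyed by (persistence_id, sequence_number), which is
# known before the insert, so the whole batch goes in with one statement.
conn.executemany(
    "INSERT INTO event_tag_new (persistence_id, sequence_number, tag) VALUES (?, ?, 'user')",
    events)

print(conn.execute("SELECT COUNT(*) FROM event_tag_old").fetchone()[0])  # 3
print(conn.execute("SELECT COUNT(*) FROM event_tag_new").fetchone()[0])  # 3
```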
To ensure a rolling update when shifting to the "new way," we propose a phased rollout with steps controlled by a configuration property:

1. Add the new columns:

```sql
ALTER TABLE event_tag
    ADD PERSISTENCE_ID VARCHAR(255),
    ADD SEQUENCE_NUMBER BIGINT;
```

2. Enable the redundant write/read mode, so rows carry both the old and the new key while old and new nodes coexist:

```hocon
jdbc-journal.tables.event_tag.redundant-write = true
jdbc-read-journal.tables.event_tag.redundant-read = true
```

3. Once all nodes are writing both keys, migrate the schema (MySQL syntax):

```sql
-- delete rows that only carry the old FK (written before the rollout)
DELETE
FROM event_tag
WHERE PERSISTENCE_ID IS NULL
  AND SEQUENCE_NUMBER IS NULL;

-- drop the old FK constraint
SELECT CONSTRAINT_NAME
INTO @fk_constraint_name
FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS
WHERE TABLE_NAME = 'event_tag';
SET @alter_query = CONCAT('ALTER TABLE event_tag DROP FOREIGN KEY ', @fk_constraint_name);
PREPARE stmt FROM @alter_query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- drop the old PK constraint
ALTER TABLE event_tag
    DROP PRIMARY KEY;

-- create the new PK constraint
ALTER TABLE event_tag
    ADD CONSTRAINT
        PRIMARY KEY (PERSISTENCE_ID, SEQUENCE_NUMBER, TAG);

-- create the new FK constraint against event_journal's primary key
ALTER TABLE event_tag
    ADD CONSTRAINT fk_event_journal_on_pk
        FOREIGN KEY (PERSISTENCE_ID, SEQUENCE_NUMBER)
            REFERENCES event_journal (PERSISTENCE_ID, SEQUENCE_NUMBER)
            ON DELETE CASCADE;

-- make EVENT_ID nullable, so we can skip InsertAndReturn
ALTER TABLE event_tag
    MODIFY COLUMN EVENT_ID BIGINT UNSIGNED NULL;
```

4. Disable the redundant write/read mode:

```hocon
jdbc-journal.tables.event_tag.redundant-write = false
jdbc-read-journal.tables.event_tag.redundant-read = false
```
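During the redundant-write window (step 2), the invariant is that every tag row carries both keys, so old nodes reading by `event_id` and upgraded nodes reading by `(persistence_id, sequence_number)` can coexist. A minimal sqlite3 sketch of that invariant (hypothetical schema, not the plugin's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event_tag (
    event_id        INTEGER,   -- old FK, still written during the rollout
    persistence_id  TEXT,      -- new FK columns, added in step 1
    sequence_number INTEGER,
    tag             TEXT NOT NULL
);
""")

# redundant-write = true: populate BOTH the old and the new key on insert.
conn.execute("INSERT INTO event_tag VALUES (?, ?, ?, ?)", (1, "p-1", 1, "user"))

# An old node still resolves the tag via event_id...
old_path = conn.execute("SELECT tag FROM event_tag WHERE event_id = 1").fetchone()
# ...while an upgraded node resolves it via the natural key.
new_path = conn.execute(
    "SELECT tag FROM event_tag WHERE persistence_id = ? AND sequence_number = ?",
    ("p-1", 1)).fetchone()
print(old_path, new_path)  # ('user',) ('user',)
```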
I did some tests and profiling to verify the performance improvement.

A flame graph provides two pieces of information: the original approach incurs overhead not only during the commit (insert), but also during result conversion. Furthermore, just 6 samples account for 0.46% + 0.3% of CPU time (although this is not strict proof).

After this PR, the most obvious change is the elimination of the overhead in result conversion. Additionally, there is an improvement in execution efficiency on the database side (though this cannot be demonstrated by the flame graph).
Could you add the following to a file: `akka-persistence-jdbc/core/src/main/mima-filters/5.2.1.backwards.excludes/issue-710-tag-fk.excludes`

```
ProblemFilters.exclude[IncompatibleSignatureProblem]("akka.persistence.jdbc.journal.dao.JournalTables#EventTags.eventId")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.eventId")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.copy")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.copy$default$1")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.copy$default$2")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.this")
ProblemFilters.exclude[MissingTypesProblem]("akka.persistence.jdbc.journal.dao.JournalTables$TagRow$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.apply")
ProblemFilters.exclude[IncompatibleSignatureProblem]("akka.persistence.jdbc.journal.dao.JournalTables#TagRow.unapply")
```
There are also some test failures from CI; could you take a look at those?
> Could you add the following to a file: `akka-persistence-jdbc/core/src/main/mima-filters/5.2.1.backwards.excludes/issue-710-tag-fk.excludes`

Of course.

> There are also some test failures from CI; could you take a look at those?
Renato has some suggestions on this PR that sparked some thoughts in me: perhaps we can simplify the rolling-update steps with a migration SQL script.

I have some ideas, but I am currently tied up with other issues and don't have time at the moment. I will fix and verify them over the weekend.
The migration guide will be added in a new pull request.

@octonato could you please take a look again? Thanks.
References #710