68 put us in better standing, but stats will still drift over time.
Options:
1 - do nothing; the user can periodically re-sync stats by running: delete from dux_column_stats_ops; delete from dux_column_stats_values; delete from dux_column_stats;
2 - trigger-based indexing
3 - manual re-index when user submits edits through our UI
I think I'm most curious to try (2). min/max will still drift out of sync; I think I'm OK with that.
The approach I'm thinking of: add AFTER triggers for INSERT, UPDATE and DELETE. They should shove a row into dux_pending_rows with a copy of the OLD and NEW rows. Our worker will discover items in this queue and index them.
Maintaining the triggers will be a bit fiddly, but I think this is all very doable.
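The worker side could be a simple poll-and-drain loop over the queue table. A minimal sketch, assuming the dux_pending_rows queue shape sketched below; index_row() is a hypothetical stand-in for the real stats-update code:

```python
import json
import sqlite3

indexed = []

def index_row(table, old, new):
    # Hypothetical: recompute stats for one changed row.
    indexed.append((table, old, new))

def drain_pending_rows(conn, batch_size=100):
    """Consume queued changes in id order, deleting each batch once handled."""
    while True:
        rows = conn.execute(
            'SELECT id, "table", old, new FROM dux_pending_rows '
            "ORDER BY id LIMIT ?", (batch_size,)).fetchall()
        if not rows:
            return
        for _id, table, old, new in rows:
            index_row(table,
                      json.loads(old) if old is not None else None,
                      json.loads(new) if new is not None else None)
        # ids are monotonic, so rows queued after our SELECT have larger
        # ids and survive this delete.
        conn.execute("DELETE FROM dux_pending_rows WHERE id <= ?",
                     (rows[-1][0],))
        conn.commit()

# Demo: queue one synthetic INSERT and drain it.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE dux_pending_rows('
             'id INTEGER PRIMARY KEY, "table" TEXT NOT NULL, '
             'old TEXT, new TEXT)')
conn.execute('INSERT INTO dux_pending_rows("table", old, new) '
             "VALUES ('table1', NULL, '{\"id\": 1}')")
conn.commit()
drain_pending_rows(conn)
```

Deleting by "id <= last seen" rather than truncating the table means changes that land mid-drain are kept for the next pass.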
Something like:
CREATE TABLE dux_pending_rows(
id integer primary key,
"table" text not null,
old text,
new text,
timestamp text not null default (strftime('%Y-%m-%d %H:%M:%fZ'))
);
CREATE TRIGGER dux_insert_table1_v1 AFTER INSERT ON table1 FOR EACH ROW BEGIN
INSERT INTO dux_pending_rows("table", "old", "new") VALUES ('table1', NULL,
json_object('id', new.id, 'name', new.name, ...)
);
END;
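The UPDATE and DELETE variants follow the same pattern, with OLD populated. A runnable sketch against an in-memory SQLite database, using a made-up table t(id, name) to stand in for a real user table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dux_pending_rows(
  id integer primary key,
  "table" text not null,
  old text,
  new text,
  timestamp text not null default (strftime('%Y-%m-%d %H:%M:%fZ'))
);
CREATE TABLE t(id integer primary key, name text);

-- INSERT: no OLD row. UPDATE: both. DELETE: no NEW row.
CREATE TRIGGER dux_insert_t_v1 AFTER INSERT ON t FOR EACH ROW BEGIN
  INSERT INTO dux_pending_rows("table", old, new)
  VALUES ('t', NULL, json_object('id', NEW.id, 'name', NEW.name));
END;
CREATE TRIGGER dux_update_t_v1 AFTER UPDATE ON t FOR EACH ROW BEGIN
  INSERT INTO dux_pending_rows("table", old, new)
  VALUES ('t',
          json_object('id', OLD.id, 'name', OLD.name),
          json_object('id', NEW.id, 'name', NEW.name));
END;
CREATE TRIGGER dux_delete_t_v1 AFTER DELETE ON t FOR EACH ROW BEGIN
  INSERT INTO dux_pending_rows("table", old, new)
  VALUES ('t', json_object('id', OLD.id, 'name', OLD.name), NULL);
END;
""")

# One of each DML kind should queue three rows.
conn.execute("INSERT INTO t(name) VALUES ('a')")
conn.execute("UPDATE t SET name = 'b' WHERE id = 1")
conn.execute("DELETE FROM t WHERE id = 1")
rows = conn.execute(
    'SELECT "table", old, new FROM dux_pending_rows ORDER BY id').fetchall()
```

Note that the expression default on timestamp needs the parentheses, and "table" must stay quoted since it's a reserved word in SQLite.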