Closed toby-brilliant closed 3 weeks ago
Is your feature suggestion related to a problem? Please describe.
I'm trying to migrate an application with ~5 million version records from YAML to JSON serialization to take advantage of several of the quality-of-life features PaperTrail provides. I've gotten to the point where my schema looks like this:
I have asynchronously backfilled `object_json` and `object_changes_json` so they capture the same information as the YAML columns, and am only writing YAML for new records. I've gotten this far by monkey patching PaperTrail. To deploy this in a zero-downtime capacity, I have to cut over incrementally. Conventionally this is done by moving the reads over to the new column, then dropping the old column. Unfortunately, PaperTrail references the column names directly in code like:
which means I can't do that. One possible way out is to monkey patch every place PaperTrail directly references these columns, but this feels like a problem anyone attempting a zero-downtime migration would face.
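To illustrate the monkey-patch approach: the snippet below is a self-contained sketch, not PaperTrail's real internals. The `Event` class and its `object_attrs` method are hypothetical stand-ins for gem code that hardcodes a column name; the `prepend` pattern shown is the general technique for redirecting such reads during a cutover.

```ruby
module PaperTrailShim
  # Hypothetical stand-in for a gem internal that reads a hardcoded column.
  class Event
    def object_attrs(record)
      record["object"] # hardcoded column name
    end
  end

  # Patch that prefers the new JSON column when it has been backfilled,
  # falling back to the original YAML column otherwise.
  module JsonColumnRead
    def object_attrs(record)
      record["object_json"] || super
    end
  end

  Event.prepend(JsonColumnRead)
end

event = PaperTrailShim::Event.new
event.object_attrs("object" => "--- yaml", "object_json" => { "id" => 1 })
# => {"id"=>1}
event.object_attrs("object" => "--- yaml") # => "--- yaml" (fallback)
```

Every hardcoded reference in the gem would need its own patch like this, which is why a configuration option is preferable.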
Describe the solution you'd like to build
Parameterize all references to the `object` and `object_changes` column names so they can be overridden by configuration.

Describe alternatives you've considered