BenoitAverty opened 1 year ago
Hello,
Context: I'm migrating a database from MySQL to Postgres. Two of the tables in the DB contain lots of rows with some heavy columns (containing JSON). When I try to migrate the DB, the job fails with a "heap exhausted" message. The data in these tables could be skipped in the migration, because it's tracing data that will be quickly regenerated anyway.
I'm using the latest docker image of pgloader.
What I tried: defining a materialized view over each heavy table as

```
select * from heavy_table where false
```

This works, but the materialized views don't have the indices and foreign keys of the original tables, so I'd need to create those manually, with a risk of human error in case they change before the final production migration. The materialized view also doesn't carry the auto-increment needed to convert the column type to serial in the Postgres schema.
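For reference, here is roughly the kind of load command I'm using for this workaround (connection strings and the `heavy_table_empty` name are placeholders, and I may have the clause details slightly off):

```
load database
     from mysql://user:pass@mysql-host/source_db
     into postgresql://user:pass@pg-host/target_db

 with include drop, create tables, create indexes, reset sequences

 -- give pgloader an empty "copy" of the heavy table to migrate
 materialize views heavy_table_empty as
 $$ select * from heavy_table where false $$

 -- and leave the real heavy table out of the migration
 excluding table names matching 'heavy_table';
```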
Feature request: having the option to migrate a table (schema, indices, foreign keys, type casting...) but skip all the data inside.

I would also take a workaround like the materialized view above, but without having to manually recreate part of the table. Materialized views are a great mechanism in pgloader, but the missing auto-increment and indices are a problem; maybe I just didn't find the correct way to do it?
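For example, just restoring the auto-increment by hand on the Postgres side already means running something like the following after the migration (hypothetical `heavy_table.id` names), and remembering to keep it in sync with any schema changes:

```
-- Hypothetical names: recreate the sequence and default that the
-- materialized view workaround loses, i.e. what "serial" would have set up.
CREATE SEQUENCE heavy_table_id_seq OWNED BY heavy_table.id;
ALTER TABLE heavy_table
  ALTER COLUMN id SET DEFAULT nextval('heavy_table_id_seq');
-- Make the sequence start after any rows already present.
SELECT setval('heavy_table_id_seq',
              COALESCE((SELECT max(id) FROM heavy_table), 0) + 1,
              false);
```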
Thanks in advance

Comments:

I am bumping into the same issue, with rows with some heavy columns (containing JSON). Did you find a solution, or did you go with the workaround?

I have a similar issue, but I need to preserve the data in those tables.