Open · aaronsteers opened this issue 3 years ago
At my work, we have some huge tables in Postgres that we want to ingest into Redshift. A few are around 5 million rows, a couple are around 50 million, and one is about 5 billion.
Batch messages would speed this up dramatically. Right now, ingesting even several million rows at a time is error-prone for us, so the initial ingestion of tables like these is troublesome, and the 5-billion-row table is simply infeasible.
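For context, the win here is that instead of streaming billions of individual `RECORD` messages through stdout, the tap can write rows to bulk files and hand the target a manifest in a single `BATCH` message (per the Meltano SDK's batch spec). A minimal sketch of the message shape; the stream name and file URIs are made up:

```python
import json

# A single BATCH message replaces millions of RECORD messages: the tap
# uploads gzipped JSONL files somewhere the target can read, then emits
# one message whose manifest points at them.
batch_message = {
    "type": "BATCH",
    "stream": "public-big_table",
    "encoding": {"format": "jsonl", "compression": "gzip"},
    "manifest": [
        "s3://example-bucket/batches/big_table-0001.jsonl.gz",
        "s3://example-bucket/batches/big_table-0002.jsonl.gz",
    ],
}
print(json.dumps(batch_message))  # Singer messages are JSON lines on stdout
```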
There have been a couple of updates in Slack related to this:
Yeah, I've also been frustrated by the lack of maintainer response on the transferwise variant. I've had a PR open there for a while, and it seems there's been no activity in something like 18 months.
More updates in Slack: https://meltano.slack.com/archives/C013Z450LCD/p1684388483962929.
There are a few new forks, and a user is looking to take over maintenance of the default target.
Possible inspiration / starting point: https://github.com/transferwise/pipelinewise-target-redshift
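To make the Redshift angle concrete: what batch messages unlock is letting the target bulk-load the manifest files with Redshift `COPY` rather than inserting row by row. A hypothetical target-side sketch, assuming S3-staged gzipped JSONL and psycopg2; the connection string, table name, and IAM role ARN are all placeholders:

```python
import json
import sys

import psycopg2  # assumed driver; any Redshift-capable client would do

# Read Singer messages from stdin and bulk-load each BATCH manifest file
# with a Redshift COPY instead of issuing per-row INSERTs.
conn = psycopg2.connect(
    "host=example.redshift.amazonaws.com dbname=dw user=loader password=secret"
)
cur = conn.cursor()

for line in sys.stdin:
    message = json.loads(line)
    if message.get("type") != "BATCH":
        continue  # a real target also handles SCHEMA/RECORD/STATE messages
    for uri in message["manifest"]:
        cur.execute(
            f"COPY analytics.big_table FROM '{uri}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "JSON 'auto' GZIP;"
        )
conn.commit()
```

As far as I know, stage-to-S3-then-COPY is roughly how pipelinewise-target-redshift loads data today, which is part of why it's a reasonable starting point.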