Open vamsikurre opened 5 months ago
I added the issue to the destination backlog. @evantahler if I'm not wrong this is actually being worked on, can you confirm?
No. This is a redshift limitation on the size of the SQL statement that it allows (not the size of the record content). The only solution here would be to de-select some columns or split the sync into 2, producing 2 tables.
As mentioned in the comment at the line below, I think this should be destination-aware partitioning rather than fixed 10k-record chunks.
This worked when I reduced the chunk size to a lower value, so I feel the chunking has to be smarter, based on the limits of the underlying database.
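One way to make the chunking destination-aware, as suggested above, is to batch rows by the estimated size of the rendered SQL statement instead of by a fixed row count. This is only a sketch: the 16 MB figure is Redshift's documented maximum SQL statement size, but `render_values` and the helper names here are hypothetical, not Airbyte's actual connector code.

```python
# Sketch: batch rows by estimated statement size rather than a fixed
# 10k-row chunk, so each INSERT stays under Redshift's 16 MB limit.
MAX_STATEMENT_BYTES = 16 * 1024 * 1024  # Redshift max SQL statement size
SAFETY_MARGIN = 0.9                     # headroom for the INSERT prefix itself

def batch_by_statement_size(rows, render_values,
                            budget=int(MAX_STATEMENT_BYTES * SAFETY_MARGIN)):
    """Yield lists of rows whose rendered VALUES clauses fit within `budget` bytes."""
    batch, size = [], 0
    for row in rows:
        rendered = render_values(row)  # e.g. "(1, 'a', ...)" as SQL text
        row_bytes = len(rendered.encode("utf-8")) + 1  # +1 for the comma separator
        if batch and size + row_bytes > budget:
            yield batch
            batch, size = [], 0
        batch.append(row)
        size += row_bytes
    if batch:
        yield batch
```

With many wide columns each rendered row is larger, so batches shrink automatically instead of overflowing the statement limit the way a fixed 10k chunk can.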
Are you using s3-staging files to load data into redshift?
The uploading method is Standard (direct inserts).
Can you switch to S3? That is likely to work better for larger datasets.
Connector Name
destination-redshift
Connector Version
2.1.7
What step the error happened?
During the sync
Relevant information
When there are many columns, even the 10k-row partitioned insert hits the query size limit in Redshift. The full insert statement was printed multiple times in the log, and the log grew to 600 MB.
Relevant log output