alafanechere closed this issue 1 year ago
Zendesk ticket #1473 has been linked to this issue.
Comment made from Zendesk by Augustin on 2022-07-11 at 13:03:
I created an [issue](https://github.com/airbytehq/airbyte/issues/14573) on our repo for this error. Please subscribe to receive updates.
This error is also happening while trying to sync engagements in the Hubspot connector.
I encountered the same today (Airbyte 0.40.18, connector version 0.2.3). Any idea when this will be fixed 🙏
I'm on Airbyte 0.4.27 and getting the same error on the Jira connector (0.3.3).
This error just started breaking the Hubspot sync for us today :(
I am getting the same error. Any fix?
Running into this while syncing HubSpot Companies and Contacts into Redshift.
This is a limitation in Redshift and a solution is now in pre-release https://docs.aws.amazon.com/redshift/latest/dg/limitations-super.html
I am getting this error while syncing Jira Issues into Redshift.
Closing this issue as normalization is going away https://github.com/airbytehq/airbyte/issues/26028
> normalization is going away
Could you expand a bit by any chance @evantahler?
For example, we had issues with HubSpot (source) records that were flowing to Redshift (destination), and because those records were large JSON objects they'd exceed Redshift's SUPER limit (as far as I understand).
What does the end of normalization imply for the above?
The problem with large source records which can't fit in the destination still remains, regardless of normalization. We'll need to fix it more generally. We are discussing what to do about it here - https://github.com/airbytehq/airbyte/issues/28541
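To make the failure mode concrete: a record only fits in a Redshift SUPER column if its serialized form is under the 1 MB cap. The following is a minimal illustrative check (not part of Airbyte itself); the function name and the exact limit constant are assumptions for the sketch.

```python
import json

# Redshift's SUPER type caps individual values at 1 MB
# (larger limits are in pre-release; see the AWS docs linked above).
SUPER_LIMIT_BYTES = 1 * 1024 * 1024


def exceeds_super_limit(record: dict) -> bool:
    """Return True if the compact JSON serialization of `record`
    would not fit in a Redshift SUPER column."""
    serialized = json.dumps(record, separators=(",", ":")).encode("utf-8")
    return len(serialized) > SUPER_LIMIT_BYTES
```

A large HubSpot engagement with several megabytes of nested JSON would trip this check and abort the sync with the error reported above.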
Tell us about the problem you're trying to solve
Redshift's normalization generates SUPER objects whose size exceeds the 1 MB limit:
Describe the solution you’d like
Normalization should explicitly drop records larger than 1 MB, or restructure them so they fit under the limit.
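The "drop oversized records" option could be sketched roughly as below. This is a hypothetical pre-load filter, not Airbyte's actual normalization code; a real implementation might instead null out the largest fields or log the primary key of each dropped record.

```python
import json

SUPER_LIMIT_BYTES = 1 * 1024 * 1024  # Redshift SUPER size cap


def filter_oversized(records, limit=SUPER_LIMIT_BYTES):
    """Yield only records whose compact JSON serialization fits
    within `limit` bytes; silently drop the rest."""
    for record in records:
        size = len(json.dumps(record, separators=(",", ":")).encode("utf-8"))
        if size <= limit:
            yield record
```

For example, a batch containing one small record and one multi-megabyte record would load only the small one instead of failing the whole sync.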
Seeing a similar issue with on-call airbytehq/alpha-beta-issues#697
SUPER type from Redshift docs
Related forum topic