Elasticsearch has a 2GB limit on record size.
We need to add a check in the code so that we never submit a record larger than this limit.
This is an edge case that I don't expect to occur in real-world use. However, to be safe we should add a check in the code.
If an individual record is over 2GB, this check will fail and a debug-level message will be logged. The document at fault will not be added to the index.
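A minimal sketch of what such a check could look like, assuming documents are serialized to JSON before submission. All names here (`MAX_RECORD_BYTES`, `filter_oversized`) are illustrative, not from the actual codebase:

```python
# Hypothetical sketch: drop documents whose serialized size exceeds
# Elasticsearch's per-document limit, logging a debug message for each.
import json
import logging

logger = logging.getLogger(__name__)

# Elasticsearch's 2GB hard limit on record size.
MAX_RECORD_BYTES = 2 * 1024 ** 3

def filter_oversized(documents, limit=MAX_RECORD_BYTES):
    """Yield only documents under the size limit; log and skip the rest."""
    for doc in documents:
        size = len(json.dumps(doc).encode("utf-8"))
        if size > limit:
            logger.debug("Skipping document over size limit (%d bytes)", size)
            continue  # the offending document is never sent to the index
        yield doc
```

The generator would sit in front of whatever bulk-indexing call submits the batch, so oversized documents are filtered out rather than causing the whole request to fail.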
Actually closing this as a won't fix.
Via the user interface we can already set the maximum payload size that we send to Elasticsearch. This isn't exactly the same thing, but for practical use it is enough.