Open amogh-jahagirdar opened 1 week ago
cc @singhpk234
Just as a gut comment: if we just compressed them, shouldn't we get almost all the benefits we're looking for? They're just a bunch of strings, so the binary representation of all of them should be pretty compressible.
It's true the broadcast would be compressed by default via `spark.broadcast.compress`, and that would minimize the space pretty well. I think the concern is more that when we load the broadcast map on the executor side, we ultimately need to decompress all the chunks of the map. So the goal of relativization is to minimize how much space the in-memory representation of the map takes after decompression. Let me know if that makes sense.
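For context, a minimal sketch of what this looks like on the Spark side (the `loadDeleteFilePaths` helper is hypothetical): `spark.broadcast.compress` keeps the broadcast compressed on the wire and at rest, but `value()` still materializes the fully decompressed map on each executor's heap.

```java
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

SparkConf conf = new SparkConf().set("spark.broadcast.compress", "true"); // the default
JavaSparkContext sc = new JavaSparkContext(conf);

Map<String, String> deleteFilePaths = loadDeleteFilePaths(); // hypothetical loader
Broadcast<Map<String, String>> broadcast = sc.broadcast(deleteFilePaths);

// Executor side: the map is decompressed in full before any lookup,
// so the decompressed footprint is what actually matters for executor memory.
Map<String, String> onExecutor = broadcast.value();
```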
Won't it all be in memory anyway? Shouldn't Java's string interning handle the shared prefixes?
This is a follow-up to https://github.com/apache/iceberg/pull/11273.
Instead of broadcasting a map with absolute paths for data files and delete files to executors, we can shrink the memory footprint by relativizing the in-memory mapping and then, just prior to lookup on the executors, reconstructing the absolute paths for the relevant delete files.
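A minimal sketch of the relativization round-trip, assuming every data/delete file path starts with the table location (helper names are illustrative, not the actual methods in this PR):

```java
// Driver side: store only the path suffix in the broadcast map.
static String relativize(String absolutePath, String tableLocation) {
  // assumes absolutePath starts with tableLocation; a real implementation
  // needs a fallback (e.g. a marker) for paths outside the table location
  return absolutePath.substring(tableLocation.length());
}

// Executor side: rebuild the absolute path just before the lookup.
static String absolutize(String relativePath, String tableLocation) {
  return tableLocation + relativePath;
}
```

Since the table location is a single string shared by the driver and all executors, only the unique suffixes pay a per-entry memory cost.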
There are a few ways to go about relativization; in the current implementation I just did the simplest thing, which is to relativize to the table location. More sophisticated approaches could save even more of the memory consumed by paths:

- Relativize according to the data file location (requires surfacing more details from `LocationProvider`).
- Find the longest common prefix across all data/delete files in the rewritable deletes (requires a double pass over the tasks: one to identify the longest common prefix via the lexicographically smallest/largest strings, and another to actually rewrite the delete file paths); see the sketch after this list.
- Patricia tries are another possibility, though their serialized representation seems to take about the same amount of memory, and I'm not sure why that's the case.
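A sketch of the longest-common-prefix idea mentioned above: the common prefix of all paths equals the common prefix of the lexicographically smallest and largest paths, so the first pass only needs to track those two extremes (again, illustrative code, not this PR's implementation):

```java
static String longestCommonPrefix(Iterable<String> paths) {
  // pass 1: find the lexicographic extremes
  String min = null;
  String max = null;
  for (String path : paths) {
    if (min == null || path.compareTo(min) < 0) {
      min = path;
    }
    if (max == null || path.compareTo(max) > 0) {
      max = path;
    }
  }
  if (min == null) {
    return "";
  }

  // the common prefix of min and max is the common prefix of all paths
  int i = 0;
  while (i < min.length() && i < max.length() && min.charAt(i) == max.charAt(i)) {
    i++;
  }
  return min.substring(0, i); // the second pass would strip this prefix
}
```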
I'm also working on identifying whether Spark's off-heap `BytesToBytesMap` will save us even more memory, but in the meantime it made sense to at least get this improvement in the interim. This is all internal, so we can always remove it down the line if something better comes along.