There was a bug that prevented opening more than 1024 columns because it was growing the wrong buffer. Even with that fixed, we can still hit the OS limit on open file handles when a file has too many columns; in that case we fall back to deferred file tracking, which opens column files on demand. This significantly degrades performance because the file for each not-yet-open column has to be opened, flushed, and opened again.

This will do for now because the user is likely insane to have such large tables, but there is still a faster solution: further divide each horizontal-slice dump task into vertical-slice sub-tasks. That solution is non-trivial, though; it requires keeping additional information about every vertical partition of every row in memory at all times, and at the moment it's just not worth the optimization effort.
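For illustration, here is a minimal sketch of what the open-on-demand fallback can look like; all names here (DeferredColumnFiles, max_open, the col_N.bin naming) are hypothetical, not the actual implementation:

```rust
use std::collections::HashMap;
use std::fs::{File, OpenOptions};
use std::io::{self, Write};
use std::path::PathBuf;

/// Sketch of deferred file tracking: column files are opened on demand
/// and evicted once the open-handle budget is exhausted, so the dump
/// never holds more handles than the OS allows.
struct DeferredColumnFiles {
    dir: PathBuf,
    max_open: usize,            // handle budget, kept below the OS limit
    open: HashMap<usize, File>, // column index -> currently open handle
}

impl DeferredColumnFiles {
    fn new(dir: PathBuf, max_open: usize) -> Self {
        Self { dir, max_open, open: HashMap::new() }
    }

    /// Append `data` to the file for `column`, opening it on demand.
    fn write(&mut self, column: usize, data: &[u8]) -> io::Result<()> {
        if !self.open.contains_key(&column) {
            // Over budget: flush and close some other column's handle.
            // This open/flush/reopen cycle per column is exactly the
            // slow path described above.
            if self.open.len() >= self.max_open {
                let victim = *self.open.keys().next().unwrap();
                let mut file = self.open.remove(&victim).unwrap();
                file.flush()?; // dropping the File closes the handle
            }
            let file = OpenOptions::new()
                .create(true)
                .append(true) // reopening continues where we left off
                .open(self.dir.join(format!("col_{column}.bin")))?;
            self.open.insert(column, file);
        }
        self.open.get_mut(&column).unwrap().write_all(data)
    }
}
```

A real implementation would presumably pick an eviction policy such as LRU rather than an arbitrary victim, but the per-column open/flush/reopen cost is the same either way.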