Lately I've been noticing makedumpfile running on large vmcores (hundreds of GB), sometimes taking a few hours, and ending up saving only a couple of MB. The end result is that we effectively double the space used, since the newly stripped vmcore is approximately the same size as the original, and the new file defeats our built-in inode-based deduplication.
I doubt it's possible to know ahead of time how long makedumpfile will run, or how much space it will save. But we should be able to add some sort of heuristic, so that the stripped file is kept only if it saves a significant percentage of the original file size (maybe 10%?).
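For illustration, here's a minimal sketch of what I have in mind, in Python. The paths, the keep_or_discard helper, and the 10% cutoff are all hypothetical, just to show the shape of the check, not part of makedumpfile or any existing tooling:

    import os

    # Hypothetical threshold: keep the stripped vmcore only if it is
    # at least 10% smaller than the original (value is illustrative).
    MIN_SAVINGS = 0.10

    def keep_or_discard(original: str, stripped: str) -> str:
        """Return the path of the file worth keeping; delete the other."""
        orig_size = os.path.getsize(original)
        new_size = os.path.getsize(stripped)
        savings = (orig_size - new_size) / orig_size if orig_size else 0.0
        if savings >= MIN_SAVINGS:
            # Stripping paid off: drop the original, keep the smaller copy.
            os.remove(original)
            return stripped
        # Not worth doubling the space: discard the stripped copy so the
        # original keeps its inode and stays deduplicated.
        os.remove(stripped)
        return original

The key point is that the decision happens after makedumpfile finishes, since we can't predict the savings up front; we only pay the extra disk space transiently while both files exist.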