Hi, thanks for pointing this out!
The problem here is that the tar header field for (sym|hard) link targets is exactly 100 bytes in size, and it is perfectly valid to fill it completely without leaving space for a null terminator. This currently results in data from the adjacent header field being appended to the link target.
The preferable fix for the problem would be replacing the `strdup` with an `strndup` that copies at most `sizeof(hdr->linkname)` bytes and adds a null terminator if there is none.
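As a minimal, self-contained sketch of the failure mode and of why `strndup` avoids it (the struct below is a simplified stand-in for the ustar header layout, not the actual definitions used by libtar):

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for the relevant part of the ustar header:
 * the link target field is exactly 100 bytes and may be completely
 * filled, in which case it has no null terminator and runs straight
 * into the adjacent field. Names here are illustrative only. */
struct tar_header {
	char linkname[100];
	char magic[6];
};

int main(void)
{
	struct tar_header hdr;

	/* Completely fill the link target and populate the adjacent
	 * field, as a valid archive is allowed to do. */
	memset(hdr.linkname, 'a', sizeof(hdr.linkname));
	memcpy(hdr.magic, "ustar", sizeof(hdr.magic));

	/* strdup(hdr.linkname) would keep reading into hdr.magic,
	 * because there is no null terminator inside linkname. */

	/* strndup() copies at most sizeof(hdr.linkname) bytes and
	 * always null-terminates the result. */
	char *target = strndup(hdr.linkname, sizeof(hdr.linkname));
	if (target == NULL)
		return EXIT_FAILURE;

	printf("link target length: %zu\n", strlen(target)); /* prints 100 */
	free(target);
	return EXIT_SUCCESS;
}
```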
With this patch I can process the example tarball you uploaded:
0001-Fix-libtar-treatment-of-link-targets-that-fill-the-h.patch.gz
Thank you. I tested your fix on a complex directory structure containing more than 11,000 links; everything was processed correctly.
When the length of a hard link target is exactly 100 characters, tar2sqfs cannot process the link and fails with the error "No such file or directory". The error appears during the postprocessing stage, so you can archive more than 100 GB of data and only after that long process does it fail. I created a simple tar file to reproduce this bug (see attachment): err_repr.tar.gz

You can do this:
```
gzip -dc err_repr.tar.gz | tar2sqfs tst.sqfs
```
and the "No such file or directory" error appears.
tar itself unpacks this archive without any trouble.
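The attached err_repr.tar.gz is the authoritative reproducer. Purely as a sketch for readers who want to generate a comparable archive themselves, the following hypothetical program writes a regular file entry whose path is exactly 100 characters long, followed by a hard link entry whose linkname field is therefore completely filled; the entry names and the repro.tar output file are invented for this example:

```c
#include <stdio.h>
#include <string.h>

/* ustar numeric fields: zero-padded octal, NUL-terminated. */
static void set_octal(char *field, size_t len, unsigned long value)
{
	snprintf(field, len, "%0*lo", (int)(len - 1), value);
}

/* Write one 512-byte ustar header block. */
static void write_entry(FILE *fp, const char *name, char type,
                        const char *linkname)
{
	char hdr[512];
	unsigned int sum = 0;

	memset(hdr, 0, sizeof(hdr));
	memcpy(hdr, name, strlen(name));       /* name: 100-byte field   */
	set_octal(hdr + 100, 8, 0644);         /* mode                   */
	set_octal(hdr + 108, 8, 0);            /* uid                    */
	set_octal(hdr + 116, 8, 0);            /* gid                    */
	set_octal(hdr + 124, 12, 0);           /* size (0, no data)      */
	set_octal(hdr + 136, 12, 0);           /* mtime                  */
	memset(hdr + 148, ' ', 8);             /* chksum as spaces       */
	hdr[156] = type;                       /* '0' file, '1' hardlink */
	if (linkname != NULL)                  /* linkname: 100 bytes    */
		memcpy(hdr + 157, linkname, strlen(linkname));
	memcpy(hdr + 257, "ustar", 6);         /* magic                  */
	memcpy(hdr + 263, "00", 2);            /* version                */

	/* Checksum over the whole block, chksum field read as spaces. */
	for (size_t i = 0; i < sizeof(hdr); ++i)
		sum += (unsigned char)hdr[i];
	set_octal(hdr + 148, 7, sum);
	hdr[155] = ' ';

	fwrite(hdr, 1, sizeof(hdr), fp);
}

int main(void)
{
	char name[101];
	char zero[1024] = { 0 };
	FILE *fp;

	/* A path that is exactly 100 characters long. */
	memset(name, 'a', 100);
	name[100] = '\0';

	fp = fopen("repro.tar", "wb");
	if (fp == NULL)
		return 1;

	write_entry(fp, name, '0', NULL);   /* empty target file        */
	write_entry(fp, "lnk", '1', name);  /* hard link, field filled  */
	fwrite(zero, 1, sizeof(zero), fp);  /* end-of-archive blocks    */
	fclose(fp);
	return 0;
}
```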
Upd: I found a dirty workaround for this bug. Open lib/tar/read_header.c, find the relevant if-block in the decode_header function, and add one line of code after it, as sketched below.
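Purely as an illustration of the kind of one-line truncation described (the type-flag constants and the `out->link_target` field below are assumed names and may not match the actual code in read_header.c), the patched block might look roughly like this:

```c
if (hdr->typeflag == TAR_TYPE_SYMLINK || hdr->typeflag == TAR_TYPE_HARDLINK) {
	out->link_target = strdup(hdr->linkname);
	if (out->link_target == NULL)
		return -1;

	/* added workaround: drop whatever strdup() picked up from the
	 * adjacent header field when linkname had no null terminator */
	if (strlen(out->link_target) > sizeof(hdr->linkname))
		out->link_target[sizeof(hdr->linkname)] = '\0';
}
```

This truncates the copied string instead of fixing the copy itself, so the `strndup` change above is the cleaner fix.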
After compiling, it works smoothly.