
doc: pathological performance using tarfile #62944

Open 6c4a9aad-ad0e-4c85-b086-b584aa64317e opened 11 years ago

6c4a9aad-ad0e-4c85-b086-b584aa64317e commented 11 years ago
BPO 18744
Nosy @gustaebel, @bitdancer
Files
  • tarproblem.py: a script that demonstrates the pathological behavior
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


GitHub fields:
```python
assignee = None
closed_at = None
created_at =
labels = ['easy', '3.11', 'docs', 'performance']
title = 'doc: pathological performance using tarfile'
updated_at =
user = 'https://bugs.python.org/teamnoir'
```

bugs.python.org fields:
```python
activity =
actor = 'iritkatriel'
assignee = 'docs@python'
closed = False
closed_date = None
closer = None
components = ['Documentation']
creation =
creator = 'teamnoir'
dependencies = []
files = ['31303']
hgrepos = []
issue_num = 18744
keywords = ['easy']
message_count = 7.0
messages = ['195232', '195235', '195277', '195278', '195418', '195424', '195428']
nosy_count = 5.0
nosy_names = ['lars.gustaebel', 'nadeem.vawda', 'r.david.murray', 'docs@python', 'teamnoir']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'needs patch'
status = 'open'
superseder = None
type = 'performance'
url = 'https://bugs.python.org/issue18744'
versions = ['Python 3.11']
```

6c4a9aad-ad0e-4c85-b086-b584aa64317e commented 11 years ago

There's a problem with tarfile. Write a program that traverses the contents of a modestly sized tar archive. Make sure the archive is compressed, then read it with your program.

I'm finding that letting tarfile read a compressed archive directly costs me somewhere on the order of a 60x performance penalty compared to opening the file with gzip and passing the decompressed contents to tarfile. Programs that should take a few minutes are literally taking a few hours when using tarfile.

This seems stupid. The tarfile library could do the same thing I'm doing manually; in fact, I had assumed that it would, and I was surprised by the performance I was seeing. Running under the profiler showed millions of decompression calls. It's almost as though the tarfile library decompresses the entire archive for every member extraction.

Note that you can get even worse performance if you sort the member names and then extract in that order. I'm not sure whether this "should" matter, since the tar file layout is sequential.
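A minimal sketch of the workaround described above, assuming a hypothetical `archive.tar.gz`: decompress the gzip stream once into memory, then hand the uncompressed bytes to tarfile so that seeking between members is cheap.

```python
import gzip
import io
import tarfile

# Hypothetical archive path; any gzip-compressed tar archive will do.
with gzip.open("archive.tar.gz", "rb") as gz:
    data = gz.read()  # one pass over the compressed stream

# tarfile now reads plain uncompressed bytes, so member access is cheap.
with tarfile.open(fileobj=io.BytesIO(data), mode="r:") as tar:
    for member in tar.getmembers():
        if member.isfile():
            contents = tar.extractfile(member).read()
```

Note that this trades memory for time: the whole uncompressed archive must fit in RAM, which is exactly the trade-off discussed later in the thread.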

serhiy-storchaka commented 11 years ago

Could you please provide a simple script that shows the problem?

6c4a9aad-ad0e-4c85-b086-b584aa64317e commented 11 years ago

New info...

I see the degradation on most of the Linux boxes I've tried.

I see some degradation on Mac OS X 10.8.4, but it's in the acceptable range, more like 2x than 60x. That is still suspicious, but not as problematic.

6c4a9aad-ad0e-4c85-b086-b584aa64317e commented 11 years ago

Here's a script that tests for the problem.
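The attached tarproblem.py is not reproduced in the thread; the following is a rough sketch of the kind of measurement such a script might perform, timing extraction in archive order against sorted order (the archive path and the harness are assumptions, not the original script):

```python
import tarfile
import time

ARCHIVE = "archive.tar.gz"  # hypothetical gzip-compressed tar archive

def read_all(names):
    """Read every named member and return the elapsed wall time."""
    with tarfile.open(ARCHIVE, "r:gz") as tar:
        start = time.perf_counter()
        for name in names:
            f = tar.extractfile(name)
            if f is not None:  # None for directories and special members
                f.read()
        return time.perf_counter() - start

# Collect member names in their natural (on-disk) order.
with tarfile.open(ARCHIVE, "r:gz") as tar:
    names = tar.getnames()

print("archive order:", read_all(names))
print("sorted order: ", read_all(sorted(names)))
```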

serhiy-storchaka commented 11 years ago

Thank you for the script, Richard.

If you mean the performance degradation when extracting members of a tarfile in a changed order, this behavior is expected. Reading a gzip file in random order requires seeking in it, and a gzip stream is a one-way road: to seek, you have to decompress all the data between your current position (or the start of the file) and the target position. With random access you end up decompressing, on average, about a third of the tarfile for every extracted file.

The tarfile module can't do anything about this. It can't first extract all the files into memory, because the uncompressed data can be too big. It can't re-sort the list of files into natural order, because that can change the semantics (a tarfile can contain duplicates and symlinks). Just don't do this: don't extract a large number of files from a compressed tarfile in a changed order.
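For contrast, a sketch of the access pattern recommended here: iterating over the TarFile object itself yields TarInfo entries in the order they appear in the archive, so the gzip stream is decompressed exactly once (the archive name is hypothetical).

```python
import tarfile

with tarfile.open("archive.tar.gz", "r:gz") as tar:
    for member in tar:  # TarInfo objects in on-disk order
        if member.isfile():
            data = tar.extractfile(member).read()
```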

6c4a9aad-ad0e-4c85-b086-b584aa64317e commented 11 years ago

I see your point.

The alternative would be to limit the size of archive that can be extracted to the size of virtual memory, which is essentially what I'm doing manually. Either way, someone will be surprised. I'm not sure which way will result in the least surprise, since I suspect that far more people will be extracting from compressed archives than will be extracting very large archives. The failure mode with a limited archive size seems much less frequent, but also much more annoying. In comparison, the failure (and the pathological case is effectively a failure) when reading compressed archives seems much more common to me, although granted, it is not a total failure.

I think this should be mentioned in the docs because I, at least, was extremely surprised by this behavior and it cost me some time to track it down. I might suggest something along the lines of:

Be careful when working with compressed archives. In order to support the largest file sizes possible, some access patterns may result in pathological behavior that causes the original archive to be decompressed, in full, many times. You should be able to avoid this behavior if you traverse the TarInfo items in file order. You might also consider decompressing the archive first, in memory, and then handing the memory copy to tarfile for processing.

serhiy-storchaka commented 11 years ago

I think in most cases people extract archives in natural order and don't run into this failure.

But adding a warning looks reasonable.