Closed ericvsmith closed 2 days ago
If a zip file contains "pkg/foo.py" but no "pkg/" entry, it will not be possible for "pkg" to be a namespace package portion.
For a (very) brief discussion on the strategy to implement this, see: http://mail.python.org/pipermail/import-sig/2012-May/000528.html
See also test_namespace_pkgs.py ZipWithMissingDirectory.test_missing_directory which is currently marked as expectedFailure.
Here is a patch that synthesises the directory names at the point where file names are read in. The unit test now passes, and has had the expected failure removed.
Patch collaboration with Diarmuid Bourke <diarmuidbourke@gmail.com> at the EuroPython sprint.
Please see attached new patch, based on review comments.
This can significantly slow down zipimport. I think we shouldn't support such broken zip files in zipimport.
How common are such broken zip files? Like Serhiy, I'm concerned about the possible negative impact on the interpreter startup time as we try to second guess the contents of the zip file manifest.
It seems better to be explicit that we consider such zipfiles broken and they need to be regenerated with full manifests (perhaps providing a script in Tools that fixes them).
OTOH, the scan time should be short relative to the time needed to read the manifest in the first place - an appropriate microbenchmark may also be adequate to address my concerns.
I don't think such files are common: I've never seen such a file "in the wild". I created one, by accident, while testing PEP-420.
OTOH, it was surprisingly easy to create the malformed file with zipfile.
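For illustration, producing such a file takes only a few lines, because ZipFile.writestr() records exactly the name it is given and never adds parent-directory entries:

```python
import io
import zipfile

# Write a single file entry; no "pkg/" directory record is created.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("pkg/mod.py", "x = 1\n")

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())   # ['pkg/mod.py'] -- no 'pkg/' entry
```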
Why are zipfiles without entries for directories broken? When you don't care about directory permissions (such as when the zipfile won't be extracted at all) the entries for directories are not necessary. Also, AFAIK the zipfile specification (http://www.pkware.com/documents/casestudies/APPNOTE.TXT) does not require adding directory entries to the zipfile.
FWIW: the zipfiles created by py2app do not contain entries for directories at the moment. I'll probably add entries for directories in the next update to work around this issue.
Just a note: the zip files produced by the distutils and friends (sdist, bdist_dumb, eggs) do not include entries for plain directories. I would guess that this is also true for wheels at the moment, unless something was specifically done to work around this property of distutils-generated zip files. So ISTM the right thing to do is to synthesize the entries at directory read time, when they're being looped over anyway.
Reviewing the patch, there is a performance optimization possible by making a slight change to the algorithm. Currently the patch loops from the start of the string to the end, looking for path prefixes. This means that the overall cost is determined by the length of the strings and especially the average directory depth.
However, there is a significant shortcut possible: looping from the *end* of each string to the beginning, it's possible to break out of the loop if the prefix has already been seen -- thus saving (depth-1) dictionary lookups in the average case, and only looking at the characters in the base filename, unless a new directory is encountered... for a typical overhead of one unicode substring, dictionary lookup, and strrchr per zipfile directory entry. (Which is very small compared to what else is going on at that point in the process.)
To elaborate, if you have paths of the form:
x/y/a x/y/b x/y/c/d
Then when processing 'x/y/a', you would first process x/y -- it's not in the dict, add it. Then x -- not in the dict, add it. Then you go to x/y/b, your first parent is x/y again -- but since it's in the dict you skip it, and don't even bother with the x. Next you see x/y/c, which is not in the dict, so you add it, then x/y, which is, so you break out of the loop for that item.
Basically, about all that would change would be the for() loop starting at the end of the string and going to the beginning, with the loop position still representing the end of the prefix to be extracted. And the PyDict_Contains check would result in a break rather than a continue.
So, if the only concern keeping the patch from being accepted is that it adds to startup time, this approach would cut down quite a bit on the overhead for generating the path information, in cases of repeated prefixes. (And in the common cases for zipfile use on sys.path, one would expect to see a lot of common prefixes, if only for package names.)
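The reverse-scan shortcut described above can be sketched like this (hypothetical function and names; the actual patch operates on zipimport's C-level manifest dict):

```python
def implied_dirs(names):
    """Collect directory prefixes by scanning each name from the
    *end* toward the start, breaking out as soon as a prefix has
    already been seen -- so shared prefixes are only examined once."""
    seen = {}
    for name in names:
        i = name.rfind("/")
        while i > 0:
            prefix = name[:i]
            if prefix in seen:
                # All shorter prefixes of this name were recorded
                # when this prefix was first seen, so stop here.
                break
            seen[prefix] = True
            i = name.rfind("/", 0, i)
    return seen

# 'x/y' and 'x' are recorded while processing 'x/y/a';
# 'x/y/b' then breaks out immediately at 'x/y'.
print(list(implied_dirs(["x/y/a", "x/y/b", "x/y/c/d"])))
```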
The problem appears to be more general. zipimport fails for deeper hierarchies, even with directory entries.
With the supplied patch (zipimport-issue14905-2.patch) I see the following:
$ unzip -l foo.zip
Archive: foo.zip
Length Date Time Name
--------- ---------- -----
0 2013-04-03 17:28 a/b/c/foo.py
0 2013-04-03 17:34 a/
0 2013-04-03 17:34 a/b/
0 2013-04-03 17:34 a/b/c/
--------- -------
0 4 files
$ ls
foo.zip
$ PYTHONPATH=foo.zip ~/dev/cpython/python
Python 3.4.0a0 (default:3b1dbe7a2aa0+, Apr 3 2013, 17:31:54)
[GCC 4.8.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import a
>>> import a.b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'a.b'
>>>
I've raised bpo-17633 to track the issue in my last message.
zipimport has been rewritten in pure Python (bpo-25711).
bpo-34738 fixes distutils.
One version of the bug described here (and fixed in the old implementation under bpo-17633) exists in the Python implementation of zipimport:
$ unzip -l namespace1.zip
Archive: namespace1.zip
Length Date Time Name
--------- ---------- -----
0 08-13-2020 06:30 one/
0 08-13-2020 06:30 one/two/
0 08-13-2020 06:30 one/two/three.py
---------                     -------
        0                     3 files
$ unzip -l namespace2.zip
Archive:  namespace2.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  08-13-2020 06:37   alpha/beta/gamma.py
---------                     -------
0 1 file
$ PYTHONPATH=namespace1.zip:namespace2.zip ./python
Python 3.10.0a0 (heads/master:c51db0ea40, Aug 13 2020, 06:41:20)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import one
>>> import alpha
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'alpha'
>>>
In short, imports where there's no separate entry for directories in the zip file don't work.
Any opinions on whether this *is* the problem this issue is trying to track?
I stumbled onto this issue while working on https://github.com/python/importlib_resources/issues/287. In https://github.com/python/importlib_resources/commit/496acc1a0d8018c830b30a3a28826c9b101975fa, I factored out the zip files that the tests use to dynamically generate the zip fixtures so that I could extend the tests to have namespace support. I then pointed it at the namespace fixtures (https://github.com/python/importlib_resources/commit/f0e9a45c96c74c7f9364b8e5bb7aa701a1fdb04c), but was surprised when namespacedata01 wasn't importable.
I additionally encountered the issue previously in #80921.
When I was developing zipfile.Path, I found that zip files without explicit directory entries were common enough that I added support for them (https://github.com/jaraco/zipp/issues/4) and later developed performance optimizations to minimize the performance impact.
Perhaps that work can be re-purposed to give the zip importer the CompleteDirs treatment and resolve this issue.
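The core of the CompleteDirs approach (from the zipp project) can be sketched with the stdlib alone: compute the directory names implied by the file entries but absent from the manifest, and treat them as present. The function name below is hypothetical:

```python
def missing_dirs(namelist):
    """Return the directory names implied by the file entries but
    absent from the manifest -- the gap that zipp's CompleteDirs
    fills in before name lookups."""
    names = set(namelist)
    implied = set()
    for name in names:
        parts = name.rstrip("/").split("/")
        for i in range(1, len(parts)):
            dirname = "/".join(parts[:i]) + "/"
            if dirname not in names:
                implied.add(dirname)
    return implied
```

For the namespace2.zip example above, missing_dirs(["alpha/beta/gamma.py"]) reports that "alpha/" and "alpha/beta/" need to be synthesized, while a well-formed archive like namespace1.zip reports nothing.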
get_data() and adds many new tests.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
GitHub fields:
```python
assignee = None
closed_at = None
created_at =
labels = ['3.8', 'type-feature', 'library']
title = "zipimport needs to support namespace packages when no 'directory' entry exists"
updated_at =
user = 'https://github.com/ericvsmith'
```
bugs.python.org fields:
```python
activity =
actor = 'pconnell'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation =
creator = 'eric.smith'
dependencies = []
files = ['26302', '26894']
hgrepos = []
issue_num = 14905
keywords = ['patch']
message_count = 16.0
messages = ['161543', '161544', '161584', '164861', '168568', '182245', '182254', '182255', '182279', '182326', '185799', '185936', '186026', '325724', '325764', '375307']
nosy_count = 14.0
nosy_names = ['barry', 'gregory.p.smith', 'pje', 'ronaldoussoren', 'ncoghlan', 'jerub', 'eric.smith', 'Arfrever', 'eric.snow', 'serhiy.storchaka', 'jpaugh', 'pconnell', 'isoschiz', 'superluser']
pr_nums = []
priority = 'normal'
resolution = None
stage = 'needs patch'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue14905'
versions = ['Python 3.8']
```
Linked PRs