python / cpython

The Python programming language
https://www.python.org/

Freeze all modules imported during startup. #89183

Closed ericsnowcurrently closed 2 years ago

ericsnowcurrently commented 2 years ago
BPO 45020
Nosy @malemburg, @gvanrossum, @warsaw, @brettcannon, @nascheme, @rhettinger, @terryjreedy, @gpshead, @ronaldoussoren, @ncoghlan, @vstinner, @larryhastings, @tiran, @methane, @markshannon, @ericsnowcurrently, @indygreg, @lysnikolaou, @pablogsal, @miss-islington, @brandtbucher, @isidentical, @shihai1991, @FFY00, @softsol solutions
PRs
  • python/cpython#28107
  • python/cpython#28320
  • python/cpython#28335
  • python/cpython#28344
  • python/cpython#28345
  • python/cpython#28346
  • python/cpython#28375
  • python/cpython#28380
  • python/cpython#28392
  • python/cpython#28398
  • python/cpython#28410
  • python/cpython#28538
  • python/cpython#28554
  • python/cpython#28583
  • python/cpython#28590
  • python/cpython#28635
  • python/cpython#28664
  • python/cpython#28665
  • python/cpython#28655
  • python/cpython#28940
  • python/cpython#28997
  • python/cpython#29755
Dependencies
  • bpo-45186: Marshal output isn't completely deterministic.
  • bpo-45188: De-couple the Windows builds from freezing modules.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields: ```python assignee = 'https://github.com/ericsnowcurrently' closed_at = created_at = labels = ['interpreter-core', 'type-feature', '3.11'] title = 'Freeze all modules imported during startup.' updated_at = user = 'https://github.com/ericsnowcurrently' ``` bugs.python.org fields: ```python activity = actor = 'christian.heimes' assignee = 'eric.snow' closed = True closed_date = closer = 'eric.snow' components = ['Parser'] creation = creator = 'eric.snow' dependencies = ['45186', '45188'] files = [] hgrepos = [] issue_num = 45020 keywords = ['patch'] message_count = 96.0 messages = ['400370', '400371', '400372', '400373', '400374', '400381', '400386', '400394', '400421', '400422', '400427', '400447', '400449', '400450', '400454', '400455', '400459', '400460', '400461', '400469', '400507', '400508', '400510', '400629', '400636', '400638', '400639', '400660', '400664', '400665', '400666', '400667', '400675', '400766', '400769', '400808', '400855', '400856', '401027', '401040', '401734', '401740', '401805', '401811', '401813', '401814', '401848', '401865', '401870', '401905', '401911', '401917', '401918', '401919', '401921', '401954', '401966', '401988', '402017', '402020', '402070', '402080', '402100', '402101', '402103', '402113', '402116', '402118', '402119', '402151', '402356', '402463', '402464', '402476', '402587', '402609', '402629', '402633', '402634', '402898', '402993', '403024', '403025', '403255', '403256', '403323', '403324', '404111', '404164', '404257', '404344', '405002', '405203', '405244', '406143', '406948'] nosy_count = 25.0 nosy_names = ['lemburg', 'gvanrossum', 'barry', 'brett.cannon', 'nascheme', 'rhettinger', 'terry.reedy', 'gregory.p.smith', 'ronaldoussoren', 'ncoghlan', 'vstinner', 'larry', 'christian.heimes', 'christian.heimes', 'christian.heimes', 'methane', 'Mark.Shannon', 'eric.snow', 'indygreg', 'lys.nikolaou', 'pablogsal', 'miss-islington', 'brandtbucher', 'BTaskaya', 'shihai1991', 'FFY00', 'santhu_reddy12'] pr_nums = 
['28107', '28320', '28335', '28344', '28345', '28346', '28375', '28380', '28392', '28398', '28410', '28538', '28554', '28583', '28590', '28635', '28664', '28665', '28655', '28940', '28997', '29755', '29755', '29755'] priority = 'normal' resolution = 'fixed' stage = 'resolved' status = 'closed' superseder = None type = 'enhancement' url = 'https://bugs.python.org/issue45020' versions = ['Python 3.11'] ```

    ericsnowcurrently commented 2 years ago

    Currently we freeze the 3 main import-related modules into the python binary (along with one test module). This allows us to bootstrap the import machinery from Python modules. It also means we get better performance importing those modules.

    If we freeze modules that are likely to be used during execution then we get even better startup times. I'll be putting up a PR that does so, freezing all the modules that are imported during startup. This could also be done for any stdlib modules that are commonly imported.

    (also see bpo-45019 and https://github.com/faster-cpython/ideas/issues/82)
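
    The built-in/frozen/filesystem split described above is visible at runtime. A quick diagnostic sketch (not part of the proposed change) that groups the modules already imported at startup by where the import machinery got them:

```python
import sys

# Group the modules already in sys.modules by origin: "built-in"
# (C modules compiled into the binary), "frozen" (marshalled bytecode
# embedded in the binary), or anything loaded from the filesystem.
by_origin = {}
for name, mod in list(sys.modules.items()):
    spec = getattr(mod, "__spec__", None)
    origin = getattr(spec, "origin", None)
    if origin not in ("built-in", "frozen"):
        origin = "filesystem/other"
    by_origin.setdefault(origin, []).append(name)

for origin, names in sorted(by_origin.items()):
    print(f"{origin}: {len(names)} module(s)")
```

    Running this on a recent CPython shows how few modules were historically frozen compared to the built-in and filesystem groups.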

    ericsnowcurrently commented 2 years ago

    FYI, with my branch I'm getting a 15% improvement to startup for "./python -c pass".
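
    For reference, a crude way to reproduce that kind of measurement (using `sys.executable` as a stand-in for a locally built `./python`; a tool like `hyperfine` gives more rigorous numbers):

```python
import subprocess
import sys
import time

# Crude startup benchmark in the spirit of "./python -c pass":
# spawn the interpreter N times and average the wall-clock time.
N = 20
t0 = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
mean_ms = (time.perf_counter() - t0) / N * 1000
print(f"mean startup: {mean_ms:.1f} ms")
```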

    ericsnowcurrently commented 2 years ago

    I'm aware of two potentially problematic consequences of this change:

    For the former, I'm not sure there's a way around it. We may consider the inconvenience worth it in order to get the performance benefits.

    For the latter, the obvious solution is to introduce a startup hook (e.g. on the CLI) like we've talked about doing for years. (I wasn't able to find previous discussions on that topic after a quick search.)

    malemburg commented 2 years ago

    Not sure whether you are aware, but the PyRun project I'm maintaining already does this, and goes further by freezing almost the complete stdlib and statically linking most C extensions into a single binary:

    https://www.egenix.com/products/python/PyRun/

    Startup is indeed better, but not as much as you might think. You do save stat calls and can share resources across processes.

    The big time consumer is turning marshal'ed code objects back into Python objects, though. If that could be made faster by e.g. using a more efficient storage format such as one which is platform dependent, it'd be a much bigger win than the freezing approach.
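
    The cost being described is easy to observe in isolation: compile a module body once, serialize it with marshal, then time only the deserialization step (a toy measurement, not PyRun's actual workload):

```python
import marshal
import time

# Compile a smallish module body once, serialize it, and time only
# marshal.loads() -- the step identified above as the big startup cost.
source = "x = 1\ndef f(a, b):\n    return a + b\n" * 200
code = compile(source, "<demo>", "exec")
data = marshal.dumps(code)

t0 = time.perf_counter()
for _ in range(1000):
    marshal.loads(data)
elapsed = time.perf_counter() - t0
print(f"{len(data)} bytes, {elapsed * 1000:.1f} ms for 1000 loads")
```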

    ericsnowcurrently commented 2 years ago

    The big time consumer is turning marshal'ed code objects back into Python objects, though. If that could be made faster by e.g. using a more efficient storage format such as one which is platform dependent, it'd be a much bigger win than the freezing approach.

    That's something Guido has been exploring. :)

    See: https://github.com/faster-cpython/ideas/issues/84 (and others)

    ericsnowcurrently commented 2 years ago

    For the latter, the obvious solution is to introduce a startup hook

    I'm not sure why I said "obvious". Sorry about that.

    gvanrossum commented 2 years ago

    I noticed nedbat un-nosied himself. Probably he didn't realize you were calling him out because it's possible this would affect coverage.py?

    gvanrossum commented 2 years ago

    The big time consumer is turning marshal'ed code objects back into Python objects, though. If that could be made faster by e.g. using a more efficient storage format such as one which is platform dependent, it'd be a much bigger win than the freezing approach.

    I've explored a couple of different approaches here (see the issue Eric linked to and a few adjacent ones) and this is a tricky issue. Marshal seems to be pretty darn efficient as a storage format, because it's very compact compared to the Python objects it creates. My final (?) proposal is creating static data structures embedded in the code that just *are* Python objects. Unfortunately on Windows the C compiler balks at this -- the C++ compiler handles it just fine, but it's not clear that we are able to statically link C++ object files into Python without depending on a lot of other C++ infrastructure. (In GCC and Clang this is apparently a language extension.)

    ericsnowcurrently commented 2 years ago

    @Guido, @Mark Shannon, do you recall the other issue where folks objected to that other patch, due to local changes to source files not being reflected?

    Also, one thought that comes to mind is that we could ignore the frozen modules when in a dev environment (and opt in to using the frozen modules via an environment variable).

    markshannon commented 2 years ago

    I don't recall, but...

    You can't modify any builtin modules. Freezing modules effectively makes them builtin from a user's perspective. There are plenty of modules that can't be modified:

    >>> sys.builtin_module_names
    ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tokenize', '_tracemalloc', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time', 'xxsubtype')

    I don't see why adding a few more modules to that list would be a problem.

    Was the objection to freezing *all* modules, not just the core ones?

    gvanrossum commented 2 years ago

    We should ask Neil S. for the issue where Larry introduced this. That might have some discussion.

    But if I had to guess, it’s confusing that you can see *Python* source that you can’t edit (or rather, where editing doesn’t get reflected in the next Python run, unless you also recompile it).

    I know that occasionally, in a debug session, I add a print statement to a stdlib module. --Guido (mobile)

    ericsnowcurrently commented 2 years ago

    Neil, do you recall the story here?

    gvanrossum commented 2 years ago

    The plot thickens. By searching my extensive GMail archives for Jeethu Rao I found an email from Sept. 14 to python-dev by Larry Hastings titled "Store startup modules as C structures for 20%+ startup speed improvement?"

    It references an issue and a PR:

    https://bugs.python.org/issue34690
    https://github.com/python/cpython/pull/9320

    Here's a link to the python-dev thread:

    https://mail.python.org/pipermail/python-dev/2018-September/155188.html

    There's a lot of discussion there. I'll try to dig through it.

    gvanrossum commented 2 years ago

    Adding Larry in case he remembers more color. (Larry: the key question here is whether some version of this (like the one I've been working on, or a simpler one that Eric has prepared) is viable, given that any time someone works on one of the frozen or deep-frozen stdlib modules, they will have to run make (with the default target) to rebuild the Python binary with the deep-frozen files.)

    (Honestly if I were working on any of those modules, I'd just comment out some lines from Eric's freeze_modules.py script and do one rebuild until I was totally satisfied with my work. Either way it's a suboptimal experience for people contributing to those modules. But we stand to gain a ~20% startup time improvement.)

    PS. The top comment links to Eric's work.

    larryhastings commented 2 years ago

    Since nobody's said so in so many words (so far in this thread anyway): the prototype from Jeethu Rao in 2018 was a different technology than what Eric is doing. The "Programs/_freeze_importlib.c" Eric's playing with essentially inlines a .pyc file as C static data. The Jeethu Rao approach is more advanced: instead of serializing the objects, it stores the objects from the .pyc file as pre-initialized C static objects. So it saves the un-marshalling step, and therefore should be faster. To import the module you still need to execute the module body code object though--that seems unavoidable.

    The python-dev thread covers nearly everything I remember about this. The one thing I guess I never mentioned is that building and working with the prototype was frightful; it had both Python code and C code, and it was fragile and hard to get working. My hunch at the time was that it shouldn't be so fragile; it should be possible to write the converter in Python: read in .pyc file, generate .c file. It might have to make assumptions about the internal structure of the CPython objects it instantiates as C static data, but since we'd ship the tool with CPython this should be only a minor maintenance issue.

    In experimenting with the prototype, I observed that simply calling stat() to ensure the frozen .py file hadn't changed on disk lost us about half the performance win from this approach. I'm not much of a systems programmer, but I wonder if there are (system-proprietary?) library calls one could make to get the stat info for all files in a single directory all at once that might be faster overall. (Of course, caching this information at startup might make for a crappy experience for people who edit Lib/*.py files while the interpreter is running.)
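
    On the "stat all files in a single directory at once" question: the closest portable answer in today's stdlib is `os.scandir()`, which on Windows returns stat-like data straight from the directory scan (on Linux a per-entry `stat()` is still needed, though it benefits from a warm dentry cache). A sketch:

```python
import os
import time

# os.scandir() yields directory entries with cached stat-type info where
# the OS provides it. The target here is the stdlib's Lib/ directory,
# located via the os module itself.
target = os.path.dirname(os.__file__)

t0 = time.perf_counter()
mtimes = {
    entry.name: entry.stat().st_mtime
    for entry in os.scandir(target)
    if entry.is_file()
}
elapsed = time.perf_counter() - t0
print(f"stat'ed {len(mtimes)} files in {elapsed * 1000:.2f} ms")
```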

    One more observation about the prototype: it doesn't know how to deal with any mutable types. marshal.c can deal with list, dict, and set. Does this matter? ISTM the tree of objects under a code object will never have a reference to one of these mutable objects, so it's probably already fine.

    Not sure what else I can tell you. It gave us a measurable improvement in startup time, but it seemed fragile, and it was annoying to work with/on, so after hacking on it for a week (at the 2018 core dev sprint in Redmond WA) I put it aside and moved on to other projects.

    larryhastings commented 2 years ago

    There should be a boolean flag that enables/disables cached copies of .py files from Lib/. You should be able to turn it off with either an environment variable or a command-line option, and when it's off it skips all the internal cached stuff and uses the normal .py / .pyc machinery.
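
    For what it's worth, CPython 3.11 ended up growing exactly this kind of switch, `-X frozen_modules=on|off` (with a default chosen automatically depending on whether the interpreter looks like a development build). A small demo; pre-3.11 interpreters silently ignore unknown `-X` options, so at worst this prints a filesystem path twice:

```python
import subprocess
import sys

# Run the interpreter with the flag both ways and report where "os"
# comes from: a filesystem path, or "frozen" on a 3.11+ interpreter
# with frozen modules enabled.
for mode in ("off", "on"):
    out = subprocess.run(
        [sys.executable, "-X", f"frozen_modules={mode}", "-c",
         "import os; print(os.__spec__.origin)"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"frozen_modules={mode}: {out}")
```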

    With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    As for changes to the build process: the most analogous thing we have is probably Argument Clinic. For what it's worth, Clinic hasn't been very well integrated into the CPython build process. There's a pseudotarget that runs it for you in the Makefile, but it's only ever run manually, and I'm not sure there's *any* build automation for Windows developers. AFAIK it hasn't really been a problem. But then I'm not sure this is a very good analogy--the workflow for making Clinic changes is very different from people hacking on Lib/*.py.

    It might be sensible to add a mechanism that checks whether or not the pre-cached modules are current. Store a hash for each cached module and check that they all match. This could then be part of the release process, run from a GitHub hook, etc.

    gvanrossum commented 2 years ago

    Since nobody's said so in so many words (so far in this thread anyway): the prototype from Jeethu Rao in 2018 was a different technology than what Eric is doing. The "Programs/_freeze_importlib.c" Eric's playing with essentially inlines a .pyc file as C static data. The Jeethu Rao approach is more advanced: instead of serializing the objects, it stores the objects from the .pyc file as pre-initialized C static objects. So it saves the un-marshalling step, and therefore should be faster. To import the module you still need to execute the module body code object though--that seems unavoidable.

    Yes, I know. We're discussing two separate ideas -- Eric's approach, which is doing the same we're doing for importlib for more stdlib modules; and "my" approach, dubbed "deep-freeze", which is similar to Jeethu's approach (details in https://github.com/faster-cpython/ideas/issues/84).

    What the two approaches have in common is that they require rebuilding the python binary whenever you edit any of the changed modules. I heard somewhere (I'm sorry, I honestly don't recall who said it first, possibly Eric himself) that Jeethu's approach was rejected because of that.

    FWIW in my attempts to time this, it looks like the perf benefits of Eric's approach are close to those of deep-freezing. And deep-freezing causes much more bloat of the source code and of the resulting binary. (At runtime the binary size is made up by matching heap savings, but to some people binary size is important too.)

    The python-dev thread covers nearly everything I remember about this. The one thing I guess I never mentioned is that building and working with the prototype was frightful; it had both Python code and C code, and it was fragile and hard to get working. My hunch at the time was that it shouldn't be so fragile; it should be possible to write the converter in Python: read in .pyc file, generate .c file. It might have to make assumptions about the internal structure of the CPython objects it instantiates as C static data, but since we'd ship the tool with CPython this should be only a minor maintenance issue.

    Deep-freezing doesn't seem frightful to work with, to me at least. :-) Maybe the foundational work by Eric (e.g. generating sections of Makefile.pre.in) has helped.

    I don't understand entirely why Jeethu's prototype had part written in C. I never ran it so I don't know what the generated code looked like, but I have a feeling that for objects that don't reference other objects, it would generate a byte array containing the exact contents of the object structure (which it would get from constructing the object in memory and copying the bytes) which was then put together with the object header (containing the refcount and type) and cast to (PyObject *).

    In contrast, for deep-freeze I just reverse engineered what the structures look like and wrote a Python script to generate C code for an initialized instance of those structures. You can look at some examples here: https://github.com/gvanrossum/cpython/blob/codegen/Python/codegen__collections_abc.c . It's verbose but the C compiler handles it just fine (C compilers have evolved to handle *very* large generated programs).

    In experimenting with the prototype, I observed that simply calling stat() to ensure the frozen .py file hadn't changed on disk lost us about half the performance win from this approach. I'm not much of a systems programmer, but I wonder if there are (system-proprietary?) library calls one could make to get the stat info for all files in a single directory all at once that might be faster overall. (Of course, caching this information at startup might make for a crappy experience for people who edit Lib/*.py files while the interpreter is running.)

    I think the only solution here was hinted at in the python-dev thread from 2018: have a command-line flag to turn it on or off (e.g. -X deepfreeze=1/0) and have a policy for what the default for that flag should be (e.g. on by default in production builds, off by default in developer builds -- anything that doesn't use --enable-optimizations).

    One more observation about the prototype: it doesn't know how to deal with any mutable types. marshal.c can deal with list, dict, and set. Does this matter? ISTM the tree of objects under a code object will never have a reference to one of these mutable objects, so it's probably already fine.

    Correct, marshal supports things that you will never see in a code object. Perhaps the reason is that when marshal was invented, it wasn't so clear that code objects should be immutable -- that realization came later, when Greg Stein proposed making them ROM-able. That didn't work out, but the notion that code objects should be strictly immutable (to the Python user, at least) was born and is now ingrained.

    Not sure what else I can tell you. It gave us a measurable improvement in startup time, but it seemed fragile, and it was annoying to work with/on, so after hacking on it for a week (at the 2018 core dev sprint in Redmond WA) I put it aside and moved on to other projects.

    I'm not so quick to give up. I do believe I have seen similar startup time improvements. But Eric's version (i.e. this issue) is nearly as good, and the binary bloat is much less -- marshal is way more compact than in-memory objects.

    (Second message)

    There should be a boolean flag that enables/disables cached copies of .py files from Lib/. You should be able to turn it off with either an environment variable or a command-line option, and when it's off it skips all the internal cached stuff and uses the normal .py / .pyc machinery.

    Yeah.

    With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    *All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive. In fact, Eric's approach freezes everything in the encodings package, which turns out to be a lot of files and a lot of code (lots of simple data tables expressed in code), and I found that for basic startup time, it's best not to deep-freeze the encodings module except for __init__.py, aliases.py and utf_8.py.

    As for changes to the build process: the most analogous thing we have is probably Argument Clinic. For what it's worth, Clinic hasn't been very well integrated into the CPython build process. There's a pseudotarget that runs it for you in the Makefile, but it's only ever run manually, and I'm not sure there's *any* build automation for Windows developers. AFAIK it hasn't really been a problem. But then I'm not sure this is a very good analogy--the workflow for making Clinic changes is very different from people hacking on Lib/*.py.

    I think we've got reasonably good automation for both Eric's approach and the deep-freeze approach -- all you need to do is run "make" when you've edited one of the (deep-)frozen modules.

    It might be sensible to add a mechanism that checks whether or not the pre-cached modules are current. Store a hash for each cached module and check that they all match. This could then be part of the release process, run from a GitHub hook, etc.

    I think the automation that Eric developed is already good enough. (He even generates Windows project files.) See https://github.com/python/cpython/pull/27980 .

    larryhastings commented 2 years ago

    What the two approaches have in common is that they require rebuilding the python binary whenever you edit any of the changed modules. I heard somewhere (I'm sorry, I honestly don't recall who said it first, possibly Eric himself) that Jeethu's approach was rejected because of that.

    My dim recollection was that Jeethu's approach wasn't explicitly rejected, more that the community was more "conflicted" than "strongly interested", so I lost interest, and nobody else followed up.

    I don't understand entirely why Jeethu's prototype had part written in C.

    My theory: it's easier to serialize C objects from C. It's maybe even slightly helpful? But it made building a pain. And yeah it just doesn't seem necessary. The code generator will be tied to the C representation no matter how you do it, so you might as well write it in a nice high-level language.

    I never ran it so I don't know what the generated code looked like, [...]

    You can see an example of Jeethu's serialized objects here:

    https://raw.githubusercontent.com/python/cpython/267c93d61db9292921229fafd895b5ff9740b759/Python/frozenmodules.c

    Yours is generally more readable because you're using the new named structure initializers syntax. Though Jeethu's code is using some symbolic constants (e.g. PyUnicode_1BYTE_KIND) where you're just printing the actual value.

    > With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    *All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive.

    I did say "all the .py files automatically read in at startup". In current trunk, there are 32 modules in sys.modules at startup (when run non-interactively), and by my count 13 of those are written in Python.

    If we go with Eric's approach, that means we'd turn those .pyc files into static data. My quick experiment suggests that'd be less than 300k. On my 64-bit Linux system, a default build of current trunk (configure && make -j) yields a 23mb python executable, and a 44mb libpython3.11.a. If I build without -g, they are 4.3mb and 7mb respectively. So this speedup would add another 2.5% to the size of a stripped build.
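
    A quick sanity check of that arithmetic (the sizes are the numbers reported above, taken as given):

```python
# Back-of-envelope check of the size estimate, in MB.
frozen_data_mb = 0.3             # ~300 KB of marshalled startup modules
stripped_build_mb = 4.3 + 7.0    # stripped executable + libpython3.11.a
share = frozen_data_mb / stripped_build_mb
print(f"{share:.1%}")  # → 2.7%, in the ballpark of the ~2.5% above
```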

    If even that 300k was a concern, the marshal approach would also permit us to compile all the deep-frozen modules into a separate shared library and unload it after we're done.

    I don't know what the runtime impact of "deep-freeze" is, but it seems like it'd be pretty minimal. You're essentially storing these objects in C static data instead of the heap, which should be about the same. Maybe it'd mean the code objects for the module bodies would stick around longer than they otherwise would? But that doesn't seem like it'd add that much overhead.

    It's interesting to think about applying these techniques to the entire standard library, but as you suggest that would probably be wasteful. On the other hand: if we made a viable tool that could consume some arbitrary set of .py files and produce a C file, and said C file could then be compiled into a shared library, end users could enjoy this speedup over the subset of the standard library their program used, and perhaps even their own source tree(s).

    nascheme commented 2 years ago

    [Larry]

    The one thing I guess I never mentioned is that building and working with the prototype was frightful; it had both Python code and C code, and it was fragile and hard to get working.

    I took Larry's PR and did a fair amount of cleanup on it to make the build less painful and fragile. My branch is fairly easy to re-build. The major downsides remaining are that you couldn't update .py files and have them used (static ones take priority) and the generated C code is quite large.

    I didn't make any attempt to work on the serializer, other than to make it work with an alpha version of Python 3.10.

    https://github.com/nascheme/cpython/tree/static_frozen

    It was good enough to pass nearly(?) all tests and I did some profiling. It helped reduce startup time quite a bit.

    malemburg commented 2 years ago

    On 28.08.2021 06:06, Guido van Rossum wrote:

    > With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    *All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive. In fact, Eric's approach freezes everything in the encodings package, which turns out to be a lot of files and a lot of code (lots of simple data tables expressed in code), and I found that for basic startup time, it's best not to deep-freeze the encodings module except for __init__.py, aliases.py and utf_8.py.

    Eric's approach, as I understand it, is pretty much what PyRun does. It freezes almost the entire stdlib. The main aim was to save space and create a Python runtime with very few files for easy installation and shipment of products written in Python.

    For Python 3.8 (I haven't ported it to more recent Python versions yet), the uncompressed stripped binary is 15MB. UPX compressed, it's only 5MB:

    -rwxr-xr-x 1 lemburg lemburg  15M May 19 15:26 pyrun3.8
    -rwxr-xr-x 1 lemburg lemburg  32M Aug 26  2020 pyrun3.8-debug
    -rwxr-xr-x 1 lemburg lemburg 5.0M May 19 15:26 pyrun3.8-upx

    There's no bloat, since you don't need the .py/.pyc files for the stdlib anymore. In fact, you save quite a bit of disk space compared to a full Python installation and additionally benefit from the memory mapping the OS does for sharing access to the marshal'ed byte code between processes.

    That said, some things don't work with such an approach, e.g. a few packages include additional data files which they expect to find on disk. Since those are not available anymore, they fail.

    For PyRun I have patched some of those packages to include the data in form of Python modules instead, so that it gets frozen as well, e.g. the Python grammar files.

    Whether this is a good approach for Python in general is a different question, though. PyRun is created on top of the existing released Python distribution, so it doesn't optimize for being able to work with the frozen code. In fact, early versions did not even have a REPL, since the main point was to run a single released app.

    236914ab-5504-4492-add8-6907a1140f5c commented 2 years ago

    When I investigated freezing the standard library for PyOxidizer, I ran into a rash of problems. The frozen importer doesn't behave like PathFinder. It doesn't (didn't?) set some common module-level attributes that are documented by the importer "specification" to be set, and this failed a handful of tests and led to runtime issues or breakage in 3rd party packages (such as random packages looking for a __file__ on a common stdlib module).

    Also, when I last looked at the CPython source, the frozen importer performed a linear scan of its indexed C array, performing strcmp() on each entry until it found what it was looking for. So adding hundreds of modules could add enough overhead to justify a more efficient lookup algorithm. (PyOxidizer uses Rust's HashMap to index modules by name.)
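
    The difference is easy to model in a few lines (a toy comparison of the two lookup strategies, not a benchmark of the importer itself):

```python
import timeit

# Toy model of the two strategies: a linear scan with string comparison
# (what the frozen importer did) vs. a hash table, over a few hundred
# module names.
names = [f"mod{i}" for i in range(400)]
table = {name: i for i, name in enumerate(names)}
target = "mod399"  # worst case for the linear scan

linear = timeit.timeit(lambda: names.index(target), number=10_000)
hashed = timeit.timeit(lambda: table[target], number=10_000)
print(f"linear: {linear:.4f}s  hash: {hashed:.4f}s")
```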

    I fully support more aggressive usage of frozen modules in the standard library to speed up interpreter startup. However, if you want to ship this as enabled by default, from my experience with PyOxidizer, I highly recommend:

    236914ab-5504-4492-add8-6907a1140f5c commented 2 years ago

    Oh, PyOxidizer also ran into more general issues with the frozen importer in that it broke various importlib APIs. e.g. because the frozen importer only supports bytecode, you can't use .__loader__.get_source() to obtain the source of a module. This makes tracebacks more opaque and breaks legitimate API consumers relying on these importlib interfaces.

    The fundamental limitations with the frozen importer are why I implemented my own meta path importer (implemented in pure Rust), which is more fully featured, like the PathFinder importer that most people rely on today. That importer is available on PyPI (https://pypi.org/project/oxidized-importer/) and has its own API to facilitate PyOxidizer-like functionality (https://pyoxidizer.readthedocs.io/en/stable/oxidized_importer.html) if anyone wants to experiment with it.

    gvanrossum commented 2 years ago

    Gregory, thanks for sharing your experience!

    I guess freezing the entire stdlib instead of just a smattering of modules (like we do here) exacerbated the problems in your case.

    Builtin modules (such as sys or time) don't have a __file__ attribute either, and nobody has ever complained about this (that I know of). I wonder how far our backwards compatibility guarantee should go -- would this mean we cannot turn any stdlib module written in Python into one written in C (or Rust :-)?

    It would be more serious if standard tests fail, but I haven't seen any evidence -- the tests all seem to pass for Eric's PR.

    Now, if there are stdlib modules that reference their own __file__ and we want to freeze those, we should switch those to using ResourceReader or importlib.resources of course.
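
    The importlib.resources route looks like this (using the stdlib `json` package purely as a stand-in for a module that ships a data file; `resources.files()` needs Python 3.9+):

```python
from importlib import resources

# Instead of deriving a path from __file__, ask the import system for
# the resource; this works for filesystem, zip, and resource-aware
# frozen imports alike.
text = resources.files("json").joinpath("__init__.py").read_text()
print(text.splitlines()[0])
```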

    I agree that we should shore up the frozen importer -- probably in a separate PR though. (@Eric: do you think this is worth its own bpo issue?)

    I also noticed the linear scan (and it's being called up to 5 times for modules that *aren't* frozen :-). strcmp is *very* fast (isn't it a compiler intrinsic?), but perhaps we should time it and if it seems a problem we could sort the array and do a form of bisection. (A slight problem is that there's an API to mutate the list by changing a pointer, so we may have to detect when the pointer has changed and recompute the size by doing a scan for the sentinel, once.)

    Unfortunately I don't think we're yet in a world where we can accept any dependencies on Rust for CPython itself, so we would have to rewrite your example implementations in C if we wanted to use them.

    ericsnowcurrently commented 2 years ago

    On Fri, Aug 27, 2021 at 6:29 PM Guido van Rossum report@bugs.python.org wrote:

    The plot thickens. By searching my extensive GMail archives for Jeethu Rao I found an email from Sept. 14 to python-dev by Larry Hastings titled "Store startup modules as C structures for 20%+ startup speed improvement?"

    Thanks for finding that, Guido!

    On Fri, Aug 27, 2021 at 6:37 PM Guido van Rossum <report@bugs.python.org> wrote:

    Either way it's a suboptimal experience for people contributing to those modules. But we stand to gain a ~20% startup time improvement.

    Agreed, and I think a solution shouldn't be too hard to reach.

    On Fri, Aug 27, 2021 at 7:48 PM Larry Hastings <report@bugs.python.org> wrote:

    In experimenting with the prototype, I observed that simply calling stat() to ensure the frozen .py file hadn't changed on disk lost us about half the performance win from this approach.

    Yeah, this is an approach others had suggested and I'd considered. We have other solutions available that don't have that penalty.
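
    Larry's observation is easy to reproduce: even a warm-cache stat() costs on the order of a microsecond, which adds up across the dozens of modules imported at startup. A rough, machine-dependent measurement (the file checked here is just a stand-in for a frozen .py's on-disk counterpart):

```python
import os
import sys
import timeit

# Time a repeated stat() on a file that is certainly present.
n = 10_000
secs = timeit.timeit(lambda: os.stat(sys.executable), number=n)
print(f"{secs / n * 1e6:.2f} us per stat() call")
```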

    On Fri, Aug 27, 2021 at 8:08 PM Larry Hastings <report@bugs.python.org> wrote:

    There should be a boolean flag that enables/disables cached copies of .py files from Lib/. You should be able to turn it off with either an environment variable or a command-line option, and when it's off it skips all the internal cached stuff and uses the normal .py / .pyc machinery.

    With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    Yeah, something along these lines should be good enough.

    [snip] But then I'm not sure this is a very good analogy--the workflow for making Clinic changes is very different from people hacking on Lib/*.py.

    Agreed.

    On Fri, Aug 27, 2021 at 10:06 PM Guido van Rossum <report@bugs.python.org> wrote:

    [snip] FWIW in my attempts to time this, it looks like the perf benefits of Eric's approach are close to those of deep-freezing. And deep-freezing causes much more bloat of the source code and of the resulting binary.

    The question of freeze vs deep-freeze (i.e. is deep-freeze better enough) is one we can discuss separately, and your point here is probably the fundamental center of that discussion. However, I don't think it has a lot of bearing on the change proposed in this issue.

    [snip] I think the only solution here was hinted at in the python-dev thread from 2018: have a command-line flag to turn it on or off (e.g. -X deepfreeze=1/0) and have a policy for what the default for that flag should be (e.g. on by default in production builds, off by default in developer builds -- anything that doesn't use --enable-optimizations).

    Agreed.

    [snip] it wasn't so clear that code objects should be immutable -- that realization came later, when Greg Stein proposed making them ROM-able. That didn't work out, but the notion that code objects should be strictly immutable (to the python user, at least) was born

    This sounds like an interesting story. Do you have any mailing list links handy? (Otherwise I can search the archives.)

    In fact, Eric's approach freezes everything in the encodings package, which turns out to be a lot of files and a lot of code (lots of simple data tables expressed in code), and I found that for basic startup time, it's best not to deep-freeze the encodings module except for __init__.py, aliases.py and utf_8.py.

    Yeah, this is something to consider. FWIW, in my testing, dropping encodings.* from the list of frozen modules reduced the performance gains (from 20 ms to 21 ms).

    -eric

    ericsnowcurrently commented 2 years ago

    On Fri, Aug 27, 2021 at 11:14 PM Larry Hastings <report@bugs.python.org> wrote:

    [snip] On the other hand: if we made a viable tool that could consume some arbitrary set of .py files and produce a C file, and said C file could then be compiled into a shared library, end users could enjoy this speedup over the subset of the standard library their program used, and perhaps even their own source tree(s).

    Yeah, that would be interesting to investigate.

    On Sat, Aug 28, 2021 at 5:17 AM Marc-Andre Lemburg <report@bugs.python.org> wrote:

    Eric's approach, as I understand it, is pretty much what PyRun does. [further details]

    It's reassuring to hear that the approach is known to be viable. :)

    In fact, you save quite a bit of disk space compared to a full Python installation and additionally benefit from the memory mapping the OS does for sharing access to the marshal'ed byte code between processes.

    That's a good point.

    That said, some things don't work with such an approach, e.g. a few packages include additional data files which they expect to find on disk. Since those are not available anymore, they fail.

    For PyRun I have patched some of those packages to include the data in form of Python modules instead, so that it gets frozen as well, e.g. the Python grammar files.

    For stdlib modules it wouldn't be a big problem to set __file__ on frozen modules. Would that be enough to solve the problem?

    On Sat, Aug 28, 2021 at 5:41 PM Gregory Szorc <report@bugs.python.org> wrote:

    When I investigated freezing the standard library for PyOxidizer, I ran into a rash of problems. The frozen importer doesn't behave like PathFinder. It doesn't (didn't?) set some common module-level attributes

    This is mostly fixable for stdlib modules. Which attributes would need to be added? Are there other missing behaviors?

    Also, when I last looked at the CPython source, the frozen importer performed a linear scan of its indexed C array performing strcmp() on each entry until it found what it was looking for. So adding hundreds of modules could result in sufficient overhead and justify using a more efficient lookup algorithm. (PyOxidizer uses Rust's HashMap to index modules by name.)

    Yeah, we noticed this too. I wasn't sure it was something to worry about at first because we're not freezing the entire stdlib. We're freezing on the order of 10, plus all the (80+) encoding modules. I figured we could look at an alternative to that linear search afterward if it made sense.

    • Make sure you run unit tests against the frozen modules. If you don't do this, subtle differences in how the different importers behave will lead to problems.

    We'll do what we already do with importlib: run the tests against both the frozen and the source modules. Thanks for the reminder to do this though!

    On Sat, Aug 28, 2021 at 5:53 PM Gregory Szorc <report@bugs.python.org> wrote:

    Oh, PyOxidizer also ran into more general issues with the frozen importer in that it broke various importlib APIs. e.g. because the frozen importer only supports bytecode, you can't use .__loader__.get_source() to obtain the source of a module. This makes tracebacks more opaque and breaks legitimate API consumers relying on these importlib interfaces.

    Good point. Supporting more of the FileLoader API on the frozen loader is something to look into, at least for stdlib modules.
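
    The gap Gregory describes is visible directly from the import machinery. Since only marshalled bytecode is frozen, the frozen loader has no source to hand back (`zipimport` is used here because it ships frozen in recent CPython builds):

```python
from importlib.machinery import FrozenImporter

# The frozen loader carries only bytecode, so get_source() has
# nothing to return for a frozen module.
print(FrozenImporter.get_source("zipimport"))  # None
```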

    The fundamental limitations with the frozen importer are why I implemented my own meta path importer (implemented in pure Rust), which is more fully featured, like the PathFinder importer that most people rely on today. That importer is available on PyPI (https://pypi.org/project/oxidized-importer/) and has its own API to facilitate PyOxidizer-like functionality (https://pyoxidizer.readthedocs.io/en/stable/oxidized_importer.html) if anyone wants to experiment with it.

    Awesome! I'll take a look.

    On Sat, Aug 28, 2021 at 6:14 PM Guido van Rossum <report@bugs.python.org> wrote:

    I agree that we should shore up the frozen importer -- probably in a separate PR though. (@Eric: do you think this is worth its own bpo issue?)

    Yeah.

    -eric

    ericsnowcurrently commented 2 years ago

    At this point, here are the open questions I'm seeing:

    + The editing-stdlib-.py-files problem:

    + Compatibility:

    + Penalty for too many frozen modules:

    FWIW, I think the ideal mechanism for a dev build will be to opt in to using frozen modules (instead of the source modules). Otherwise it is too easy for the unaware contributor to waste substantial time figuring out why their changes are not getting used.

    Consequently, here's my order of preference for ignoring frozen modules:

    1. use Py_DEBUG as an opt-out flag (if we think contributors are editing stdlib modules on a build without Py_DEBUG then that isn't good enough)
    2. automatically skip frozen modules if it's a dev build, with an explicit configure flag to opt in to frozen modules
    ericsnowcurrently commented 2 years ago
    • tricks to inject hooks ASAP (e.g. coverage.py swaps the encodings module) may lose their entry point

    FWIW, I asked Ned Batchelder about this and he said this approach ("fullcoverage" [1]) was added to support running coverage on the stdlib. It doesn't affect other users of coverage.py. He didn't have any info on where this is used currently, though I'm pretty sure we do run coverage in CI. Furthermore, the devguide talks about running coverage.py on the stdlib and initially indicates that modules imported during startup are not covered. [2] However, it does have a section talking about using "fullcoverage" to cover startup modules. [3]

    I expect that any solution we make for contributors editing stdlib source files will resolve the potential issue for coverage.py.

    [1] https://github.com/nedbat/coveragepy/tree/master/coverage/fullcoverage [2] https://devguide.python.org/coverage/?highlight=coverage#measuring-coverage [3] https://devguide.python.org/coverage/?highlight=coverage#coverage-results-for-modules-imported-early-on

    236914ab-5504-4492-add8-6907a1140f5c commented 2 years ago

    For stdlib modules it wouldn't be a big problem to set __file__ on frozen modules. Would that be enough to solve the problem?

    What do you set __file__ to? Do you still materialize the .py[c] files on disk and set it to that? (This would make the most sense.) If you support setting __file__ on a frozen module, how does this work at the C level? A stdlib module would want a different __file__ from a 3rd party module using the frozen module API. This would seemingly require enhancements to the frozen modules C API or some kind of hackery only available to stdlib frozen modules.

    > When I investigated freezing the standard library for PyOxidizer, I ran into a rash of problems. The frozen importer doesn't behave like PathFinder. It doesn't (didn't?) set some common module-level attributes

    This is mostly fixable for stdlib modules. Which attributes would need to be added? Are there other missing behaviors?

    It was a few years ago and I can't recall specifics. I just remember that after encountering a couple unrelated limitations with the frozen importer I decided it was far too primitive to work as a general importer and decided I'd have to roll my own.

    Good point. Supporting more of the FileLoader API on the frozen loader is something to look into, at least for stdlib modules.

    > I agree that we should shore up the frozen importer -- probably in a separate PR though. (@Eric: do you think this is worth its own bpo issue?)

    Yeah.

    I have some observations about the implications of this. I typed up a long comment but then realized someone would probably complain about me distracting from the technical parts of this issue. Which forum is most appropriate for this less technical commentary? (I want to lay out the case for building out an official importer far more advanced than frozen importer.)

    gvanrossum commented 2 years ago

    FWIW, I asked Ned Batchelder about this and he said this approach ("fullcoverage" [1]) was added to support running coverage on the stdlib. [...]

    The docs you pointed out in [3] (where it talks about a "horrible hack you should never use" :-) should be amended with something explaining that "you need to comment out the line "<encodings.*>" from frozen_module.py for this to work, else the frozen version of the encodings module will take priority over the imposter from "fullcoverage"."

    gvanrossum commented 2 years ago

    [Gregory Szorc]

    What do you set __file__ to? [...]

    Exactly. I think it should not be set, just like it's not set for builtin modules.

    I have some observations about the implications of this. I typed up a long comment but then realized someone would probably complain about me distracting from the technical parts of this issue. Which forum is most appropriate for this less technical commentary? (I want to lay out the case for building out an official importer far more advanced than frozen importer.)

    That seems to be something for python-dev. (There used to be an "import-sig" but the last mention I have from it is about that list shutting down for lack of traffic.) Or if you're still looking for more brainstorming you could try python-ideas first.

    gvanrossum commented 2 years ago

    At this point, here are the open questions I'm seeing:

    • The editing-stdlib-.py-files problem: [...]

    • Compatibility: [...]

    • Penalty for too many frozen modules: [...]

    FWIW, I think the ideal mechanism for a dev build will be to opt in to using frozen modules (instead of the source modules). Otherwise it is too easy for the unaware contributor to waste substantial time figuring out why their changes are not getting used.

    Agreed. I don't care much about people (even myself) editing installed modules. But I care a lot about people who do a git checkout and build from source, then edit modules and test them without doing an install. (Personally I never install what I build from source except to test the installation process.)

    Consequently, here's my order of preference for ignoring frozen modules:

    1. use Py_DEBUG as an opt-out flag (if we think contributors are editing stdlib modules on a build without Py_DEBUG then that isn't good enough)
    2. automatically skip frozen modules if it's a dev build, with an explicit configure flag to opt in to frozen modules.

    I propose to only opt in by default in **PGO builds**. After all what we're doing is another extreme optimization.

    It should always be possible to opt in using some -X flag (e.g. to debug the freeze import loader) and it should also always be possible to opt *out* (for those cases where you want to edit an installed stdlib module in-place in anger).

    I don't know how the -X mechanism works exactly but probably we could make the spelling

    python -X freeze=on|off

    with a dynamic default.

    gvanrossum commented 2 years ago

    FWIW, I'd be okay with doing the -X flag in a separate PR.

    ericsnowcurrently commented 2 years ago

    On Mon, Aug 30, 2021 at 2:22 PM Guido van Rossum <report@bugs.python.org> wrote:

    I propose to only opt in by default in **PGO builds**. After all what we're doing is another extreme optimization.

    It should always be possible to opt in using some -X flag (e.g. to debug the freeze import loader) and it should also always be possible to opt *out* (for those cases where you want to edit an installed stdlib module in-place in anger).

    I don't know how the -X mechanism works exactly but probably we could make the spelling

    python -X freeze=on|off

    with a dynamic default.

    +1 to all that

    brettcannon commented 2 years ago

    set __file__ (and __path__) on frozen modules?

    See https://bugs.python.org/issue21736

    malemburg commented 2 years ago

    On 31.08.2021 20:14, Brett Cannon wrote:

    Brett Cannon <brett@python.org> added the comment:

    > set __file__ (and __path__) on frozen modules?

    See https://bugs.python.org/issue21736

    The patch on that ticket is straight from PyRun, where the __file__ location is set in a way which signals that the file does not exist, but instead is baked into the executable:

    >>> import os
    >>> os.__file__
    '<pyrun>/os.py'

    Not doing this breaks too many tests in the test suite for no good reason, which is why I mentioned "practicality beats purity" in the ticket.
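
    Whether or not `__file__` is set, the fact that a module is frozen is already visible from its spec; `zipimport` (frozen in CPython since 3.8) makes a convenient check:

```python
import importlib.util

# Frozen modules get their spec from FrozenImporter rather than
# PathFinder, and the spec records that origin.
spec = importlib.util.find_spec("zipimport")
print(spec.origin)  # "frozen"
```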

    methane commented 2 years ago

    I don't want all frozen header files to be committed in git repository. Can't we just build them during build process?

    ericsnowcurrently commented 2 years ago

    On Tue, Aug 31, 2021 at 10:05 PM Inada Naoki <report@bugs.python.org> wrote:

    I don't want all frozen header files to be committed in git repository. Can't we just build them during build process?

    That's a good point (and an interesting one). Only two of the frozen modules are necessary (_frozen_importlib and _frozen_importlib_external, to bootstrap the import system). So for those two it makes sense to have them in the git repository.

    For all the rest it isn't necessary. The only advantage is that contributors don't have to think about them and they will be guaranteed to be there. However, if someone clones the repo they have to build Python, so the frozen modules will get created anyway at that point.

    So I'm fine with not committing all those modules. This will require that all those files be added to the .gitignore file. (I'll update my PR accordingly.)

    -eric

    ericsnowcurrently commented 2 years ago

    On Tue, Aug 31, 2021 at 12:14 PM Brett Cannon <report@bugs.python.org> wrote:

    > set __file__ (and __path__) on frozen modules?

    See https://bugs.python.org/issue21736

    Great! I'll take a look.

    ncoghlan commented 2 years ago

    For the module metadata problem: one potential approach to that for "designed to be frozen" stdlib modules is to set the values directly in the module code, rather than trying to set them automatically in the frozen import machinery.

    It should also be possible to delete the implicitly created metadata fields and use module level dynamic attribute retrieval to find the stdlib source code for tracebacks and introspection purposes without incurring any start up costs.
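
    Nick's lazy-metadata idea can be sketched with a PEP 562 module-level `__getattr__`. The module name and path below are placeholders, and a dynamically created module stands in for a real frozen stdlib module so the snippet is self-contained:

```python
import types

def _lazy_getattr(name):
    # Compute rarely-used metadata on demand instead of storing it
    # eagerly at import time; the path below is a placeholder.
    if name == "__file__":
        return "<frozen demo>/demo.py"
    raise AttributeError(name)

demo = types.ModuleType("demo")
demo.__getattr__ = _lazy_getattr  # PEP 562 fallback for failed lookups

print(demo.__file__)  # "<frozen demo>/demo.py", computed on first access
```

    In a real "designed to be frozen" module, `__getattr__` would simply be defined at the top level of the module's own code, so the lookup cost is paid only when a traceback or introspection tool asks for the attribute.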

    malemburg commented 2 years ago

    FWIW, I've not found the importer for frozen modules to be lacking features. When using frozen modules, you don't expect to see source code, so the whole part about finding source code is not really relevant for that use case.

    The only lacking part I found regarding frozen modules is support for these in pkgutil.py. But that's easy to add:

    --- /home/lemburg/egenix/projects/PyRun/Python-3.8.0/Lib/pkgutil.py 2019-10-14 1
    +++ ./Lib/pkgutil.py 2019-11-17 11:36:38.404752218 +0100
    @@ -315,20 +315,27 @@
                 return self.etc[2]==imp.PKG_DIRECTORY

         def get_code(self, fullname=None):
    +        # eGenix PyRun needs pkgutil to also work for frozen modules,
    +        # since pkgutil is used by the runpy module, which is needed
    +        # to implement the -m command line switch.
    +        if self.code is not None:
    +            return self.code
             fullname = self._fix_name(fullname)
    -        if self.code is None:
    -            mod_type = self.etc[2]
    -            if mod_type==imp.PY_SOURCE:
    -                source = self.get_source(fullname)
    -                self.code = compile(source, self.filename, 'exec')
    -            elif mod_type==imp.PY_COMPILED:
    -                self._reopen()
    -                try:
    -                    self.code = read_code(self.file)
    -                finally:
    -                    self.file.close()
    -            elif mod_type==imp.PKG_DIRECTORY:
    -                self.code = self._get_delegate().get_code()
    +        mod_type = self.etc[2]
    +        if mod_type == imp.PY_FROZEN:
    +            self.code = imp.get_frozen_object(fullname)
    +            return self.code
    +        elif mod_type==imp.PY_SOURCE:
    +            source = self.get_source(fullname)
    +            self.code = compile(source, self.filename, 'exec')
    +        elif mod_type==imp.PY_COMPILED:
    +            self._reopen()
    +            try:
    +                self.code = read_code(self.file)
    +            finally:
    +                self.file.close()
    +        elif mod_type==imp.PKG_DIRECTORY:
    +            self.code = self._get_delegate().get_code()
             return self.code
         def get_source(self, fullname=None):
    gvanrossum commented 2 years ago

    If you reduce the number of modules being frozen you could probably manage to land this (or most of it) before tackling those other issues.

    ericsnowcurrently commented 2 years ago

    On Mon, Sep 13, 2021 at 2:59 PM Guido van Rossum <report@bugs.python.org> wrote:

    If you reduce the number of modules being frozen you could probably manage to land this (or most of it) before tackling those other issues.

    Yeah, that's what I'm doing. :)

    ericsnowcurrently commented 2 years ago

    New changeset a65c86889e208dddb26a7ebe7840c24edbcca775 by Eric Snow in branch 'main': bpo-45020: Add -X frozen_modules=[on|off] to explicitly control use of frozen modules. (gh-28320) https://github.com/python/cpython/commit/a65c86889e208dddb26a7ebe7840c24edbcca775

    terryjreedy commented 2 years ago

    New changeset 369bf949ccbb689cd4638b29b4c0c12db79b927c by Terry Jan Reedy in branch 'main': bpo-45020: Don't test IDLE with frozen module. (GH-28344) https://github.com/python/cpython/commit/369bf949ccbb689cd4638b29b4c0c12db79b927c

    miss-islington commented 2 years ago

    New changeset 8a9396cf1d9e1ce558841095e1ce0d3c23b7a8aa by Miss Islington (bot) in branch '3.10': bpo-45020: Don't test IDLE with frozen module. (GH-28344) https://github.com/python/cpython/commit/8a9396cf1d9e1ce558841095e1ce0d3c23b7a8aa

    miss-islington commented 2 years ago

    New changeset f71b86e0ae194613d235086755c6a44266978be1 by Miss Islington (bot) in branch '3.9': bpo-45020: Don't test IDLE with frozen module. (GH-28344) https://github.com/python/cpython/commit/f71b86e0ae194613d235086755c6a44266978be1

    ericsnowcurrently commented 2 years ago

    New changeset cbeb81971057d6c382f45ecce92df2b204d4106a by Eric Snow in branch 'main': bpo-45020: Freeze some of the modules imported during startup. (gh-28335) https://github.com/python/cpython/commit/cbeb81971057d6c382f45ecce92df2b204d4106a

    ericsnowcurrently commented 2 years ago

    At this point the fundamental work is done. Here are some follow-up tasks to wrap up this issue:

    Other related follow-up tasks:

    gvanrossum commented 2 years ago

    I would move "default to "on" (except if actually running out of the source tree)" to the "maybe" category. I left a few comments in other deps. I think we should start by turning this on by default in PGO builds.

    Separately, I encourage you to collect reliable performance numbers. It would be nice to see a dip on speed.python.org for this benchmark:

    https://speed.python.org/timeline/#/?exe=12&ben=python_startup&env=1&revs=50&equid=off&quarts=on&extr=on

    (but that won't show up until we turn this on by default for PGO builds).

    ericsnowcurrently commented 2 years ago

    On Wed, Sep 15, 2021 at 12:03 PM Guido van Rossum <report@bugs.python.org> wrote:

    I would move "default to "on" (except if actually running out of the source tree)" to the "maybe" category. I left a few comments in other deps. I think we should start by turning this on by default in PGO builds.

    Sounds good.

    Separately, I encourage you to collect reliable performance numbers. It would be nice to see a dip on speed.python.org for this benchmark:

    I'll do that.