python / cpython


sys.settrace dramatic slowdown in 3.12 #107674

Open nedbat opened 11 months ago

nedbat commented 11 months ago

Bug report


A bug report in coverage (https://github.com/nedbat/coveragepy/issues/1665) is reduced here to a pure do-nothing trace function for sys.settrace.
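For reference, the do-nothing trace function is just a callable with the (frame, event, arg) signature that sys.settrace expects, returning itself so that line events keep firing; a minimal sketch (not the exact code in ndindex.tests.justdoit) looks like this:

import sys

def do_nothing_trace(frame, event, arg):
    # Receives 'call', 'line', 'return', and 'exception' events and does no work;
    # returning itself keeps local (line) tracing enabled for each frame.
    return do_nothing_trace

sys.settrace(do_nothing_trace)
# ... run the workload here ...
sys.settrace(None)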

Reproduction:

% git clone https://github.com/nedbat/ndindex

% cd ndindex

% python3.11 -VV
Python 3.11.4 (main, Jun  7 2023, 08:42:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]

% python3.11 -m venv v311

% ./v311/bin/pip install numpy

% python3.12 -VV
Python 3.12.0b4 (main, Jul 11 2023, 20:38:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]

% python3.12 -m venv v312

% ./v312/bin/pip install --pre --extra-index https://pypi.anaconda.org/scientific-python-nightly-wheels/simple numpy

% # Run the code without the trace function.
% for x in 1 2 3; time ./v311/bin/python -m ndindex.tests.justdoit notrace
./v311/bin/python -m ndindex.tests.justdoit notrace  7.05s user 0.37s system 108% cpu 6.813 total
./v311/bin/python -m ndindex.tests.justdoit notrace  7.04s user 0.34s system 109% cpu 6.724 total
./v311/bin/python -m ndindex.tests.justdoit notrace  7.00s user 0.35s system 109% cpu 6.696 total

% # 3.12 is slightly faster without the trace function.
% for x in 1 2 3; time ./v312/bin/python -m ndindex.tests.justdoit notrace
./v312/bin/python -m ndindex.tests.justdoit notrace  6.56s user 0.30s system 106% cpu 6.422 total
./v312/bin/python -m ndindex.tests.justdoit notrace  6.48s user 0.31s system 106% cpu 6.359 total
./v312/bin/python -m ndindex.tests.justdoit notrace  6.38s user 0.28s system 107% cpu 6.217 total

% # 3.11 with tracing is about 3x slower.
% for x in 1 2 3; time ./v311/bin/python -m ndindex.tests.justdoit trace
./v311/bin/python -m ndindex.tests.justdoit trace  22.64s user 0.51s system 101% cpu 22.772 total
./v311/bin/python -m ndindex.tests.justdoit trace  21.92s user 0.46s system 101% cpu 21.979 total
./v311/bin/python -m ndindex.tests.justdoit trace  21.55s user 0.35s system 102% cpu 21.379 total

% # 3.12 with tracing is 7x slower.
% for x in 1 2 3; time ./v312/bin/python -m ndindex.tests.justdoit trace
./v312/bin/python -m ndindex.tests.justdoit trace  49.47s user 0.40s system 100% cpu 49.676 total
./v312/bin/python -m ndindex.tests.justdoit trace  49.53s user 0.39s system 100% cpu 49.784 total
./v312/bin/python -m ndindex.tests.justdoit trace  50.44s user 0.38s system 100% cpu 50.739 total

I don't know if the unusual numpy build has something to do with this.


neonene commented 11 months ago

This issue can be seen starting from commit 411b1692811b2ecac59cb0df0f920861c7cf179a, with a regular numpy-1.25.2 build.

AlexWaygood commented 11 months ago

cc. @markshannon

nedbat commented 9 months ago

@markshannon Is there any news about this?

nedbat commented 8 months ago

@iritkatriel perhaps you have some idea?

iritkatriel commented 8 months ago

@iritkatriel perhaps you have some idea?

Did https://github.com/python/cpython/pull/107780 help?

gaogaotiantian commented 5 months ago

Hi @nedbat, I circled back to this issue. If you are still interested, could you try the patch in #114986 and see if the result improves?

nedbat commented 5 months ago

@gaogaotiantian Thanks. I tried #114986, and it didn't run faster. I used the test suite of the https://github.com/Quansight-Labs/ndindex repo:

3.11: 160s
3.12: 255s
3.13 main: 248s
pr114986: 245s

I wish I had better news.

gaogaotiantian commented 5 months ago

Oh, did you test it with an empty trace function?

nedbat commented 5 months ago

I will do more tests, but the ultimate goal is to perform better in real-world scenarios. Here is more data from a subset of the test suite (-k test_chunking), which looks much more promising. It's hard to get consistent numbers, but this looks good. I would merge this change.

Python version                         no tracing    Python trace    C trace
3.11.8                                 1.19s         7.39s           2.46s
3.12.2                                 1.12s         11.57s          3.71s
3.13.0a3+ heads/main:a95b1a56bb        1.14s         10.37s          3.89s
3.13.0a3+ heads/pr/114986:cb09c55758   1.14s         6.48s           2.91s

gaogaotiantian commented 5 months ago

Thank you for the test. Yes, the PR only improves the performance of calling the actual trace function - which is normally the major cost for tracing tools. Good to know it helps with the more real-life benchmark. Unfortunately, due to the change in the tracing mechanism, the overhead of sys.settrace will always be larger than it was with the old approach - I don't see a quick solution to that.

markshannon commented 4 months ago

Now that the eval_breaker is part of the thread state, here's an alternative approach that might work.

tstate->tracing only needs one bit, so use the low bit in the version part of the eval breaker. For non-tracing functions we expect to always see the low bit set to 0. For tracing functions we expect to always see the low bit set to 1.

If the code and global versions differ, do the following:

I think this is correct, but I've not drawn up a full state diagram.
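A rough Python model of the low-bit packing idea (illustration only, not CPython source):

TRACING_BIT = 1  # low bit of the version word carries the tracing flag

def make_version(counter, tracing):
    # Keep the real version counter in the upper bits and the tracing
    # flag in bit 0, so a single word encodes both.
    return (counter << 1) | (TRACING_BIT if tracing else 0)

def versions_match(code_version, global_version):
    # One equality check now detects both a stale version and a
    # tracing/non-tracing mismatch; any difference forces the slow path.
    return code_version == global_version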

encukou commented 4 months ago

#114986 caused the refleak tests (./python -m test -R 3:3 test_frame) to fail.

If not fixed in 24 hours, I'll revert the commit to unblock the buildbots. (I hope I'll have time to investigate and come up with a proper fix, but I can't promise that.)

gaogaotiantian commented 4 months ago

@encukou See response in #116098

rtb-zla-karma commented 3 months ago

Hi.

I wanted to post this in https://github.com/nedbat/coveragepy/issues/1665, but it seems the investigation moved here. I hope my code example helps the investigation rather than cluttering the issue.

I have also been chasing the "tests run 2 times slower" dragon. I use coverage through pytest, and besides some test cases running slower in general, I've had a consistent +43 second "freeze" in the collection phase for a specific import. I will paste some log excerpts and then minimal code to reproduce some of them. Look at the log times in brackets to find the problematic steps.

pytest_debug.txt - this comes from the example code below:

[00:00:00.265179]       found cached rewritten pyc for /path/test_jieba_2/test/test_fut.py [assertion]
[00:00:00.265185]       early skip of rewriting module: fut [assertion]
[00:00:00.265190]       early skip of rewriting module: jieba [assertion]
[00:00:00.265195]       early skip of rewriting module: jieba.finalseg [assertion]
[00:00:00.265200]       early skip of rewriting module: jieba._compat [assertion]
[00:00:44.966856] ============================= test session starts ==============================
[00:00:44.966888] platform linux -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0 -- /venv/path/bin/python
[00:00:44.966893] using: pytest-8.1.1
[00:00:44.966896] setuptools registered plugins:
[00:00:44.966899]   pytest-cov-4.1.0 at /venv/path/lib/python3.12/site-packages/pytest_cov/plugin.py
[00:00:44.966902] rootdir: /path/test_jieba_2
[00:00:44.966905] plugins: cov-4.1.0
[00:00:44.966908] collected 1 item
[00:00:44.966910] 
[00:00:44.978316] test/test_fut.py .                                                       [100%]      early skip of rewriting module: pkg_resources [assertion]
[00:00:44.978354]       early skip of rewriting module: jieba.finalseg.prob_start [assertion]
[00:00:44.978360]       early skip of rewriting module: jieba.finalseg.prob_trans [assertion]
[00:00:44.978363]       early skip of rewriting module: jieba.finalseg.prob_emit [assertion]
[00:00:44.978366]         pytest_pycollect_makeitem [hook]
[00:00:44.978369]             collector: <Module test_fut.py>
[00:00:44.978372]             name: @py_builtins
[00:00:44.978375]             obj: <module 'builtins' (built-in)>
[00:00:44.978378]         finish pytest_pycollect_makeitem --> None [hook]

The early skip of rewriting module: jieba... lines overlap with the test session starts output in a weird way in this minimal example, but in the full project I'm working on it consistently froze at early skip of rewriting module: jieba.finalseg.prob_emit [assertion].

This one is from the project. Let's call it python_debug.txt (the command to produce it is at the end). I'm pasting it because in the minimal example it freezes in a different place, but the jieba import is actually the culprit, specifically the jieba_pyfast.finalseg.prob_emit module.

[00:00:09.073173] # possible namespace for /venv/path/lib/python3.12/site-packages/google
[00:00:09.073279] # possible namespace for /venv/path/lib/python3.12/site-packages/google
[00:00:09.099473] import 'pkg_resources' # <_frozen_importlib_external.SourceFileLoader object at 0x7fa3cf936d20>
[00:00:09.099509] import 'jieba_pyfast._compat' # <_frozen_importlib_external.SourceFileLoader object at 0x7fa3cf936960>
[00:00:09.100432] # /venv/path/lib/python3.12/site-packages/jieba_pyfast/finalseg/__pycache__/prob_emit.cpython-312.pyc matches /venv/path/lib/python3.12/site-packages/jieba_pyfast/finalseg/prob_emit.py
[00:00:09.103991] # code object from '/venv/path/lib/python3.12/site-packages/jieba_pyfast/finalseg/__pycache__/prob_emit.cpython-312.pyc'
[00:00:58.043337] import 'jieba_pyfast.finalseg.prob_emit' # <_frozen_importlib_external.SourceFileLoader object at 0x7fa3cf883170>
[00:00:58.043791] # /venv/path/lib/python3.12/site-packages/jieba_pyfast/finalseg/__pycache__/prob_start.cpython-312.pyc matches /venv/path/lib/python3.12/site-packages/jieba_pyfast/finalseg/prob_start.py

Files and commands to reproduce

tree .

.
├── fut.py
├── requirements.txt
└── test
    ├── __init__.py
    └── test_fut.py
# fut.py
import jieba

def a():
    pass

# requirements.txt
#
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
#    pip-compile --allow-unsafe --output-file=requirements.txt requirements.in
#
coverage[toml]==7.4.1
    # via pytest-cov
iniconfig==2.0.0
    # via pytest
jieba==0.42.1
    # via -r requirements.in
packaging==23.2
    # via pytest
pluggy==1.4.0
    # via pytest
pytest==8.1.1
    # via
    #   -r requirements.in
    #   pytest-cov
pytest-cov==4.1.0
    # via -r requirements.in
# test/test_fut.py
import fut

def test_stuff():
    assert True

System info:

When run through pytest, the tests take less than a second. With pytest --cov=. they take ~44 seconds. Removing import jieba makes the tests run fast again. To get the output lines above I ran (ts is from the moreutils Ubuntu package):

gaogaotiantian commented 3 months ago

Ah, this is interesting. The problem is that the file contains a long const dict that spans 30k+ lines. Each line executes to produce part of the dict and triggers a callback into the trace function, which slows the program down significantly.

This is not a bug, and it's almost expected - it's just the unfortunate shape of the code that makes it look weird.
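To see the effect in isolation, here is a rough self-contained sketch (not the actual jieba data; the size is made up) that executes a many-line dict literal once without and once with a do-nothing trace function:

import sys
import time

def null_trace(frame, event, arg):
    return null_trace  # keep line tracing enabled in every frame

# A dict literal with one entry per line, similar in shape to the generated data file.
source = "D = {\n" + "\n".join(f"    {i}: {i}," for i in range(30_000)) + "\n}\n"
code = compile(source, "<generated>", "exec")

for tracing in (False, True):
    sys.settrace(null_trace if tracing else None)
    start = time.perf_counter()
    exec(code, {})
    sys.settrace(None)
    print(f"tracing={tracing}: {time.perf_counter() - start:.3f}s")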

matthew-mcallister commented 3 months ago

I believe this performance regression makes debugging practically unusable in 3.12+ when using certain dependencies. This repro takes two and a half minutes to run on a MacBook Pro, because the geocoder module imports many large dictionaries.

# Run with python -m pdb repro.py
# Type n to execute the import statement with an active breakpoint
from phonenumbers import geocoder, parse
print(geocoder.description_for_number(parse('+13215555555'), 'en'))

Perhaps the library can be updated to use the fast C-based JSON scanner for performance, but that's a significant change to make to work around a language performance regression.
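For illustration, the kind of change that would mean is roughly this (a sketch with a hypothetical file name, not phonenumbers' actual layout); the C-accelerated json parser executes only a couple of traced Python lines instead of one line per dictionary entry:

import json

# Instead of importing a generated module whose dict literal spans
# thousands of lines (one traced 'line' event each), ship the same
# data as JSON and parse it in a single traced statement.
with open("geocode_data.json", encoding="utf-8") as f:  # hypothetical file
    GEOCODE_DATA = json.load(f)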

gaogaotiantian commented 3 months ago

Just to confirm, what command did you use after python -m pdb repro.py? pdb should come up before the import happens, and if you use c it should not add any overhead.

matthew-mcallister commented 3 months ago

Good point. I typed n to step one line, which sets a breakpoint.

gaogaotiantian commented 3 months ago

Yes, n will trigger this. That's a known issue in pdb (stepping and breakpoints are costly), and some changes in 3.12 made it worse, especially for a Python file with a huge chunk of code in a single namespace (it's fine if the code is spread across multiple functions). I am investigating solutions that could make this better, but I don't think we can backport any optimizations to 3.12.

matthew-mcallister commented 3 months ago

Performance improvements in any future version would be great. That said, it is conceivable to me that generating very large Python files will become a library anti-pattern due to this.

gaogaotiantian commented 3 months ago

I came up with a patch to fix the long dict issue. I'm not sure if you are interested in trying the patch in #118127. If you are, let me know the result.

gaogaotiantian commented 1 month ago

Hi @nedbat, have you had a chance to test the performance of coverage on 3.13b against 3.12? I made a few optimizations and I hope the performance issue is relieved a bit.

nedbat commented 1 month ago

I'm having a hard time re-creating the scenarios I used before :(

gaogaotiantian commented 1 month ago

That's okay. Does coveragepy have nightly validations for 3.13b? Is there a basic timing benchmark that we can refer to? Just a ballpark would be helpful.

asmeurer commented 1 month ago

You can try running the ndindex tests, which are what I originally noticed this issue with in https://github.com/nedbat/coveragepy/issues/1665. Just clone

https://github.com/Quansight-Labs/ndindex

then install the packages in requirements-dev.txt and run

time pytest

(coverage is enabled automatically). On my computer the tests take 2:48 with Python 3.11.9 and 4:21 with Python 3.12.0 (both with coverage 7.5.2). If you want to run the tests with coverage disabled, remove the coverage options from pytest.ini. Note that there is a degree of variation in the test times due to the use of Hypothesis; you might want to set a fixed --hypothesis-seed to avoid this.

nedbat commented 1 month ago

Coverage.py has an overnight job that runs its test suite on nightly builds of Python, including 3.13 and 3.14 now: https://github.com/nedbat/coveragepy/actions/runs/9265205433 There isn't a timing test in there though.

gaogaotiantian commented 1 month ago

Okay, thanks @nedbat and @asmeurer. I was hoping for a quick, existing benchmark, but it's okay that we don't have one for now. I'm just wondering if we should close this issue now that we have implemented some optimizations.

nedbat commented 1 month ago

I have a benchmarking framework, but it's fragile. I will try to patch it up later this week.