Open Avasam opened 5 months ago
Using your script, I see less of a difference on Linux with CPython:
3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
test_list_comprehension 3.4234498779999853
test_tuple_from_list_comprehension 3.7473174160000156
test_tuple_from_generator_comprehension 4.684340659999975
test_unpack_generator_comprehension 4.938177772000017
3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
test_list_comprehension 5.881427110000001
test_tuple_from_list_comprehension 6.184221321999999
test_tuple_from_generator_comprehension 6.949574859000002
test_unpack_generator_comprehension 7.213964431000001
3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0]
test_list_comprehension 5.159386299999994
test_tuple_from_list_comprehension 5.68906659999999
test_tuple_from_generator_comprehension 6.2835374
test_unpack_generator_comprehension 6.521585700000003
3.6.9 (default, Dec 8 2021, 21:08:43) [GCC 8.4.0]
test_list_comprehension 5.325352799999997
test_tuple_from_list_comprehension 5.670514699999998
test_tuple_from_generator_comprehension 6.860152300000003
test_unpack_generator_comprehension 7.0944126999999995
2.7.17 (default, Feb 27 2021, 15:10:58) [GCC 7.5.0]
('test_list_comprehension', 5.888335943222046)
('test_tuple_from_list_comprehension', 6.135804891586304)
('test_tuple_from_generator_comprehension', 6.965441942214966)
But much more of a difference with PyPy:
3.9.18 (7.3.15+dfsg-1build3, Apr 01 2024, 03:12:48) [PyPy 7.3.15 with GCC 13.2.0]
test_list_comprehension 0.2822986920000403
test_tuple_from_list_comprehension 0.40187594900010026
test_tuple_from_generator_comprehension 0.9802658359999441
test_unpack_generator_comprehension 1.0730282659999375
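For context, the benchmark script being discussed presumably looks something like this sketch. The function names are taken from the output above; the workload (doubling 1000 ints) and the iteration count are my own assumptions, since the original script is not shown:

```python
import sys
import timeit

DATA = list(range(1000))

def test_list_comprehension():
    return [x * 2 for x in DATA]

def test_tuple_from_list_comprehension():
    # Build a list first, then convert it to a tuple.
    return tuple([x * 2 for x in DATA])

def test_tuple_from_generator_comprehension():
    # Feed a generator expression directly to tuple().
    return tuple(x * 2 for x in DATA)

def test_unpack_generator_comprehension():
    # Unpack a generator expression into a tuple display.
    return (*(x * 2 for x in DATA),)

if __name__ == "__main__":
    print(sys.version)
    for func in (
        test_list_comprehension,
        test_tuple_from_list_comprehension,
        test_tuple_from_generator_comprehension,
        test_unpack_generator_comprehension,
    ):
        print(func.__name__, timeit.timeit(func, number=2_000))
```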
I think the tuple-from-list-comprehension approach will lead to roughly 2x higher peak memory consumption, won't it?
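The peak-memory question can be checked with tracemalloc; here is a sketch (the data size is an arbitrary assumption). During the tuple([...]) conversion, the intermediate list and the final tuple coexist, so the container memory roughly doubles, whereas the generator path only ever holds the growing tuple:

```python
import tracemalloc

N = 100_000  # arbitrary size for illustration

# Peak memory when converting a list comprehension to a tuple:
# the list and the tuple briefly coexist.
tracemalloc.start()
t = tuple([x for x in range(N)])
_, peak_list = tracemalloc.get_traced_memory()
del t
tracemalloc.stop()

# Peak memory when building the tuple directly from a generator:
# only the (resizing) tuple is held.
tracemalloc.start()
t = tuple(x for x in range(N))
_, peak_gen = tracemalloc.get_traced_memory()
del t
tracemalloc.stop()

print(f"peak via list comprehension: {peak_list:,} bytes")
print(f"peak via generator:          {peak_gen:,} bytes")
```

Note that for large element objects the elements themselves dominate either way; the 2x factor applies to the container storage.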
I was recently working on some bits of code where most of my data had to be "readonly" (so I'm using immutable types like frozen dataclasses, frozensets, tuples, etc.) but also using plenty of comprehensions. That made me wonder, since there's no "tuple comprehension" in Python, how I should be writing this code. I did a bit of performance testing, and here are the results:
Unsurprisingly, the difference is even greater in 3.12, where list comprehensions are inlined (PEP 709).
Because of the tuple call, the generator is immediately iterated, so you get no benefit from its "laziness". This is probably true for other stdlib collections that don't have a comprehension syntax; tuple is just the only one I can think of at the moment.

For this reason, I'm asking for a performance rule with an autofix that transforms code like this:
into
Which, unless I'm missing something, is free performance whilst staying readable and pythonic.
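Based on the benchmark names above, the transformation in question is presumably along these lines (the snippet and variable names are my own illustration):

```python
items = ["a", "bb", "ccc"]

# Before: tuple built from a generator expression.
lengths = tuple(len(s) for s in items)

# After: tuple built from a list comprehension, which the timings
# above show is faster on both CPython and PyPy.
lengths_fast = tuple([len(s) for s in items])

assert lengths == lengths_fast == (1, 2, 3)
```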
It seems this would fit well in the flake8-comprehensions or refurb family of rules.