dosisod / refurb

A tool for refurbishing and modernizing Python codebases
GNU General Public License v3.0

Enhancement: Performance Rules #28

Open Goldziher opened 2 years ago

Goldziher commented 2 years ago

Hi again @dosisod,

One thing I'd like to suggest is adding performance checks.

  1. Certain kinds of loops will benefit from being chained using itertools.chain, and in general itertools offers better performance in many cases.
  2. Another is using the builtins filter, map, and reduce, which can be significantly faster than plain Python code because of Python's native optimizations.
  3. Using sets for lookups (in checks), which are dramatically faster than list/tuple lookups.
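
A quick timing sketch of the lookup difference mentioned in point 3 (illustrative data only; actual numbers vary by machine and collection size):

from timeit import timeit

haystack_list = list(range(100))
haystack_set = set(haystack_list)

# Membership tests: the list is scanned linearly, the set is a hash lookup.
print("list:", timeit("99 in haystack", globals={"haystack": haystack_list}))
print("set: ", timeit("99 in haystack", globals={"haystack": haystack_set}))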
dosisod commented 2 years ago

I think that we could do something like this. There is a tool you might find interesting called perflint, which does (somewhat similar) optimization checks, but none of the specific ones that you mention from what I can tell.

Some questions that I have:

  1. Would this be a valid use case for Refurb to pick up on?
import itertools

for item in (*list1, *list2, *list3):
    ...  # do something

# vs

for item in itertools.chain(list1, list2, list3):
    ...  # do something

Also, if for whatever reason your code depends on a side effect of the iterators being exhausted up front (first example), switching to lazy iteration (second example) could introduce a bug. This is unlikely, but it is something we would have to mention in the explainer for that check.
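
A minimal sketch (with hypothetical generators) of that side-effect difference: unpacking into a tuple exhausts every iterator before the loop starts, while chain consumes them lazily:

import itertools

def gen(label):
    print("consuming", label)
    yield label

# Eager: both generators are exhausted while building the tuple,
# so both "consuming" lines print before the loop body runs.
for item in (*gen("a"), *gen("b")):
    print("got", item)

# Lazy: each generator is consumed only when the loop reaches it,
# so "consuming" and "got" lines interleave.
for item in itertools.chain(gen("a"), gen("b")):
    print("got", item)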

  2. As far as I can tell, there is a lot of debate on whether to use filter or a normal list comprehension. For example:
even_nums = filter(lambda x: x % 2 == 0, nums)

# vs

even_nums = [x for x in nums if x % 2 == 0]

I have yet to speed test filter vs list comprehensions, but I know that list comprehensions are much faster than the naive Python solution:

even_nums = []

for x in nums:
    if x % 2 == 0:
        even_nums.append(x)
  3. What would the threshold be for an in operation to use a set? I feel like for 2 or 3 elements, using a set wouldn't be much of an improvement, but for 100 it might be worth it. You probably won't be able to get the size of the iterable unless it is a literal such as (1, 2, 3).
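
On the threshold question, a rough sketch of how a literal right-hand side could be measured - using the stdlib ast module purely for illustration (not Refurb's actual mypy-based machinery) and with a made-up cutoff:

import ast

SOURCE = "if color in ('red', 'green', 'blue', 'yellow'): ..."

for node in ast.walk(ast.parse(SOURCE)):
    if (
        isinstance(node, ast.Compare)
        and any(isinstance(op, ast.In) for op in node.ops)
        and isinstance(node.comparators[0], (ast.Tuple, ast.List))
    ):
        size = len(node.comparators[0].elts)
        if size >= 4:  # hypothetical threshold for suggesting a set
            print(f"line {node.lineno}: consider a set for this {size}-element lookup")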

I also planned on writing a check which would look for code similar to the all, any, and min/max functions, like so:

def all_truthy(it):
    for i in it:
        if not i:
            return False

    return True

The above code could be replaced with all. Is something like that what you would be looking for?
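
For reference, the replacement that check would suggest is just:

def all_truthy(it):
    return all(it)

The same idea applies to manual loops that reimplement any, min, or max.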

Goldziher commented 2 years ago

So lemme go point by point:

  1. For itertools - I don't think we should consider side effects. In terms of sheer performance this is a clear winner, and it's usually better code as well. I would consider chain, chain.from_iterable and groupby the best candidates, followed by the other members of the library.

  2. Python builtins have a bad name because of the rather ugly implementation of lambda. Still, they offer a significant performance boost in some situations. Check out, for example, how reduce is used in starlite.parsers: it offered a 30%+ improvement over a more standard Pythonic solution (a hedged sketch of one such pattern follows this list).

  3. List comprehensions are indeed way more performant usually.

  4. Sets do offer a vast improvement even for small collections. When the function at hand is called millions of times, it does make a difference. That's why when you check dict keys using in, Python actually treats the keys as a set.
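
A hedged sketch of one reduce-based pattern of the kind point 2 describes - merging a list of dicts - and not the actual starlite.parsers code:

from functools import reduce
from operator import or_

dicts = [{"a": 1}, {"b": 2}, {"a": 3}]

# Plain loop
merged = {}
for d in dicts:
    merged.update(d)

# reduce with an operator function (dict | dict requires Python 3.9+)
merged_via_reduce = reduce(or_, dicts, {})

assert merged == merged_via_reduce == {"a": 3, "b": 2}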

dosisod commented 2 years ago

I think that chain and chain.from_iterable are good candidates, though groupby might be a hard use case to detect. Unless there is a common pattern where you find groupby is a good replacement, I don't know how we will be able to reliably check for that.
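
One candidate pattern (a hedged sketch, not an existing Refurb check): a manual loop that groups consecutive rows by key maps directly onto itertools.groupby:

from itertools import groupby

rows = [("a", 1), ("a", 2), ("b", 3), ("b", 4)]

# Manual grouping of consecutive rows by their first element
groups = []
current_key, current = None, []
for key, value in rows:
    if key != current_key and current:
        groups.append((current_key, current))
        current = []
    current_key = key
    current.append(value)
if current:
    groups.append((current_key, current))

# Equivalent with groupby (it only groups consecutive equal keys,
# so unsorted input usually needs sorting first)
groups_gb = [
    (key, [value for _, value in grp])
    for key, grp in groupby(rows, key=lambda row: row[0])
]

assert groups == groups_gb == [("a", [1, 2]), ("b", [3, 4])]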

I agree that set is more performant, but some people will prefer readability over speed. I am planning on categorizing checks, so you can enable/disable all of the performance, readability, etc. checks very easily.

All in all I agree that we should add these checks, but I will have to do some brainstorming to see how I want to go about detecting and emitting errors for them.

Goldziher commented 2 years ago

cool.

BTW, I just tried perflint. It's completely useless at this point. Perhaps it could be useful if a lot more work was invested into it, but it's not doing very smart checks. I would say that everything it covers can be reimplemented - properly. Also, it seems that development there is completely frozen.

jamesbraza commented 1 year ago

FURB109 with refurb==1.17.0 suggests tuple for consistency (related: https://github.com/dosisod/refurb/issues/44). Bullet 3 of the OP here agrees with https://github.com/dosisod/refurb/issues/44 in that set is preferable over tuple for __contains__ checks with collections.

https://github.com/dosisod/refurb/issues/100 suggests configurability of FURB109's lookup data structure (opting for list or set).

I think "enforcing consistency" (FURB109) is sort of independent of the lookup data structure. Thus I think this can be resolved by just making a new rule (turned on by --enable-all) that has lookups be sets, and then FURB109 will enforce consistent use of the same data structure (set) across the codebase.

Avasam commented 1 year ago

Adding onto this: using itertools.chain to flatten a list is a lot more efficient than using a comprehension:

from itertools import chain
from timeit import timeit

big_nested_list = [["*"] * 9] * 9

def flatten_generator():
    return (
        item
        for flatten in big_nested_list
        for item in flatten
    )

def flatten_chain():
    return chain(*big_nested_list)

def flatten_chain_from_iterable():
    return chain.from_iterable(big_nested_list)

def flatten_list():
    return [
        item
        for flatten in big_nested_list
        for item in flatten
    ]

def flatten_chain_list():
    return list(chain(*big_nested_list))

def flatten_chain_from_iterable_list():
    return list(chain.from_iterable(big_nested_list))

print("flatten_generator          ", timeit(flatten_generator))
print("flatten_chain              ", timeit(flatten_chain))
print("flatten_chain_from_iterable", timeit(flatten_chain_from_iterable))
print("==================")
print("flatten_list                    ", timeit(flatten_list))
print("flatten_chain_list              ", timeit(flatten_chain_list))
print("flatten_chain_from_iterable_list", timeit(flatten_chain_from_iterable_list))

Python 3.9.13:

flatten_generator           0.2695134
flatten_chain               0.1918684
flatten_chain_from_iterable 0.16371810000000003
==================
flatten_list                     2.306337
flatten_chain_list               1.5762885000000004
flatten_chain_from_iterable_list 1.5713778000000005

Python 3.11.6:

flatten_generator           0.37237409997032955
flatten_chain               0.18880709999939427
flatten_chain_from_iterable 0.17884599999524653
==================
flatten_list                     2.7813537000329234
flatten_chain_list               1.6071303999633528
flatten_chain_from_iterable_list 1.6338188999798149

Interestingly, chain.from_iterable isn't always the fastest, and since the difference is tiny, other factors might better indicate a preference between it and unpacking.

Avasam commented 1 year ago

Also on the note of performance rules, contextlib.suppress is (supposedly) much slower than try ... except. This means that a performance rule could go against FURB107. On that same note, the documentation for FURB107 does not mention such a performance impact. I'm not sure how best to go about it, but it should probably be communicated better.

Sources:

My own test:

from contextlib import suppress
from timeit import timeit

def contextlib_suppress():
    with suppress(ZeroDivisionError):
        1 / 0

def try_except():
    try:
        1 / 0
    except:
        pass

print("contextlib_suppress", timeit(contextlib_suppress))
print("try_except", timeit(try_except))

Python 3.9.13:

contextlib_suppress 0.6523599
try_except 0.24042679999999994

Python 3.11.6:

contextlib_suppress 0.8827133000013418
try_except 0.27326279995031655
dosisod commented 1 year ago

Thank you @Avasam for bringing these up! Your second comment deserves its own issue/PR, since the docs should mention that it is slower than using try/except. I'll go ahead and do that now.

As for your first comment, chain(*x) and chain.from_iterable(x) do act differently under different circumstances: for instance, because chain(*x) has to expand potentially many arguments, it is not preferable for very long lists. Tweaking the sizes of big_nested_list changes the numbers in favor of chain.from_iterable:

big_nested_list = [["*"] * 99] * 99
$ python3 test.py
flatten_generator           0.48842259199591354
flatten_chain               0.5636901309990208
flatten_chain_from_iterable 0.18367693699838128
==================
flatten_list                     140.21291316200222
flatten_chain_list               75.60400127499452
flatten_chain_from_iterable_list 74.88393942899711

Currently using Python 3.11.5

The code in the flatten_generator/flatten_list method would be a good candidate for a Refurb check, so I will go ahead and implement that tomorrow. I've been stuck on how to detect which code should be replaced with chain and chain.from_iterable, so your comment helped in that regard!

Avasam commented 10 months ago

Here's another reason I found today to prefer chain.from_iterable(iterable) over chain(*iterable): I passed it a generator and it only kept the last element repeatedly, oops! (something about variable scope I assume, similar to lambdas in loops)