Open: gu1p opened this issue 3 years ago
Yes, when I said we get rid of the loop, I meant that the loop has been externalized at the C level (hopefully), and this is supposed to be faster because we do not have all the Python loop machinery. To have a fair comparison, we should use the exact same random generator in both cases:
```python
for i in range(1_000_000):
    Z = Z + random.choices([-1,1])[0]

Z = sum(random.choices([-1,1], k=1_000_000))
```
When I said "hidden in the machinery of Python" I meant vanilla Python code that comes from the standard library (not necessarily implemented in C). Most of the difference in execution speed (7 times faster vs 1.6 times faster) between the two functions is just because we're using two different ways of computing the next random step: `2*random.randint(0, 1)-1` vs `[-1, +1][math.floor(random.random() * 2)]`. The latter is basically what is hidden inside `random.choices` (which is a vanilla Python function) and is what `random_walk_faster` uses. So we are making an unfair comparison, and my whole point is that we should use the same underlying function to compute the next random step in both functions in order to make a fair comparison. `random_walk_faster` will still be faster, but not 7 times faster.
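To see how much of the gap comes from the step expression alone, here is a small sketch of mine (not from the book or this thread) that times the two expressions in isolation; the names and the 1,000,000 count are arbitrary choices, and the exact numbers will vary by machine:

```python
import math
import random
import timeit

# The step expression used in random_walk ...
step_randint = "2 * random.randint(0, 1) - 1"
# ... and the one effectively hidden inside random.choices / random_walk_faster
step_floor = "[-1, +1][math.floor(random.random() * 2)]"

env = {"random": random, "math": math}
for label, stmt in [("randint", step_randint), ("floor", step_floor)]:
    seconds = timeit.timeit(stmt, globals=env, number=1_000_000)
    print(f"{label:>8}: {seconds:.3f} s for 1,000,000 steps")
```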
Sorry if all this looks like nitpicking. The book is great!
Thanks, and no problem with the nitpicking. When comparing the exact same method with and without the loop, I still find the 7x factor:
```
In [12]: %timeit for i in range(1_000_000): random.choices([-1,1])
1.01 s ± 15.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [13]: %timeit random.choices([-1,1], k=1_000_000)
146 ms ± 1.45 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
If you look at how `random.choices` works, you will notice that what it is actually doing is approximately the following:
```
from math import floor
population = [-1, 1]

In [39]: %timeit for i in range(1_000_000): [population[floor(random.random() * len(population))] for i in range(1)]
535 ms ± 24.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [40]: %timeit for i in range(1_000_000): population[floor(random.random() * 2)]
173 ms ± 3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
That's obviously faster. But C doesn't play much of a role here: it's just Python doing more (unnecessary) stuff in one case versus Python doing less stuff in the other. But OK, I think at this point I have made myself clear enough.
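For reference, the unweighted fast path of `random.choices` boils down to plain Python along these lines (a paraphrase from memory of `Lib/random.py`, not the actual source, and details vary between CPython versions):

```python
from math import floor
from random import random  # only this core generator is implemented in C

def choices_sketch(population, k=1):
    # Roughly what random.choices does when no weights are given:
    # a Python-level list comprehension around the C-level random().
    n = len(population)
    return [population[floor(random() * n)] for _ in range(k)]
```

This is consistent with the timings above: the gain comes from doing less Python work per step, not from moving the loop to C.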
I was reading the Introduction of the book and I found myself trying to understand why `random_walk_faster` is ~7 times faster than `random_walk`. That's a huge difference! Those are the functions:
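(A reconstructed sketch, pieced together from the expressions quoted later in this thread; the exact code in the book's Introduction may differ in details such as the default `n`.)

```python
import random
from itertools import accumulate

def random_walk(n=10_000):
    # One step at a time, accumulated in a plain Python loop
    position = 0
    walk = [position]
    for _ in range(n):
        step = 2 * random.randint(0, 1) - 1
        position += step
        walk.append(position)
    return walk

def random_walk_faster(n=10_000):
    # All steps drawn at once with random.choices, then accumulated
    steps = random.choices([-1, +1], k=n)
    return [0] + list(accumulate(steps))
```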
Running some tests, I figured out that most of the speed difference is related to the way we compute the next random step, not to the fact that we are using a "vectorized approach" instead of a procedural one, as stated in the book's explanation. That explanation is also not accurate: we are just replacing an explicit loop with other loops hidden in the machinery of Python (which can, of course, rely on faster functions implemented in C).
Looking at the implementation of `random.choices`, the trick it uses to gain speed is something like `population[floor(random.random() * len(population))]` to compute each next step. Applying this in `random_walk` (see the sketch below), the difference in speed for 10k steps drops to ~1.6x.
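A sketch of that modified version (the function name and body are my reconstruction, applying the `random.choices` trick inside the loop of `random_walk`):

```python
import random
from math import floor

def random_walk_same_step(n=10_000):
    # Same loop as random_walk, but with the step computed the way
    # random.choices computes it internally
    population = [-1, +1]
    position = 0
    walk = [position]
    for _ in range(n):
        position += population[floor(random.random() * len(population))]
        walk.append(position)
    return walk
```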