There's no reason this should take 10 seconds:
```
i7 : elapsedTime for i in 0..10000000 do ();
 -- 11.1732 seconds elapsed
```
Compare with Python, using iterators:
```
$ time python -c 'for i in range(10000000): pass'
real 0m0.359s
user 0m0.347s
sys 0m0.010s
```
The problem is partially that `0..N` constructs a list very inefficiently, but also basic loops are unreasonably slow:
```
i8 : elapsedTime for i to 10000000 do ();
 -- 4.34505 seconds elapsed
```
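To see where the time goes, one can time the pieces separately; the following are just expressions to try, not measured results, and timings will differ by machine:

```
-- time materializing the sequence alone, apart from the loop
elapsedTime (0..10000000);
-- loop whose body is dispatched through a closure on each step
elapsedTime scan(10000000, i -> ());
-- bare while loop with an explicit counter
elapsedTime (i = 0; while i < 10000000 do (i = i+1));
```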
Perhaps the difference is that Python uses 32-bit integers for the values of `i`, whereas we use big integers.
Nope:
```
$ time python -c 'for i in range(2**1024, 2**1024+10000000): pass'
real 0m1.175s
user 0m1.164s
sys 0m0.008s
```
A cool feature of more developed languages like Julia or Python is that they have iterator generators. It would be a very cool feature if M2 supported something like the `yield` keyword in Python, so you could have something like `for p in nextPrime N`. This would require the interpreter to make a new call to that function on every cycle to get the next number.

Other than the memory advantage, another benefit of `yield` is a time speedup: in the `nextPrime` example, if a loop over primes ends at `17`, there's no need to pre-compute all primes under `N`. Therefore, async generators are a good milestone towards async functions.
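Until something like `yield` exists in M2, a top-level closure can stand in for a generator. The following is only a rough sketch with made-up names (`primeIterator` is not an existing function), but it illustrates the kind of call the interpreter would make on every cycle:

```
-- hypothetical sketch: a stateful closure that hands out one prime at a time,
-- so a loop can stop early without precomputing all primes below N
primeIterator = N -> (
    p := 1;
    () -> (p = nextPrime(p+1); if p < N then p else null))

-- usage: request primes one by one and stop as soon as we reach 17
iter = primeIterator 100;
while (p = iter()) =!= null and p < 17 do print p
```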
Another related problem that would be solved with iterators is that `accumulate` and `fold` use `reverse` and `drop`, which unnecessarily copy the data, hence an optimized copy-free implementation of `accumulate` and `fold` would be more efficient.
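As a rough illustration of the copy-free idea (hypothetical primed names, not the actual library code; the real gain would come from doing this inside the interpreter rather than at top level):

```
-- hypothetical copy-free accumulate: single pass, no reverse, no drop
accumulate' = (f, x0, L) -> (
    acc := x0;
    for x in L list (acc = f(acc, x)))

-- hypothetical copy-free fold: same idea, keeping only the final value
fold' = (f, x0, L) -> (
    acc := x0;
    scan(L, x -> (acc = f(acc, x)));
    acc)
```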
Originally posted by @mahrud in https://github.com/Macaulay2/M2/issues/1897#issuecomment-769958455