evhub / coconut

Simple, elegant, Pythonic functional programming.
http://coconut-lang.org
Apache License 2.0

Coconut compiler constant overhead is 170 times the overhead of the Python interpreter #764

Closed koraa closed 1 year ago

koraa commented 1 year ago

Thank you for writing coconut. I really like the language, but being productive is hard since the compiler is so slow.

$ time python -c "def x(a): return a"
________________________________________________________
Executed in   61.30 millis    fish           external
   usr time   94.30 millis   45.31 millis   48.99 millis
   sys time   24.50 millis   14.71 millis    9.78 millis
$ time coconut -c "def x(a): return a"
________________________________________________________
Executed in   10.45 secs    fish           external
   usr time   10.22 secs    0.00 millis   10.22 secs
   sys time    0.11 secs    1.58 millis    0.11 secs
$ python -c "$(echo "import time; P = time.perf_counter; a = P(); def f(a): return a; print(P() - a)" | sed 's@; *@\n@g')"
2.8030008252244443e-06
$ coconut -c "$(echo "import time; P = time.perf_counter; a = P(); def f(a): return a; print(P() - a)" | sed 's@; *@\n@g')"
2.4760011001490057e-06
evhub commented 1 year ago

Note that the vast majority of what you're seeing there is constant overhead:

❯ time coconut -c "def x(a): return a"
coconut -c "def x(a): return a"  1.91s user 2.07s system 244% cpu 1.626 total
❯ time coconut -c """def x(a): return a
def y(b): return b"""
coconut -c """def x(a): return a def y(b): return b"""  2.19s user 1.82s system 240% cpu 1.666 total

Compiling a large file is going to take a bit anyway, so an extra 1-2 seconds of constant overhead won't make a huge difference there, but it will dominate microbenchmarks like these.
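One way to separate constant startup overhead from per-line compilation cost is to time inputs of several sizes and fit a line: the intercept approximates the startup cost, the slope the marginal cost per definition. A minimal sketch (the `coconut -c` invocation in the demo is illustrative and assumes Coconut is on your PATH):

```python
import subprocess
import time

def time_cmd(argv):
    """Wall-clock time of one subprocess invocation, in seconds."""
    start = time.perf_counter()
    subprocess.run(argv, check=True, capture_output=True)
    return time.perf_counter() - start

def fit_line(xs, ys):
    """Ordinary least squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

if __name__ == "__main__":
    # Time compiling sources with 1, 2, 4, and 8 function definitions;
    # the intercept estimates the constant startup overhead.
    sizes = [1, 2, 4, 8]
    times = [time_cmd(["coconut", "-c",
                       "\n".join(f"def f{i}(a): return a" for i in range(n))])
             for n in sizes]
    per_def, startup = fit_line(sizes, times)
    print(f"~{startup:.2f}s constant overhead, "
          f"~{per_def * 1000:.1f}ms per definition")
```

If the fitted intercept is close to the total time for the one-definition input, the cost is dominated by startup rather than by parsing.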

What size of compilation jobs are you generally doing? Is it actually the constant overhead you're showcasing here that you're struggling with?

koraa commented 1 year ago

> What size of compilation jobs are you generally doing? Is it actually the constant overhead you're showcasing here that you're struggling with?

I am working with tiny files; the examples above are representative of my work because my workflow involves frequently testing snippets of code in the REPL. I also keep files with snippets I would like to test; they are not one-liners, but they are still very short.

The constant overhead completely breaks my fast-paced development cycle.

The 5x difference between our benchmarks likely comes from the fact that I work on a ten-year-old ThinkPad; I suppose on your machine the extra overhead does not feel quite as bad.

Note that the difference between the Python code and the Coconut code in the benchmark above is a factor of about 170, which in any case is very bad.
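For a REPL-style workflow like this, the per-invocation startup cost can be amortized by compiling snippets from a single long-lived process instead of spawning `coconut -c` each time. A sketch, assuming the programmatic entry point documented as `coconut.api.parse` in recent Coconut releases (`coconut.convenience.parse` in older ones):

```python
def compile_snippets(snippets, parse):
    """Compile each snippet with an already-imported parse function,
    so the compiler's startup cost is paid only once per process."""
    return [parse(snippet) for snippet in snippets]

if __name__ == "__main__":
    # Assumption: coconut.api.parse (Coconut >= 3) or
    # coconut.convenience.parse (older releases) is available;
    # check your installed version's documentation.
    try:
        from coconut.api import parse
    except ImportError:
        from coconut.convenience import parse
    for compiled in compile_snippets(["def f(x) = x", "def g(y) = y"], parse):
        print(compiled[:60])
```

The first `parse` call still pays the warm-up cost, but subsequent snippets compile without re-importing the compiler.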

evhub commented 1 year ago

It looks like there was an issue where parsing functions specifically would introduce substantially greater constant overhead. That issue should now be fixed as of coconut-develop>=3.0.2-post_dev8 (run pip uninstall coconut followed by pip install 'coconut-develop>=3.0.2-post_dev8' to get the fix). In my testing, the fix reduces the time for the microbenchmark by about 4x:

~ @ timeit "coconut -c 'def f(x) = x'"  # pre-fix
1 loops, best of 3: 2.62 s per loop
~ @ timeit "coconut -c 'def f(x) = x'"  # post-fix
1 loops, best of 3: 583 ms per loop
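The `timeit` shell helper used above can be approximated in plain Python for anyone who wants to reproduce the before/after comparison; a minimal sketch (the `coconut -c 'def f(x) = x'` command in the demo is illustrative):

```python
import subprocess
import timeit

def bench(argv, repeat=3, number=1):
    """Best wall-clock time in seconds per run of a command,
    taken over `repeat` rounds of `number` invocations each."""
    run = lambda: subprocess.run(argv, check=True, capture_output=True)
    return min(timeit.repeat(run, repeat=repeat, number=number)) / number

if __name__ == "__main__":
    print(f"{bench(['coconut', '-c', 'def f(x) = x']):.3f}s per loop")
```

Taking the minimum over several rounds, as `timeit` does, filters out scheduler noise and gives the most stable estimate of the constant overhead.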