Naereen opened this issue 3 years ago
Hi Lilian. Thanks for creating the issue and pointing this out. I just timed it myself and got close to the same result as you (1 minute, 19 seconds). I think we got mixed up and included the wrong number in the documentation. If you increase the larger number to 800052312523, it takes more than 10 minutes to run (636s). It's a contrived example in any case, so I admit it's not the most useful or interesting.
The Coral code takes 0m0.169s to run, while the C program (on my system) actually takes 0m11.054s, which is sort of crazy. Coral comes out faster than C, but that's probably due to overhead in the printf statement.
Here's a C program that does the same thing (uploaded as .txt so GitHub lets me attach it): gcc.txt
Hi @jacobaustin123, thanks for the kind reply.
I'd suggest writing a short benchmark.md document including the full C source code, the command-line arguments used to compile it (gcc -O0, -O1, -O2, or -O3 give remarkably different results on simple programs), and the times for Python 3/PyPy vs Coral vs C.
If you suspect that the overhead in C comes from printf, maybe write a fairer C version that uses a simpler print function instead of printf? Or skip printing entirely and show that time comparison as well?
I'm not saying you should try to be as detailed as the Julia language's benchmarks, but it definitely won't be hard to be more detailed than the current vague claim in the introduction of Coral's README.md. For scientific rigour, it would really help to add a little more detail.
(But I'm not sure how mature this project is; I remember the README mentioning a student project from back in 2018. If Coral is no longer actively developed and maintained, don't mind my remarks.)
Regards, @Naereen
Also, for this benchmark I could write a Cython version of the GCD program (see cython.org), and a Numba version too.
This benchmark could also be written in a Jupyter notebook, since it's easy to write code cells in Python 3 as well as in other languages. For example, I wrote these notebooks for various tiny projects a few years ago:
@jacobaustin123 any feedback?
Hi @Naereen, thank you for following up, and I apologize for the delay. I started a new job recently, which has kept me busy, and Coral hasn't been under active development. I've added a small benchmark.md file containing results for the different optimization levels. I tried to follow your examples closely, although I couldn't use a Jupyter notebook.
Let me know your thoughts. The comparison to Python seems fair and unsurprising. For -O0 and -O1, Coral dramatically outperforms C, but I'm not convinced that comparison is fair: it seems likely that the Coral-generated LLVM IR is already optimized, so that even -O0 compiles away most of the computation. I included the -O0 assembly, but I'm not familiar enough with x86 to say for sure.
Thanks for holding me to this.
Hi @jacobaustin123, don't worry about the delay; it's good to read that you're interested and willing to provide additional details on this. The last time I opened an issue like this, I got very unpleasant replies...
I'll have a look, thanks!
https://github.com/jacobaustin123/Coral/blob/master/benchmark.md
Hi,
I tested your first "gcd" example and got 1 minute instead of 10 minutes with Python 3.6. With PyPy3, I got about one second.
So my issue is simply to ask: on what machine and with what version of Python did you obtain the "10 minutes to run" you mention? I would prefer a more accurate claim, since 1 minute seems closer to correct. (Writing a C version and checking the "it takes less than a second - nearly as fast as C code" claim could also help.) Thanks in advance!