exaloop / codon

A high-performance, zero-overhead, extensible Python compiler using LLVM
https://docs.exaloop.io/codon

Codon slower than PyPy, can't find out why #553

Open Tenchi2xh opened 2 months ago

Tenchi2xh commented 2 months ago

Hi, love the project!

I recently started implementing a ray tracer as an exercise to try out Codon. After a while, I was curious to make the code also work with vanilla Python and PyPy, and then found out that my renders are about twice as fast under PyPy as under Codon.

(screenshot: render timings, PyPy roughly twice as fast as Codon)

After trying a few optimizations to no avail, I decided to try and profile the execution of the Codon-made binary:

(screenshot: flame graph of the Codon binary)

It appears that more than half of the time is spent in some internal gc.alloc_atomic, and there also seems to be thread startup going on? (I have zero @par in the whole codebase.)

I also noticed that when running under time, the user time is often about twice the real time, so some thread is doing work in the background. And the real time itself is still about twice PyPy's.
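To check whether that extra user time comes from idle runtime threads rather than the render itself, I could time the render loop in-process and compare against the time command. Just a sketch (render_scene here is a stand-in for the actual entry point in rtow/__main__.py):

```python
import time

def render_scene():
    # stand-in for the real render loop in rtow/__main__.py
    s = 0.0
    for i in range(10_000_000):
        s += i * 0.5
    return s

start = time.time()
render_scene()
print("in-process render time (s):", time.time() - start)
# If this matches `time`'s real time while user time stays ~2x higher,
# the extra CPU comes from background runtime threads, not the render.
```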

My suspicion is that creating a lot of Vec3 instances all the time is somehow bogging down the GC. Maybe I have a basic misconception of how to use Codon?
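One thing I'm considering trying (just a sketch, not what's currently in the repo, and assuming Vec3 can be made immutable) is Codon's @tuple decorator, which turns a class into a named tuple passed by value, so vector temporaries shouldn't need to go through gc.alloc_atomic at all:

```python
# Sketch: Vec3 as a Codon value type. @tuple makes the class an immutable
# named tuple passed by value, so the arithmetic below should not allocate
# anything through the GC. (@tuple is Codon-specific, not vanilla Python.)
@tuple
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, other: Vec3) -> Vec3:
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def __mul__(self, t: float) -> Vec3:
        return Vec3(self.x * t, self.y * t, self.z * t)

v = Vec3(1.0, 2.0, 3.0) + Vec3(4.0, 5.0, 6.0) * 0.5
print(v.x, v.y, v.z)
```

(If Vec3 needs to mutate its fields in place somewhere, this wouldn't work as-is.)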

Here is an interactive version of the flame graph (unzip and open the SVG in a browser), and the code is available here: https://github.com/Tenchi2xh/RTOW-Codon (check out commit 379d5d0; the master branch now has other kinds of optimizations). The main entry point is rtow/__main__.py, but it's easier to run it from the run.sh script (a preprocessor has to strip the Python-specific bits first). To make it run faster, just reduce samples_per_pixel and max_depth on lines 52-53 (it runs even slower under the profiler).

(Sorry to link to a whole repo; it's not a big codebase, but it's big enough that producing a minimal reproducible example for a GitHub issue is hard.)

I am using the latest dev build of Codon, downloaded from CI.

Tenchi2xh commented 2 months ago

Update: after implementing an algorithm that reduces the number of lookups, the scales tipped the other way:

The optimization alone makes Codon 35x faster, taking the same render from 11 minutes down to a mere 19 seconds.

The flame graph still looks the same with or without the optimization, so maybe something else is at play (or maybe using dtrace interferes with Codon?).