mathics / Mathics

This repository is for archival. Please see https://github.com/Mathics3/mathics-core
https://mathics.org

LLVM support #1594

Open gtusr opened 1 year ago

gtusr commented 1 year ago

Are there any plans to introduce support for compiling Mathics code down to LLVM?

If so, this would allow Mathics to leverage work that is taking place to bring automatic differentiation to the platform:

Automatic Differentiation for LLVM-hosted languages

TiagoCavalcante commented 1 year ago

The link is broken.

gtusr commented 1 year ago

Link is now updated.

Adding a link to a native Wolfram Language AD package that may also work in Mathics:

Dual Numbers Package for Automatic Differentiation in Mathematica

rocky commented 1 year ago

Thanks for the link, information and idea.

In the short term, though, I don't think this can be done in the code base as it is. To a first approximation, think of Mathics as just a front-end to SymPy, SciKit, PIL, mpmath, and so on. We would benefit most if this were done inside one of the packages we depend on, rather than one level removed from them.

Let me elaborate a little on this. We do have the ability to compile down to LLVM via Cython and compiled functions. But at the LLVM level, Mathics variables are seen as untyped; I expect they would appear as pointers to generic objects, and even reading a variable's value would itself be the result of a function call.

Right now, the Mathics interpreter is a tree interpreter of M-expressions. For this to have any conceivable benefit, we would somehow have to turn a Mathics expression into an equivalent Python expression with types included; that could then be compiled down to LLVM, and only at that point would this work see enough structure to do something useful with it.
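
To make that point concrete, here is a hypothetical sketch of the kind of lowering that would be needed. It is not Mathics code: the nested-tuple representation, the OPS table, and the lower function are illustrative only, and every leaf is simply assumed to be a number.

```python
# Hypothetical M-expression: a head plus arguments, leaves are symbols or numbers.
# Plus[x, Times[2, y]]  ->  ("Plus", "x", ("Times", 2, "y"))

OPS = {"Plus": "+", "Times": "*", "Power": "**"}

def lower(expr):
    """Lower an M-expression tuple to a Python source fragment (numeric leaves assumed)."""
    if isinstance(expr, tuple):
        head, *args = expr
        return "(" + OPS[head].join(lower(a) for a in args) + ")"
    return str(expr)

source = lower(("Plus", "x", ("Times", 2, "y")))
print(source)            # (x+(2*y))
f = eval("lambda x, y: " + source)
print(f(1.0, 3.0))       # 7.0
```

Only after a step like this, with concrete types attached, would an LLVM backend have anything to optimize.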

The Dual Numbers Package that you cite relies on the Automatic Differentiation already built into Mathematica: it reimplements numbers in a way that lets Mathematica's engine apply its existing Automatic Differentiation to them.
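
For readers unfamiliar with the technique, here is a minimal Python sketch of forward-mode automatic differentiation with dual numbers; the Dual class below is purely illustrative and is not taken from the package linked above.

```python
class Dual:
    """A dual number a + b*eps with eps^2 == 0; the b part carries the derivative."""

    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u*v)' = u'*v + u*v'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f at x with a unit dual part to read off f'(x)."""
    return f(Dual(x, 1.0)).deriv


# d/dx (x*x + 3*x) at x = 2.0 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))
```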

mmatera commented 1 year ago

Actually, we have an llvmlite compiler with limited support for arithmetic functions as well as some loop and control constructs. Look at the mathics.builtin.compile module.
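
As a rough illustration of exercising that path from Python (hedged: the session helper's module path and the exact set of expressions Compile can lower vary between Mathics versions, so treat this as a sketch):

```python
# Hedged sketch: MathicsSession lives in mathics.session in recent releases,
# but the location and API may differ in this (archived) code base.
from mathics.session import MathicsSession

session = MathicsSession()

# Compile an arithmetic function of one argument; when the expression is
# simple enough, mathics.builtin.compile lowers it through llvmlite.
session.evaluate("cf = Compile[{x}, x^2 + Sin[x]]")
print(session.evaluate("cf[1.5]"))
```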

rocky commented 1 year ago

> Actually, we have an llvmlite compiler with limited support for arithmetic functions as well as some loop and control constructs. Look at the mathics.builtin.compile module.

I don't think that will be very useful, for exactly the reasons cited above: it is wishful thinking that a compiler is going to untangle levels of abstraction, some of which are the root cause of the inefficiency in the first place.

But prove me wrong!

mmatera commented 1 year ago

The inefficient part is the standard evaluation. If you then apply Compile to the evaluated expression, and the expression is not too complicated, you get llvmlite-compiled code that you can take and use elsewhere. So the inefficient evaluation process translates into an inefficient compiler, but the resulting object code should be fine.
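
Independent of Mathics, here is a minimal llvmlite sketch of producing and reusing such compiled code (standard MCJIT usage, not code taken from mathics.builtin.compile): the expression is built as IR, JIT-compiled once, and then called as a plain native function.

```python
import ctypes
from llvmlite import binding, ir

binding.initialize()
binding.initialize_native_target()
binding.initialize_native_asmprinter()

# Build IR for: double f(double x) { return x*x + 1.0; }
dbl = ir.DoubleType()
module = ir.Module(name="demo")
module.triple = binding.get_default_triple()
func = ir.Function(module, ir.FunctionType(dbl, [dbl]), name="f")
builder = ir.IRBuilder(func.append_basic_block("entry"))
x = func.args[0]
builder.ret(builder.fadd(builder.fmul(x, x), ir.Constant(dbl, 1.0)))

# JIT-compile the textual IR and call the result through ctypes.
target_machine = binding.Target.from_default_triple().create_target_machine()
engine = binding.create_mcjit_compiler(binding.parse_assembly(str(module)),
                                       target_machine)
engine.finalize_object()
f = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)(
    engine.get_function_address("f"))
print(f(3.0))  # 3*3 + 1 = 10.0
```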

mmatera commented 1 year ago

This could be useful: https://eli.thegreenplace.net/2015/calling-back-into-python-from-llvmlite-jited-code/