:Author: David M. Cooke, Francesc Alted, and others.
:Maintainer: Francesc Alted
:Contact: faltet@gmail.com
:URL: https://github.com/pydata/numexpr
:Documentation: http://numexpr.readthedocs.io/en/latest/
:GitHub Actions: |actions|
:PyPi: |version|
:DOI: |doi|
:readthedocs: |docs|
.. |actions| image:: https://github.com/pydata/numexpr/workflows/Build/badge.svg
   :target: https://github.com/pydata/numexpr/actions

.. |travis| image:: https://travis-ci.org/pydata/numexpr.png?branch=master
   :target: https://travis-ci.org/pydata/numexpr

.. |docs| image:: https://readthedocs.org/projects/numexpr/badge/?version=latest
   :target: http://numexpr.readthedocs.io/en/latest

.. |doi| image:: https://zenodo.org/badge/doi/10.5281/zenodo.2483274.svg
   :target: https://doi.org/10.5281/zenodo.2483274

.. |version| image:: https://img.shields.io/pypi/v/numexpr
   :target: https://pypi.python.org/pypi/numexpr
NumExpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like ``3*a+4*b``) are accelerated
and use less memory than doing the same calculation in Python.
In addition, its multi-threaded capabilities can make use of all your cores -- which generally results in substantial performance scaling compared to NumPy.
Last but not least, numexpr can make use of Intel's VML (Vector Math Library, normally integrated in its Math Kernel Library, or MKL). This allows further acceleration of transcendental expressions.
The main reason why NumExpr achieves better performance than NumPy is that it avoids allocating memory for intermediate results. This results in better cache utilization and reduces memory access in general. Due to this, NumExpr works best with large arrays.
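As a rough illustration of the point about temporaries (a sketch for this README, not one of the bundled examples), the NumPy version of the expression below allocates a full-size temporary for each sub-expression, whereas ``numexpr.evaluate`` works chunk by chunk and can even reuse a preallocated output array through its ``out`` argument::

    import numpy as np
    import numexpr as ne

    a = np.random.rand(10_000_000)
    b = np.random.rand(10_000_000)

    # NumPy: 2*a, 3*b and their sum each allocate a temporary array
    r_np = 2*a + 3*b

    # NumExpr: the whole expression is evaluated in cache-sized chunks,
    # and the result is written into an existing buffer
    r_ne = np.empty_like(a)
    ne.evaluate("2*a + 3*b", out=r_ne)

    assert np.allclose(r_np, r_ne)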
NumExpr parses expressions into its own op-codes that are then used by an integrated computing virtual machine. The array operands are split into small chunks that easily fit in the cache of the CPU and passed to the virtual machine. The virtual machine then applies the operations on each chunk. It's worth noting that all temporaries and constants in the expression are also chunked. Chunks are distributed among the available cores of the CPU, resulting in highly parallelized code execution.
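The size of the thread pool can be adjusted at runtime. A minimal sketch using NumExpr's public helpers (the default number of threads depends on your installation and environment variables)::

    import numexpr as ne

    ncores = ne.detect_number_of_cores()   # cores detected on this machine
    prev = ne.set_num_threads(ncores)      # returns the previous setting
    print(f"now using {ncores} threads (was {prev})")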
The result is that NumExpr can make the most of your machine's computing
capabilities for array-wise computations. Common speed-ups with respect
to NumPy are usually between 0.95x (for very simple expressions like
``a + 1``) and 4x (for relatively complex ones like ``a*b-4.1*a > 2.5*b``),
although much higher speed-ups can be achieved for some functions and complex
math operations (up to 15x in some cases).
NumExpr performs best on matrices that are too large to fit in L1 CPU cache. In order to get a better idea on the different speed-ups that can be achieved on your platform, run the provided benchmarks.
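For a quick first estimate on your own hardware (before running the full benchmark suite), a simple ``timeit`` comparison such as the following sketch is usually enough::

    import timeit
    import numpy as np
    import numexpr as ne

    a = np.random.rand(10_000_000)
    b = np.random.rand(10_000_000)

    t_np = timeit.timeit(lambda: 2*a + 3*b - 4.1*a > 2.5*b, number=10)
    t_ne = timeit.timeit(lambda: ne.evaluate("2*a + 3*b - 4.1*a > 2.5*b"),
                         number=10)
    print(f"NumPy:   {t_np:.3f} s")
    print(f"NumExpr: {t_ne:.3f} s ({t_np/t_ne:.1f}x)")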
From wheels
^^^^^^^^^^^
NumExpr is available for install via ``pip`` for a wide range of platforms and
Python versions (which may be browsed at: https://pypi.org/project/numexpr/#files).

Installation can be performed as::

    pip install numexpr
If you are using the Anaconda or Miniconda distribution of Python you may prefer
to use the ``conda`` package manager in this case::

    conda install numexpr
From Source
^^^^^^^^^^^
On most *nix systems your compilers will already be present. However if you
are using a virtual environment with a substantially newer version of Python than
your system Python you may be prompted to install a new version of ``gcc`` or
``clang``.
For Windows, you will need to install the Microsoft Visual C++ Build Tools (which are free) first. The version depends on which version of Python you have installed:
https://wiki.python.org/moin/WindowsCompilers
For Python 3.6+ simply installing the latest version of MSVC build tools should
be sufficient. Note that wheels found via ``pip`` do not include MKL support.
Wheels available via ``conda`` will have MKL, if the MKL backend is used for NumPy.

See ``requirements.txt`` for the required version of NumPy.
NumExpr is built in the standard Python way::

    python setup.py build install
You can test ``numexpr`` with::

    python -c "import numexpr; numexpr.test()"
Do not test NumExpr in the source directory or you will generate import errors.
Enable Intel® MKL support
^^^^^^^^^^^^^^^^^^^^^^^^^
NumExpr includes support for Intel's MKL library. This may provide better performance on Intel architectures, mainly when evaluating transcendental functions (trigonometric, exponential, ...).
If you have Intel's MKL, copy the ``site.cfg.example`` that comes with the
distribution to ``site.cfg`` and edit the latter file to provide correct paths to
the MKL libraries in your system. After doing this, you can proceed with the
usual building instructions listed above.
Pay attention to the messages during the building process in order to know
whether MKL has been detected or not. Finally, you can check the speed-ups on
your machine by running the ``bench/vml_timing.py`` script (you can play with
different parameters to the ``set_vml_accuracy_mode()`` and ``set_vml_num_threads()``
functions in the script so as to see how it would affect performance).
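For reference, the same VML-related knobs can also be used interactively. A minimal sketch (these calls only have an effect when NumExpr was built against MKL/VML; ``get_vml_version()`` returns ``None`` otherwise)::

    import numpy as np
    import numexpr as ne

    print("VML version:", ne.get_vml_version())   # None without MKL/VML

    if ne.get_vml_version() is not None:
        ne.set_vml_accuracy_mode('fast')   # 'high', 'low' or 'fast'
        ne.set_vml_num_threads(1)          # let NumExpr handle the threading itself

    x = np.linspace(0, 10, 1_000_000)
    y = ne.evaluate("sin(x) + cos(x)")     # transcendental functions benefit most from VML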
::

    >>> import numpy as np
    >>> import numexpr as ne

    >>> a = np.arange(1e6)   # Choose large arrays for better speedups
    >>> b = np.arange(1e6)

    >>> ne.evaluate("a + 1")   # a simple expression
    array([  1.00000000e+00,   2.00000000e+00,   3.00000000e+00, ...,
             9.99998000e+05,   9.99999000e+05,   1.00000000e+06])

    >>> ne.evaluate("a*b-4.1*a > 2.5*b")   # a more complex one
    array([False, False, False, ...,  True,  True,  True], dtype=bool)

    >>> ne.evaluate("sin(a) + arcsinh(a/b)")   # you can also use functions
    array([        NaN,  1.72284457,  1.79067101, ...,  1.09567006,
            0.17523598, -0.09597844])

    >>> s = np.array([b'abba', b'abbb', b'abbcdef'])
    >>> ne.evaluate("b'abba' == s")   # string arrays are supported too
    array([ True, False, False], dtype=bool)
Please see the official documentation at `numexpr.readthedocs.io <https://numexpr.readthedocs.io>`_.
Included is a user guide, benchmark results, and the reference API.
Please see `AUTHORS.txt <https://github.com/pydata/numexpr/blob/master/AUTHORS.txt>`_.
NumExpr is distributed under the `MIT <http://www.opensource.org/licenses/mit-license.php>`_ license.
.. Local Variables:
.. mode: text
.. coding: utf-8
.. fill-column: 70
.. End: