Plumbed in the lookup table from #123 and parametrized the tests to compare performance.
TLDR: minimal effect on runtime on CPU/IPUModel, useful improvement in compile time.
Update: the above was measured on one machine/JAX combination. Others show less of an effect, sometimes inverted. We may prefer to leave this on a branch as an inconclusive experiment.
The improved compile time is probably because the jaxpr is more compact in the lookup case. We might hope for good runtime on IPU too, since the problem reduces to a simple matrix-vector multiply; but at 2*LMAX=8 the dense LUT takes 128K of memory, so it may need to be sparsely encoded, at which point a simple C++ vertex implementation may be the better option.
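For concreteness, a minimal sketch of the idea, not the code in this PR; the names (`LUT`, `eval_via_lut`) and dimensions are placeholders:

```python
import jax
import jax.numpy as jnp

LMAX = 4                 # 2*LMAX = 8, as in the figure quoted above
DIM = (LMAX + 1) ** 2    # placeholder dimension; not the real table size

# Placeholder dense lookup table; a float32 table of roughly this kind is
# what drives the 128K memory figure above.
LUT = jnp.zeros((DIM, DIM), dtype=jnp.float32)

@jax.jit
def eval_via_lut(coeffs):
    # The whole computation collapses to a single matvec, which is why the
    # traced jaxpr is much more compact than in the direct case.
    return LUT @ coeffs

# If memory is the constraint, the same matvec could in principle use a
# sparse encoding, e.g.:
#   from jax.experimental import sparse
#   LUT_sparse = sparse.BCOO.fromdense(LUT)
#   y = LUT_sparse @ coeffs
```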
The above results were generated with instrumented tests, but that instrumentation is removed for check-in; we don't want to run it all the time. The instrumented branch is at https://github.com/graphcore-research/pyscf-ipu/blob/777e09651ebe0d2d5b152b0c450886da521fa407/test/test_integrals.py#L99
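For reference, the removed instrumentation was roughly a parametrized timing comparison; a minimal sketch, where the stand-in functions below are illustrative and not the actual kernels in test_integrals.py:

```python
import time
import jax
import jax.numpy as jnp
import pytest

@pytest.mark.parametrize("use_lookup", [False, True], ids=["direct", "lookup"])
def test_compile_time(use_lookup):
    # Stand-ins for the two code paths; the real tests exercise the
    # integral kernels on the branch linked above.
    lut = jnp.eye(8, dtype=jnp.float32)

    def direct(x):
        return jnp.sin(x) * jnp.cos(x)

    def lookup(x):
        return lut @ x

    fn = lookup if use_lookup else direct
    x = jnp.ones(8, dtype=jnp.float32)

    t0 = time.perf_counter()
    jax.jit(fn).lower(x).compile()  # AOT compile, so timing excludes execution
    print("compile time:", time.perf_counter() - t0)
```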