kburns opened 2 years ago
I've added some work in the d3_precision branch to support double-double types via xprec in the Jacobi quadrature and polynomial construction. This just adds precision, but not range, so it doesn't help the underflow issues with constructing the polynomials. However in the waves on a string EVP test problem, the better precision in the BC rows seems to result in the EVP finding purely real eigenvalues, instead of eigenvalues with ~roundoff complex parts. So maybe figuring out how to construct all the matrices in high precision could help with eigenvalue accuracy, etc.
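For reference, here's a minimal sketch of the kind of double-double evaluation this enables, assuming (per the xprec README) that `ddouble` arrays support standard numpy arithmetic mixed with float scalars. The recurrence shown is the standard Legendre one (Jacobi with a = b = 0), not Dedalus's actual construction:

```python
import numpy as np
import xprec  # registers the double-double "ddouble" dtype with numpy

# Three-term recurrence for Legendre polynomials P_n(x):
#   (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
# evaluated in double-double (~32 significant digit) arithmetic.
x = np.linspace(-1, 1, 5).astype(xprec.ddouble)
p_prev, p = np.ones_like(x), x.copy()  # P_0 and P_1
for n in range(1, 10):
    p_prev, p = p, ((2.0 * n + 1.0) * x * p - float(n) * p_prev) / (n + 1.0)
print(p)  # P_10(x) at the sample points
```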
@gmvasil just found this; it might be useful for writing true arbitrary-precision routines for the Jacobi stuff: https://github.com/c-f-h/flamp
It looks like ARM processors like the ones on the new MacBooks do not support `np.float128`. As a first step, we can change `np.float128` to `np.longdouble`, which will fall back to `np.float64` on architectures without `np.float128`, so the code will still run, though we probably want to emit warnings in this case, since we know it will cause problems at moderate resolutions.

Of course we could just not support ARM... but I don't think that's a great idea, since all the Macs are going in that direction and the performance is really good (I'm seeing a 3x+ speedup over my Intel MacBook when running Dedalus).
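Something like this for the fallback-plus-warning idea (a sketch, not tested on ARM; comparing machine epsilons is one way to detect that `np.longdouble` has silently degraded to double precision):

```python
import warnings
import numpy as np

# On most x86 platforms np.longdouble is 80-bit extended precision
# (exposed as np.float128); on Apple-silicon macOS it silently aliases
# np.float64. Detect the degraded case and warn.
if np.finfo(np.longdouble).eps >= np.finfo(np.float64).eps:
    warnings.warn(
        "np.longdouble is only double precision on this architecture; "
        "expect accuracy problems at moderate resolutions.",
        stacklevel=2)

internal_dtype = np.longdouble  # instead of a hard np.float128 reference
```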
Longer term, I think this means we might need a new strategy for our Jacobi recursions, etc. We might be able to find/use a software-based high-precision library like mpmath.
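For example, mpmath already provides arbitrary-precision Jacobi polynomial evaluation out of the box, which could at least serve as a reference for testing our recursions (a quick sketch):

```python
from mpmath import mp, jacobi

mp.dps = 50  # work with 50 decimal digits

# Evaluate the degree-16 Jacobi polynomial P_16^(a,b) at a test point;
# a = b = -1/2 corresponds to the Chebyshev-T family up to normalization.
val = jacobi(16, mp.mpf(-0.5), mp.mpf(-0.5), mp.mpf(0.3))
print(val)
```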