MothNik opened 3 months ago
The last commit currently does not change the internal logic at all.
However, it already forms the foundation of future validations, e.g., by means of the condition number of the matrix.
Right now, `pentapy` never assesses, or allows the user to assess, the quality of the solves: the result not being all `np.nan` does not mean that the solve was meaningful. If the user wants to have the results validated, one could introduce a `validate` flag in the Python high-level interface of `solve`.
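To make this concrete, here is a minimal sketch of what such an opt-in validation could look like. Everything here is hypothetical: the `validate` flag, the `solve_validated` wrapper, and the condition-number threshold are illustrations, not part of the current API, and plain NumPy stands in for the pentadiagonal solver so the sketch is self-contained.

```python
import numpy as np


def solve_validated(mat, rhs, validate=False, cond_max=1e12):
    # Hypothetical wrapper; np.linalg.solve stands in for the real solver.
    x = np.linalg.solve(mat, rhs)
    if validate:
        if np.isnan(x).any():
            raise ValueError("solve produced NaNs")
        # A NaN-free result can still be meaningless, so also check the
        # conditioning of the matrix (hypothetical threshold).
        if np.linalg.cond(mat) > cond_max:
            raise ValueError("matrix too ill-conditioned for a reliable solve")
    return x


mat = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
x = solve_validated(mat, rhs, validate=True)
```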
The C-level interface was broken anyway, so I tried to keep it flexible for all given future scenarios. Right now, the validation logic does not trigger anything, so this buys flexibility in the future at no cost today.
The idea was taken from LAPACK: it returns an `info` value together with the solution, which is usually `0` to indicate success, while all other values encode certain warnings or even errors.
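This convention is visible directly in SciPy's low-level LAPACK wrappers, e.g. `?gesv` (a well-conditioned 2x2 system is used here purely for illustration):

```python
import numpy as np
from scipy.linalg import lapack

a = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([[9.0], [8.0]])

# LAPACK returns an `info` flag alongside the result:
# 0 means success; nonzero values encode warnings or errors
# (e.g., a positive value signals an exactly singular factor).
lu, piv, x, info = lapack.dgesv(a, b)
assert info == 0
```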
To be able to cope with #28 and #29 in the future without compromising anything on the solvers' side, I rearranged the Cython core (not the interface) as follows:

- `enum`s were used to provide additional safety for value comparisons and also to make the code clearer through explicit named values.
- The `c_penta_solver1` and `c_penta_solver2` interfaces now call two common functions that factorize `A` with the respective PTRANS factorization (I for the first, II for the second). This allowed the removal of the redundant logic (factorize -> solve) that was implemented in both the `c_penta_solver1` and `c_penta_solver2` interfaces, thereby removing one critical potential point of failure.
- These two common functions are now responsible for all memory allocation, and they return the freshly allocated memoryviews of the factorized matrix and the result, respectively. Before that, memory allocation was handled via `np.empty` in both the `c_penta_solver1` and `c_penta_solver2` interfaces.
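In plain Python terms, the rearrangement follows the pattern below. All names are illustrative stand-ins, not the actual Cython symbols, and dense NumPy arrays stand in for the memoryviews and the PTRANS factorizations; the point is only the shape of the design: one common core owns allocation, and the per-variant interfaces are thin dispatchers.

```python
import numpy as np


def _core_factorize_solve(mat, rhs, ptrans):
    """Common core: owns all memory allocation and returns the freshly
    allocated factorization and solution (placeholders here)."""
    factors = mat.copy()            # stand-in for the PTRANS-I/II factorization
    result = np.linalg.solve(mat, rhs)
    return factors, result


def penta_solver1(mat, rhs):
    # Thin interface: dispatch with PTRANS-I, cast the returned view to an array.
    _, result = _core_factorize_solve(mat, rhs, ptrans=1)
    return np.asarray(result)


def penta_solver2(mat, rhs):
    # Thin interface: dispatch with PTRANS-II.
    _, result = _core_factorize_solve(mat, rhs, ptrans=2)
    return np.asarray(result)


# small banded test system (diagonally dominant, hence safely solvable)
mat = np.diag([4.0] * 4) + np.diag([1.0] * 3, k=1) + np.diag([1.0] * 3, k=-1)
rhs = np.ones(4)
x = penta_solver1(mat, rhs)
```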
This in turn comes with the added bonus that other interface functions (e.g., for #28 and #29) can easily be implemented on top and just accept the fresh memoryviews that are returned by the common functions. All that's required for such an interface function is that it calls the common function and casts the returned memoryview to an Array. With these changes, a follow-up computation (like the determinant), or just a solve of another right-hand side, can be triggered from Python and Cython after a factorization was obtained.
This is probably a bit too abstract (I barely know how to phrase it), so please let me provide the following analogy.

### Analogy Example
In basic principle, this is identical to how the banded Cholesky decomposition of SciPy works. For solving a system, there is `scipy.linalg.solveh_banded`. However, there is also `scipy.linalg.cholesky_banded` that can perform a banded Cholesky factorization, and `scipy.linalg.cho_solve_banded` that can take the banded Cholesky factorization to solve one or more systems.
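For reference, here is the SciPy pair side by side. A symmetric positive-definite tridiagonal matrix is used purely for illustration; both paths should produce the same solution.

```python
import numpy as np
from scipy.linalg import cho_solve_banded, cholesky_banded, solveh_banded

# SPD tridiagonal matrix in SciPy's upper banded storage:
# row 0 holds the superdiagonal (first entry unused), row 1 the main diagonal.
n = 5
ab = np.zeros((2, n))
ab[0, 1:] = -1.0
ab[1, :] = 2.0
b = np.arange(1.0, n + 1.0)

# One shot: factorize and solve in a single call.
x_one_shot = solveh_banded(ab, b)

# Two steps: factorize once, then reuse the factorization for any
# number of right-hand sides.
cb = cholesky_banded(ab)                       # banded Cholesky factor
x_two_step = cho_solve_banded((cb, False), b)  # False = upper storage

assert np.allclose(x_one_shot, x_two_step)
```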
As of now, this PR basically implements the `scipy.linalg.solveh_banded` analogy for `pentapy` (factorize and solve in one go), but on the Cython side I arranged everything such that it's super simple to also add the analogies of `scipy.linalg.cholesky_banded` (factorize only) and `scipy.linalg.cho_solve_banded` (solve a system using a given factorization). But this would probably explode the scope of this already big PR 🤯
@MuellerSeb Sorry for all these iterations. It took me some time to get the right strategy for Cython, the GIL, memory allocation, the parallelization, and a testing framework (all tests still pass ✅).
But now I feel that this PR is ready for review and I would highly appreciate your feedback 😄 I really wanted to keep everything flexible for future adjustments. It might be true that `pentapy` is "only" there for solving pentadiagonal systems, so the number of features will never be as mind-blowing as SciPy's, but playing with the factorizations (like with the determinant) is a common step in mathematical problems and I really wanted to pave the way for that 🛣️
⚠️ This pull request can fully replace #26 which can basically be closed ⚠️
### Update June 10, 2024
I temporarily converted this to a draft because the decisions for #28 heavily affect this PR and might involve another change in the Cython-level API.
### Changes
This pull request is an improved version of #26, so please refer to the basic changes in there. On top of these changes, this pull request

- adds a `tools` module,
- changes `solver.pyxd` so that the solvers only accept C-contiguous Arrays. This will not be backwards compatible because users might already be passing Fortran-contiguous Arrays (maybe without even knowing),
- updates `setup.py` to include OpenMP,
- adds a `workers` argument to the Python interface.

Since all that is a breaking change and the Cython interface is not backwards-compatible, I suggest that this is a major version jump from 1 to 2 (maybe as an $\alpha$-version of it?). Again, I tried to update the changelog, but it might be that you still need to look over it.
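The C-contiguity requirement is the part most likely to bite existing users. A small, pentapy-independent illustration of the issue and the standard NumPy fix:

```python
import numpy as np

# A Fortran-ordered matrix, e.g. the result of a transposed computation.
mat = np.asfortranarray(np.arange(25, dtype=np.float64).reshape(5, 5))
assert not mat.flags["C_CONTIGUOUS"]

# np.ascontiguousarray copies only when needed; C-contiguous input
# passes through unchanged, so calling it is always safe.
mat_c = np.ascontiguousarray(mat)
assert mat_c.flags["C_CONTIGUOUS"]
```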
### Tests
- The installation and parallelisation were tested on Windows 11 and Ubuntu (WSL). ✅
- Tests now cover serial and parallel solves, but they have to be run with `-x` now to prevent interfering with the multithreading. ✅ ⚠️ Note that `-x` cancels the process at the first failure, which is mandatory for this delicate topic where we deal with pointers. ⚠️
- Depending on the OS, it might be that the doctests of `tests/util_funcs` fail because the Array output includes the `dtype` on Windows while it does not on Linux. ❌
- Given that the C implementation is already very fast in serial, the parallelization does not cause a quantum leap. With 8 threads on a relatively old laptop (there it is, the grain of salt 🧂), I observe a threefold speedup for huge systems (1,000 x 1,000 with 10,000 right-hand sides):
- On massively parallel systems, this will give better speedups. However, it does not negatively affect the serial solves when `workers=1`: