Closed · vetschn closed this 2 months ago
As we are supporting `to_dense()`, I think we should also support `to_sparse()`. Here are my reasons why:

- It is needed for the `Inv` method of the GF solvers.
- For the GPU support, we may want to densify the entire matrix on the GPU instead of on the CPU.

(I know that this dense solver is a bit of a debug thing, but I feel like if we keep it, it should be supported and implemented correctly.)

I managed to mostly handle the higher stack dimensions. However, there is a very weird bug in the distributed transposition somewhere that I haven't been able to pin down yet. Most of the time, `.dtranspose()` works beautifully, but for some seemingly random combinations of `stack_shape` and `comm.size`, the data on certain ranks is wrong.
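To make the `to_sparse()` idea concrete, here is a minimal numpy sketch of a dense/sparse round trip over stacked matrices. The function names and signatures are hypothetical, chosen for this sketch, and are not the actual API of this project:

```python
import numpy as np

def to_dense(data, rows, cols, shape):
    """Scatter the nonzero entries into a stacked dense array.

    data has shape (*stack_shape, nnz); the result has shape
    (*stack_shape, *shape). Hypothetical sketch, not the real API.
    """
    dense = np.zeros(data.shape[:-1] + shape, dtype=data.dtype)
    dense[..., rows, cols] = data
    return dense

def to_sparse(dense, rows, cols):
    """Inverse of to_dense: gather the entries at the known sparsity
    pattern back into the flat (*stack_shape, nnz) representation."""
    return dense[..., rows, cols]

# Round trip over a (2, 3)-shaped stack with 4 nonzeros in a 5x5 matrix.
rows = np.array([0, 1, 3, 4])
cols = np.array([2, 0, 3, 1])
data = np.random.default_rng(42).standard_normal((2, 3, 4))
dense = to_dense(data, rows, cols, (5, 5))
assert np.array_equal(to_sparse(dense, rows, cols), data)
```

Because the sparsity pattern is fixed and known, `to_sparse()` is just a fancy-indexing gather, which works the same on the GPU with the cupy backend.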
The GPU-aware implementation is working as well now, i.e., all the datastructure test cases pass with both the cupy and numpy backends. The data-corruption problem mentioned above is not fixed yet, and I need to add a couple more tests.
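One way to hunt for the bad `(stack_shape, comm.size)` combinations is a pure-numpy emulation of the stack/nnz redistribution, checked as a round trip over many shapes. This is a sketch of the general all-to-all pattern under my own assumptions, not the actual MPI implementation:

```python
import numpy as np

def emulated_dtranspose(local_blocks, split_axis, cat_axis):
    """Emulate the distributed transposition: every 'rank' splits its
    local block along split_axis, an all-to-all exchanges the pieces,
    and each rank concatenates what it received along cat_axis."""
    size = len(local_blocks)
    send = [np.array_split(block, size, axis=split_axis) for block in local_blocks]
    return [
        np.concatenate([send[i][j] for i in range(size)], axis=cat_axis)
        for j in range(size)
    ]

# Round trip: stack-distributed -> nnz-distributed -> stack-distributed,
# including shapes that do not divide evenly by the "comm size".
rng = np.random.default_rng(0)
for stack, nnz, comm_size in [(6, 10, 3), (7, 11, 4), (5, 13, 2)]:
    global_data = rng.standard_normal((stack, nnz))
    local = np.array_split(global_data, comm_size, axis=0)
    nnz_dist = emulated_dtranspose(local, split_axis=1, cat_axis=0)
    back = emulated_dtranspose(nnz_dist, split_axis=0, cat_axis=1)
    assert all(np.array_equal(a, b) for a, b in zip(back, local))
```

Running the real `.dtranspose()` against such an emulation for the same parameter sweep could localize which ranks receive wrong data.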
Everything should work now; the last step is increasing test coverage.
One thing to note: currently, as we have no GPU-aware MPI, `.dtranspose()` does not work with the cupy backend. Calling it when using cupy will raise segmentation faults (!)

Edit: I made it so that dsbsparse checks whether the MPI backend is GPU-aware.
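For reference, a crude heuristic for such a check (my own sketch, not the dsbsparse implementation): Open MPI exposes `MPIX_Query_cuda_support` in C, but from Python one fallback is to inspect the string returned by `mpi4py.MPI.Get_library_version()`. Both the marker list and the environment-variable override below are hypothetical:

```python
import os

def mpi_is_gpu_aware(library_version: str) -> bool:
    """Guess whether the MPI library is CUDA-aware from its version string.

    library_version would come from mpi4py.MPI.Get_library_version().
    This string heuristic is unreliable across MPI builds; the override
    variable name is made up for this sketch.
    """
    if os.environ.get("ASSUME_GPU_AWARE_MPI") == "1":
        return True
    return any(marker in library_version for marker in ("CUDA", "cuda"))
```

Falling back to a host-memory staging path (or raising a clear error) when this returns `False` avoids the segmentation faults described above.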
Attention: Patch coverage is 81.21547% with 68 lines in your changes missing coverage. Please review.
Please upload report for BASE (`dev@67de316`). Learn more about missing BASE report.
Package | Line Rate | Health |
---|---|---|
. | 100% | ✔ |
datastructures | 81% | ✔ |
greens_function_solver | 22% | ❌ |
obc | 92% | ✔ |
utils | 64% | ➖ |
Summary | 70% (452 / 650) | ➖ |
This should close #27 and related issues.