this is just me being nitpicky, and I think I'll really be the only one that cares, but are we going for a "less memory usage, more operations" approach or a "more memory usage, less operations" approach?
for example, I'm currently implementing the Gauss-Seidel iteration method for solving linear systems, and I'm faced with the choice of keeping an extra array OR computing a matrix-vector product. the former takes no extra FLOPs but adds a factor of n to space complexity, while the latter takes n^2 extra FLOPs per iteration but uses no extra memory.
I could write both methods, of course, and have different naming conventions. but yeah.
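to make the trade-off concrete, here's a rough sketch of what I mean (assuming NumPy and a diagonally dominant A; the function names and tolerance handling are just placeholders, not a proposal for our actual API). the first variant keeps a copy of the previous iterate and checks the change between sweeps; the second checks the residual b - A @ x instead, which needs no extra array but costs an extra mat-vec each sweep:

```python
import numpy as np

def gauss_seidel_copy(A, b, x0, tol=1e-10, max_iter=1000):
    """Extra-array variant: convergence is judged by the change between
    successive iterates, so we keep a copy of the previous x.
    O(n) extra memory, no extra FLOPs per sweep."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()  # the extra length-n array
        for i in range(n):
            # sum over already-updated entries (j < i) and old entries (j > i)
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

def gauss_seidel_residual(A, b, x0, tol=1e-10, max_iter=1000):
    """No-extra-array variant: convergence is judged by the residual,
    which costs an extra O(n^2) mat-vec per sweep but no extra memory."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x, np.inf) < tol:  # the extra mat-vec
            break
    return x
```

both give the same iterates, of course; they only differ in how the stopping criterion is paid for.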