Closed · confluence closed this issue 3 years ago
Cool, thanks for adding this! You saved me the effort! This would be a very welcome feature.
A related feature would be to make the memory usage more flexible. The main useful capability here would be loading more than one frequency channel at a time, but fewer than all of them, i.e. something between one and the full band. You could even utilise shared memory across nodes via MPI, but that might be overkill. I recall Angus said you planned to make it require less memory during the default mode (i.e. not slow mode). I'm not sure if you did that already, but that would also be useful. On Pawsey, you can get a node with 1 TB of RAM.
Let me know if you want me to request this as a separate issue.
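To make the "load N channels at a time" idea concrete, here is a minimal sketch of how the channel range could be split into fixed-size blocks so the loader reads one block at a time. The function name `iter_channel_blocks` and the block-based loading scheme are my assumptions, not anything from the existing code:

```python
def iter_channel_blocks(n_channels, block_size):
    # Yield (start, stop) index pairs so the caller can load
    # `block_size` frequency channels per iteration instead of
    # a single channel or the whole band at once.
    for start in range(0, n_channels, block_size):
        yield start, min(start + block_size, n_channels)

# e.g. 10 channels in blocks of 4 -> (0, 4), (4, 8), (8, 10)
blocks = list(iter_channel_blocks(10, 4))
```

The block size would then become a user-facing knob: small blocks for memory-constrained machines, the full band on a 1 TB Pawsey node.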
I just noticed issue #4 regarding improving memory usage.
This is the opposite of that; now we want to make memory usage worse to make it faster. :)
I was rather referring to issue #4 in reference to this statement of mine, which I wrote before seeing that issue:
> I recall Angus said you planned to make it require less memory during the default mode (i.e. not slow mode).
As discussed, this issue has been superseded by #44.
The slow mode is currently not parallelized at all, which makes it very slow. Some of the per-channel calculations are definitely independent and could be done in parallel.
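Since the per-channel calculations are independent, a process pool is one straightforward way to parallelize them. This is only a sketch under that assumption; `process_channel` is a hypothetical stand-in for whatever the slow mode actually computes per channel, not the real function:

```python
from multiprocessing import Pool

def process_channel(channel_index):
    # Hypothetical stand-in for the real per-channel calculation,
    # which is assumed to depend only on its own channel's data.
    return channel_index ** 2

def process_all_channels(n_channels, n_workers=4):
    # Distribute channels across worker processes; Pool.map
    # returns the results in channel order.
    with Pool(processes=n_workers) as pool:
        return pool.map(process_channel, range(n_channels))

results = process_all_channels(8)
```

One caveat: if each worker needs the full data set loaded, naive multiprocessing multiplies the memory footprint, so this interacts directly with the block-wise loading discussed above.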