Currently, when working with large arrays, the BLMM code uses NumPy memory maps. This is extremely fast at first, but as more memory maps are read in and out, the code appears to slow down, even though the maps are flushed and removed from memory. This appears to be a common problem (it is discussed frequently on Stack Overflow), so it may be worth trying alternative packages.
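For reference, the access pattern in question looks roughly like the sketch below (file name, dtype, and shape are placeholders, not BLMM's actual values): a file-backed array is created, then repeatedly re-opened, read, flushed, and deleted.

```python
import os
import numpy as np

# Placeholder file standing in for one of BLMM's large design matrices.
fname = 'example_blmm_block.dat'

# Create a file-backed array and write through the mapping.
arr = np.memmap(fname, dtype='float64', mode='w+', shape=(1000, 100))
arr[:] = 1.0
arr.flush()       # push dirty pages to disk
del arr           # drop the reference, releasing the map...

# Re-open read-only and read a block, as happens repeatedly during computation.
arr = np.memmap(fname, dtype='float64', mode='r', shape=(1000, 100))
block_sum = arr[:100].sum()   # read one block into memory
del arr           # ...repeated open/read/close cycles are where the slowdown shows up

os.remove(fname)
```

Each open/close cycle is cheap in isolation; the reported slowdown only emerges after many such cycles on very large files.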
I have tried h5py but found its performance was worse than the numpy memory map. As all of these objects are built on the Python mmap object, it may be best to wait until better support exists for this before trying out any other packages.
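To illustrate the shared foundation mentioned above: both numpy.memmap and h5py ultimately rest on OS-level memory mapping, which the standard library exposes directly as the mmap module. A minimal sketch (file name is a placeholder):

```python
import mmap
import os

fname = 'example_mmap_demo.dat'

# Create a one-page file of zeros to map.
with open(fname, 'wb') as f:
    f.write(b'\x00' * 4096)

with open(fname, 'r+b') as f:
    mm = mmap.mmap(f.fileno(), 0)  # map the whole file
    mm[:5] = b'hello'              # write through the mapping
    mm.flush()                     # push dirty pages to disk
    first5 = bytes(mm[:5])         # read back through the mapping
    mm.close()

os.remove(fname)
```

Because every higher-level package funnels through this same mechanism, swapping packages may not sidestep whatever is causing the slowdown at the mmap level.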
This issue is, however, low priority, as the "slowing down" mentioned above is only observed for extremely large designs (likely much larger than anything the average user would ever run).