asr_process() splits the data into chunks according to available memory. However, the moving_average() call inside it operates on the covariance matrix rather than the raw data, which multiplies the memory footprint of each chunk by the number of channels; the covariance matrix then also appears to be doubled. With 32 channels, for example, the chunks end up 64x too large, which seems to cause consistent out-of-memory errors.
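If I understand the arithmetic correctly, the blow-up factor can be sketched like this (the channel count is from my setup; the nominal chunk size is a hypothetical placeholder, not a value taken from asr_process):

```python
# Sketch of the memory blow-up described above. Assumptions are marked;
# only nchan = 32 and the resulting 64x factor come from the observed behavior.
nchan = 32            # number of channels (my setup)
chunk = 10_000        # nominal chunk size in samples (hypothetical placeholder)

# moving_average() runs on the covariance representation, so each sample
# expands from an nchan-element vector to nchan*nchan entries: a factor of nchan.
cov_factor = nchan

# The covariance buffer then appears to be duplicated internally: another 2x.
double_factor = 2

blowup = cov_factor * double_factor
print(blowup)                 # -> 64 for 32 channels
print(chunk // blowup)        # samples per chunk if we naively divide by 2*nchan
```

This is also why naively dividing the chunk size by 2*nchan shrinks the chunks so drastically, as described below.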
Simply dividing the chunk size by 2*nchan yields chunks that are too small to be useful (7 samples, in my case). I also tried feeding each channel-by-channel vector into moving_average() individually, but that runs far too slowly. Perhaps there is a simpler solution I'm not seeing.