spectralDNS / mpiFFT4py

Parallel FFT in 3D or 2D using MPI for Python. Slab or pencil decomposition possible in 3D. Note: this repo is being deprecated in favour of mpi4py-fft (https://bitbucket.org/mpi4py/mpi4py-fft)
GNU Lesser General Public License v3.0

get_local_wavenumbermesh() returns a list #10


natschil commented 7 years ago

I've noticed that get_local_wavenumbermesh() returns a list as opposed to a numpy array (at least for scaled=False and when using one process). Is this intentional?

mikaem commented 7 years ago

Yes, this is intentional. It's sparse storage. The wavenumbers in 3D are now computed first as

Ks = np.meshgrid(kx, ky, kz, indexing='ij', sparse=True)

where Ks = [Kx, Ky, Kz], and where Kx has shape (Nx, 1, 1), Ky has shape (1, Ny, 1) and Kz has shape (1, 1, Nz). Each one may then be used as a 3D array by itself, and Numpy's broadcasting takes care of the rest. Before returning, though, the arrays are broadcast manually using

K = [np.broadcast_to(k, self.complex_shape()) for k in Ks]

This does not allocate any new data, though; the storage is still that of three 1D arrays, even though K[0].shape is now the same as complex_shape. You can still get the old dense array by taking numpy.array(K). Look at K[0].strides, it's really cool :-)
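
To see why the broadcast arrays cost no extra memory, here is a minimal standalone sketch of the same mechanism, with made-up sizes Nx, Ny, Nz and a plain tuple standing in for self.complex_shape() (in the library the last dimension would typically be halved for a real-to-complex transform):

import numpy as np

Nx, Ny, Nz = 4, 5, 6
kx = np.fft.fftfreq(Nx, 1.0 / Nx)   # integer wavenumbers along x
ky = np.fft.fftfreq(Ny, 1.0 / Ny)
kz = np.fft.fftfreq(Nz, 1.0 / Nz)

# sparse=True keeps one small 1D array per direction
Ks = np.meshgrid(kx, ky, kz, indexing='ij', sparse=True)
print([k.shape for k in Ks])   # [(4, 1, 1), (1, 5, 1), (1, 1, 6)]

# Give each array the full 3D shape without copying any data
shape = (Nx, Ny, Nz)           # stand-in for self.complex_shape()
K = [np.broadcast_to(k, shape) for k in Ks]
print(K[0].shape)              # (4, 5, 6) -- looks like a dense 3D array
print(K[0].strides)            # (8, 0, 0) -- two zero strides: the same Nx floats reused

# Arithmetic still works as if the arrays were dense
K2 = K[0]**2 + K[1]**2 + K[2]**2   # squared wavenumber magnitude, shape (4, 5, 6)

# A dense copy is made only if you ask for one explicitly
K_dense = np.array(K)
print(K_dense.shape)           # (3, 4, 5, 6)

The zero strides are the whole trick: stepping along a broadcast axis never moves the data pointer, so the full (Nx, Ny, Nz) view of Kx is backed by the same Nx numbers the original 1D array holds.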