hubertlu-tw opened this issue 4 years ago
The MXU needs dot products of length at least 128 for full throughput; for a convolution, that's the product of all kernel size dimensions and the input feature count. In your case the dot product length is only 3, so you can use at best 3/128 of the MXU flops (and likely even less since you have low arithmetic intensity). Unrolled elementwise computations, as in your first snippet, are typically a better way to implement finite difference/stencil convolutions.
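As a rough illustration of that arithmetic, assuming JAX's default conv layout (the shapes here are hypothetical, not taken from the snippets below), a width-3 kernel with a single input channel gives a contraction length of 3:

```python
import numpy as np
import jax.numpy as jnp
from jax import lax

x = jnp.zeros((1, 1, 1024))   # (batch, in_features, width)
k = jnp.zeros((1, 1, 3))      # (out_features, in_features, kernel_width)
y = lax.conv_general_dilated(x, k, window_strides=(1,), padding="SAME")

# Dot-product (contraction) length = product of kernel sizes * input feature count.
contraction_length = int(np.prod(k.shape[2:])) * k.shape[1]   # 3 * 1 = 3
print(contraction_length / 128)   # at best ~2.3% of peak MXU throughput
```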
I'm not sure how well it works with JAX on Cloud TPU right now, but the Cloud TPU Profiler can be useful in figuring out how XLA compiles operations for the hardware, and how well they utilize MXU flops and memory bandwidth.
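One way to capture a trace that the profiler tooling can inspect is shown below; this is a minimal sketch assuming a reasonably recent jax version, and the exact Cloud TPU workflow may differ:

```python
import jax
import jax.numpy as jnp

@jax.jit
def step(u):
    return u * 2.0  # placeholder for the actual stencil update

u = jnp.ones((1024, 1024))
step(u).block_until_ready()  # compile once, outside the trace

# Write a trace to a log directory that TensorBoard's profiler plugin can read.
with jax.profiler.trace("/tmp/jax-trace"):
    for _ in range(10):
        u = step(u)
    u.block_until_ready()
```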
Hi, I am currently investigating the possibility of using JAX for scientific computation on TPUs. The excellent JAX tutorial at https://github.com/google/jax/blob/master/cloud_tpu_colabs/Wave_Equation.ipynb helped me quickly understand how to use JAX and what its advantages are. However, one question I have is why convolution-based ops perform much worse than element-wise ops for stencil computations on Cloud TPU.
To take advantage of the compute power of the MXU in the TPU, I used two 1D convolution ops for the stencil computation, analogous to the element-wise version. The following snippets implement a 5-point stencil computation for 2D problems.
Element-wise ops:
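A minimal sketch of an unrolled element-wise 5-point Laplacian (the exact shapes, boundary handling, and grid spacing here are assumptions):

```python
import jax.numpy as jnp
from jax import jit

@jit
def laplacian_elementwise(u):
    # u: (ny, nx). Unrolled 5-point stencil on interior points via slicing.
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1])
    return jnp.pad(lap, 1)  # zero-pad back to the original (ny, nx) shape
```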
Convolution-based ops:
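A sketch of the convolution-based variant, written as two 1D convolutions with kernel [1, -2, 1], one along each axis, whose sum gives the same 5-point stencil (the layout and dimension handling are assumptions):

```python
import jax.numpy as jnp
from jax import jit, lax

kernel = jnp.array([1.0, -2.0, 1.0])

@jit
def laplacian_conv(u):
    # u: (ny, nx) -> add batch and channel dims: (N=1, C=1, ny, nx).
    x = u[None, None, :, :]
    kx = kernel.reshape(1, 1, 1, 3)   # 1D convolution along the x axis
    ky = kernel.reshape(1, 1, 3, 1)   # 1D convolution along the y axis
    dxx = lax.conv_general_dilated(x, kx, window_strides=(1, 1), padding="SAME")
    dyy = lax.conv_general_dilated(x, ky, window_strides=(1, 1), padding="SAME")
    return (dxx + dyy)[0, 0]
```

Note that each of these convolutions contracts over only 3 * 1 = 3 elements, which is exactly the situation described in the reply above: the MXU is badly underutilized.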