If the user passes an interleaved tensor for the conv activations, we should automatically select the best sharding scheme for the convolution given the input tensors.
The primary motivation for this work is to reduce the number of out-of-memory issues coming out of the PyTorch 2.0 ttnn integration.
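One possible shape for the auto-selection logic is a small heuristic that compares the spatial and channel extents of the activation and chooses between height, block, and width sharding, falling back if a shard would not fit. The sketch below is illustrative only, assuming hypothetical names (`ShardScheme`, `select_conv_sharding`) that are not existing ttnn APIs.

```python
# Hypothetical sketch (not the actual ttnn API): a heuristic that picks a conv
# sharding scheme from the activation shape when the caller supplies an
# interleaved tensor.
from enum import Enum, auto


class ShardScheme(Enum):
    HEIGHT = auto()   # shard over output rows (N * H_out * W_out)
    BLOCK = auto()    # shard over both rows and channels
    WIDTH = auto()    # shard over channels only


def select_conv_sharding(batch, out_h, out_w, in_channels, out_channels, num_cores):
    """Pick a sharding scheme for conv activations based on tensor proportions.

    When the spatial dimension dwarfs the channel dimension, height sharding
    spreads work best; when channels dominate, width sharding does; otherwise
    block sharding balances both axes.
    """
    spatial = batch * out_h * out_w
    channels = max(in_channels, out_channels)

    # Rough thresholds; a real implementation would also check that each
    # core's shard fits in L1 and fall back to a different scheme (or keep
    # the tensor interleaved) if it does not.
    if spatial >= num_cores * channels:
        return ShardScheme.HEIGHT
    if channels >= num_cores * spatial:
        return ShardScheme.WIDTH
    return ShardScheme.BLOCK


if __name__ == "__main__":
    # Example: ResNet-style early conv on a 64-core grid -> HEIGHT sharding.
    print(select_conv_sharding(batch=8, out_h=112, out_w=112,
                               in_channels=3, out_channels=64, num_cores=64))
```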