Open LPanosTT opened 3 days ago
Instead of doing this hack in the runtime, can we lower this sharding option from Forge as an override?
Only some of the convolutions in resnet actually hit this if statement, so I'm not sure how an override would work. There's also an issue to get this modelled in ttnn: https://github.com/tenstorrent/tt-metal/issues/13107
Conv2d will shard the input tensor on its own, provided a sharding specification. By default it will attempt to use `HEIGHT_SHARDED`, but for some convolutions `BLOCK_SHARDED` or `WIDTH_SHARDED` might be required. Currently, ttnn::conv2d is not able to determine which sharding specification to use on its own. There's an issue for this: https://github.com/tenstorrent/tt-metal/issues/13107. When #818 is merged, the following metric will be used to determine whether to use `BLOCK_SHARDED` or stick with the default `HEIGHT_SHARDED`: see runtime/lib/ttnn/operations/conv/conv2d.cpp