This project lets you run PyTorch code on Tenstorrent hardware.

The table below summarizes the results of running various ML models through our TTNN compiler. For each model, we track whether the run was successful, the number of operations before and after conversion, the number of `to_device` and `from_device` operations, performance metrics, and accuracy.
Model | Status | Torch Ops Before (Unique Ops) | Torch Ops Remain (Unique Ops) | To/From Device Ops | Original Run Time (ms) | Compiled Run Time (ms) | Accuracy (%) |
---|---|---|---|---|---|---|---|
[Autoencoder (conv)](<docs/models/Autoencoder (conv)>) | 🚧 | 9 (3) | 5 (2) | 1 | 1489.96 | 3542.07 | 100.0 |
[Autoencoder (conv)-train](<docs/models/Autoencoder (conv)-train>) | 🚧 | 24 (7) | 21 (6) | 0 | 2324.91 | 992.7 | 100.0 |
[Autoencoder (linear)](<docs/models/Autoencoder (linear)>) | ✅ | 22 (3) | 0 (0) | 0 | 1392.5 | 282.69 | 100.0 |
[Autoencoder (linear)-train](<docs/models/Autoencoder (linear)-train>) | 🚧 | 104 (8) | 26 (3) | 0 | 2309.23 | 4840.16 | 100.0 |
BERT | ✅ | 1393 (21) | 0 (0) | 26 | 99971.5 | 36283.68 | 99.69 |
Bloom | 🚧 | 1407 (29) | 3 (1) | 0 | 41618.3 | 52495.82 | 39.93 |
CLIP | 🚧 | 1396 (30) | 15 (13) | 3 | 4926.66 | 20537.82 | 94.18 |
CLIP-train | ❌ | 3943 (44) | N/A | N/A | 29515.5 | N/A | N/A |
DETR | ❌ | 1668 (41) | N/A | N/A | 108137 | N/A | N/A |
DPR | 🚧 | 720 (22) | 15 (3) | 4 | 30667.6 | 15553.39 | 99.29 |
FLAN-T5 | ❌ | 20106 (38) | N/A | N/A | 5213.06 | N/A | N/A |
Falcon | 🚧 | 73 (7) | 4 (3) | 0 | 138173 | 44567.74 | 100.0 |
GLPN-KITTI | ❌ | 3074 (30) | N/A | N/A | 94235.3 | N/A | N/A |
GPT-2 | 🚧 | 748 (31) | 69 (12) | 2 | 16065.3 | 16301.19 | 100.0 |
GPTNeo | ❌ | 2761 (36) | N/A | N/A | 16257.7 | N/A | N/A |
[Hand Landmark](<docs/models/Hand Landmark>) | ❌ | N/A | N/A | N/A | 7882.95 | N/A | N/A |
HardNet | 🚧 | 245 (10) | 171 (4) | 2 | 6151.84 | 20249.35 | 5.37 |
HardNet-train | 🚧 | 867 (21) | 582 (13) | 0 | 13486.2 | 55927.33 | 100.0 |
Llama | 🚧 | 42 (12) | 3 (2) | 0 | 375596 | 169942.51 | 100.0 |
MLPMixer | 🚧 | 253 (11) | 25 (2) | 0 | 5704.78 | 10042.08 | 99.97 |
MLPMixer-train | 🚧 | 616 (19) | 127 (8) | 0 | 18127.8 | 33132.84 | 100.0 |
Mnist | 🚧 | 14 (8) | 2 (1) | 1 | 3817.27 | 7914.6 | 98.48 |
Mnist-train | 🚧 | 46 (15) | 20 (8) | 0 | 3630.66 | 5525.71 | 100.0 |
MobileNetSSD | ❌ | 575 (34) | N/A | N/A | 867.43 | N/A | N/A |
MobileNetV2 | 🚧 | 154 (9) | 104 (2) | 0 | 999.08 | 26590.85 | 4.39 |
OPT | ❌ | 4073 (32) | N/A | N/A | 31556.6 | N/A | N/A |
[OpenPose V2](<docs/models/OpenPose V2>) | 🚧 | 155 (7) | 92 (3) | 0 | 2783.07 | 6283.54 | 93.11 |
[OpenPose V2-train](<docs/models/OpenPose V2-train>) | 🚧 | 523 (14) | 444 (11) | 0 | 9812.92 | 21020.2 | 100.0 |
[Perceiver IO](<docs/models/Perceiver IO>) | ✅ | 1532 (21) | 0 (0) | 30 | 53539.5 | 49574.94 | 99.95 |
ResNet18 | 🚧 | 70 (9) | 40 (2) | 1 | 2057.31 | 6835.19 | 30.71 |
ResNet18-train | 🚧 | 241 (19) | 196 (13) | 0 | 6678.09 | 15061.53 | 100.0 |
ResNet50 | 🚧 | 176 (9) | 106 (2) | 1 | 4375.74 | 17995.03 | 4.56 |
ResNet50-train | 🚧 | 616 (19) | 523 (13) | 0 | 14499.8 | 29465.36 | 100.0 |
RoBERTa | 🚧 | 719 (21) | 2 (2) | 16 | 28009.2 | 38253.79 | 98.64 |
SegFormer | 🚧 | 768 (27) | 95 (9) | 8 | 37472.6 | 45290.55 | 99.54 |
SegFormer-train | ❌ | 1872 (40) | N/A | N/A | 78406.7 | N/A | N/A |
SqueezeBERT | ✅ | 16 (9) | 0 (0) | 4 | 3955.83 | 5946.72 | 100.0 |
[Stable Diffusion V2](<docs/models/Stable Diffusion V2>) | ❌ | 1883 (32) | N/A | N/A | 836460 | N/A | N/A |
U-Net | 🚧 | 68 (6) | 45 (3) | 4 | 59473.7 | 76242.75 | 100.0 |
U-Net-train | 🚧 | 236 (15) | 205 (12) | 0 | 108687 | 119046.55 | 100.0 |
Unet-brain | 🚧 | 68 (6) | 45 (3) | 4 | 60200.4 | 60821.92 | N/A |
Unet-brain-train | 🚧 | 236 (15) | 205 (12) | 0 | 106677 | 111016.67 | 100.0 |
Unet-carvana | 🚧 | 67 (5) | 45 (3) | 4 | 84304.3 | 100408.9 | 100.0 |
Unet-carvana-train | 🚧 | 232 (13) | 202 (11) | 0 | 171999 | 180497.39 | 100.0 |
ViLT | 🚧 | 55 (18) | 27 (11) | 3 | 21129.6 | 23886.35 | 87.86 |
Whisper | ❌ | 4294 (19) | N/A | N/A | 245675 | N/A | N/A |
XGLM | 🚧 | 1459 (30) | 38 (10) | 1 | 18040.7 | 41148.28 | 95.48 |
YOLOS | 🚧 | 966 (28) | 51 (7) | 0 | 13332.1 | 31968.49 | 97.5 |
YOLOv3 | 🚧 | 268 (10) | 159 (6) | 0 | 174021 | 162746.84 | 99.99 |
YOLOv5 | 🚧 | 3 (3) | 2 (2) | 0 | 22031.6 | 25693.19 | 100.0 |
albert/albert-base-v2 | 🚧 | 791 (21) | 14 (3) | 3 | 3011.28 | 23901.44 | 68.8 |
albert/albert-base-v2-classification | ✅ | 779 (21) | 0 (0) | 16 | 2873.72 | 9898.39 | 99.96 |
albert/albert-large-v2 | 🚧 | 1547 (21) | 26 (3) | 3 | 4794.46 | 22593.55 | 24.89 |
albert/albert-xlarge-v2 | 🚧 | 1547 (21) | 26 (3) | 3 | 15835.1 | 25157.86 | 51.05 |
albert/albert-xxlarge-v2 | 🚧 | 791 (21) | 38 (4) | 3 | 42833.7 | 30896.23 | 22.25 |
codegen | ❌ | 9237 (37) | N/A | N/A | 15937.3 | N/A | N/A |
densenet121 | 🚧 | 432 (10) | 306 (4) | 1 | 3178.49 | 22490.52 | 84.37 |
densenet161 | 🚧 | 572 (10) | 407 (5) | 0 | 8559.66 | 38528.91 | 76.93 |
densenet169 | 🚧 | 600 (10) | 426 (4) | 1 | 3731.38 | 24735.83 | 80.86 |
densenet201 | 🚧 | 712 (10) | 506 (4) | 1 | 4828.75 | 47292.4 | 23.71 |
distilbert-base-uncased | 🚧 | 367 (17) | 13 (3) | 8 | 10440.4 | 12878.42 | 99.7 |
dla34.in1k | 🚧 | 135 (9) | 81 (3) | 6 | 6919.15 | 18316.31 | 8.94 |
dla34.in1k-train | 🚧 | 469 (18) | 378 (13) | 0 | 13506.6 | 22493.08 | 100.0 |
ese_vovnet19b_dw.ra_in1k | 🚧 | 111 (12) | 71 (4) | 0 | 3032.98 | 17526.69 | 2.56 |
ese_vovnet19b_dw.ra_in1k-train | 🚧 | 360 (25) | 272 (14) | 0 | 5309.5 | 19733.01 | 100.0 |
facebook/deit-base-patch16-224 | 🚧 | 685 (17) | 14 (3) | 0 | 16993.5 | 22670.37 | 98.19 |
facebook/deit-base-patch16-224-train | 🚧 | 1854 (27) | 212 (12) | 0 | 74134.4 | 33761.48 | 100.0 |
ghostnet_100.in1k | 🚧 | 515 (14) | 255 (4) | 0 | 1185.52 | 24067.63 | 53.78 |
ghostnet_100.in1k-train | 🚧 | 1468 (33) | 1036 (22) | 0 | 6093.75 | 36399.9 | 100.0 |
ghostnetv2_100.in1k | 🚧 | 809 (20) | 397 (10) | 0 | 1949.32 | 36935.57 | 99.93 |
ghostnetv2_100.in1k-train | ❌ | 2126 (41) | N/A | N/A | 6191.66 | N/A | N/A |
googlenet | 🚧 | 214 (15) | 137 (4) | 0 | 1892.64 | 18383.9 | 83.93 |
hrnet_w18.ms_aug_in1k | 🚧 | 1488 (14) | 730 (7) | 0 | 5862.75 | 42134.2 | 8.39 |
hrnet_w18.ms_aug_in1k-train | ❌ | 4277 (24) | N/A | N/A | 15528.3 | N/A | N/A |
inception_v4.tf_in1k | 🚧 | 495 (11) | 339 (5) | 2 | 13860.9 | 39324.46 | 84.05 |
inception_v4.tf_in1k-train | 🚧 | 1702 (24) | 1405 (15) | 0 | 40838.7 | 82767.61 | 100.0 |
microsoft/beit-base-patch16-224 | 🚧 | 793 (21) | 38 (5) | 0 | 12000.3 | 11277.25 | 98.96 |
microsoft/beit-base-patch16-224-train | 🚧 | 2229 (34) | 274 (17) | 44 | 76434.1 | 38248.47 | 100.0 |
microsoft/beit-large-patch16-224 | 🚧 | 1573 (21) | 74 (5) | 0 | 47478.3 | 32925.27 | 81.9 |
microsoft/beit-large-patch16-224-train | 🚧 | 4437 (34) | 538 (17) | 92 | 531050 | 90227.68 | 100.0 |
mixer_b16_224.goog_in21k | 🚧 | 356 (11) | 1 (1) | 0 | 14887.4 | 13215.62 | 50.34 |
mixer_b16_224.goog_in21k-train | 🚧 | 959 (18) | 102 (7) | 0 | 58528.8 | 28017.77 | 100.0 |
mobilenet_v2 | 🚧 | 154 (9) | 104 (2) | 0 | 815.02 | 22158.14 | 4.39 |
mobilenet_v3_large | 🚧 | 188 (11) | 129 (3) | 0 | 763.33 | 13727.76 | 12.08 |
mobilenet_v3_small | 🚧 | 158 (11) | 105 (3) | 0 | 457.83 | 16083.3 | 27.12 |
mobilenetv1_100.ra4_e3600_r224_in1k | 🚧 | 85 (7) | 54 (2) | 0 | 1602.11 | 14626.67 | 69.45 |
mobilenetv1_100.ra4_e3600_r224_in1k-train | 🚧 | 231 (15) | 192 (9) | 0 | 3565.23 | 20285.83 | 100.0 |
regnet_x_16gf | 🚧 | 235 (8) | 142 (2) | 0 | 14831.2 | 30015.9 | 70.71 |
regnet_x_1_6gf | 🚧 | 195 (8) | 118 (2) | 0 | 1901.95 | 12007.59 | 99.97 |
regnet_x_32gf | 🚧 | 245 (8) | 148 (2) | 0 | 28804.1 | 40075.64 | 78.09 |
regnet_x_3_2gf | 🚧 | 265 (8) | 160 (2) | 0 | 3364.35 | 14019.31 | 99.96 |
regnet_x_400mf | 🚧 | 235 (8) | 142 (2) | 0 | 844.77 | 9243.87 | 9.37 |
regnet_x_800mf | 🚧 | 175 (8) | 106 (2) | 0 | 1177.62 | 12757.65 | 99.96 |
regnet_x_8gf | 🚧 | 245 (8) | 148 (2) | 0 | 7962.29 | 17959.87 | 99.98 |
regnet_y_128gf | 🚧 | 447 (10) | 226 (2) | 0 | 481865 | 504060.91 | 3.94 |
regnet_y_16gf | 🚧 | 303 (10) | 154 (2) | 0 | 14927.5 | 39269.93 | 7.17 |
regnet_y_1_6gf | 🚧 | 447 (10) | 226 (2) | 0 | 2019.98 | 19631.85 | 99.91 |
regnet_y_32gf | 🚧 | 335 (10) | 170 (2) | 0 | 29455.5 | 49291.43 | -0.94 |
regnet_y_3_2gf | 🚧 | 351 (10) | 178 (2) | 0 | 3397.27 | 19838.37 | 99.95 |
regnet_y_400mf | 🚧 | 271 (10) | 138 (2) | 0 | 780.75 | 20414.85 | 0.25 |
regnet_y_800mf | 🚧 | 239 (10) | 122 (2) | 0 | 1690.96 | 20146.78 | 99.88 |
regnet_y_8gf | 🚧 | 287 (10) | 146 (2) | 0 | 8322.62 | 22533.33 | 99.96 |
resnet101 | 🚧 | 346 (9) | 208 (2) | 1 | 8147.66 | 17842.33 | 99.97 |
resnet152 | 🚧 | 516 (9) | 310 (2) | 1 | 10884.8 | 37512.57 | 76.27 |
resnet18 | 🚧 | 70 (9) | 40 (2) | 1 | 2263.44 | 11949.56 | 14.68 |
resnet34 | 🚧 | 126 (9) | 72 (2) | 1 | 4131.55 | 7171.04 | 21.92 |
resnet50 | 🚧 | 176 (9) | 106 (2) | 1 | 4388.29 | 11753.73 | 4.56 |
resnext101_32x8d | 🚧 | 346 (9) | 208 (2) | 1 | 15355.6 | 26566.44 | 93.29 |
resnext101_64x4d | 🚧 | 346 (9) | 208 (2) | 1 | 14505.3 | 25628.04 | 70.29 |
resnext50_32x4d | 🚧 | 176 (9) | 106 (2) | 1 | 4417.35 | 9683.42 | 78.82 |
retinanet_resnet50_fpn | ❌ | 1107 (32) | N/A | N/A | 2904.36 | N/A | N/A |
retinanet_resnet50_fpn_v2 | ❌ | 617 (33) | N/A | N/A | 2781.8 | N/A | N/A |
speecht5-tts | 🚧 | 862 (21) | 7 (4) | 2 | 54529.8 | 59186.8 | N/A |
ssd300_vgg16 | ❌ | 387 (32) | N/A | N/A | 3424.1 | N/A | N/A |
ssdlite320_mobilenet_v3_large | ❌ | 575 (34) | N/A | N/A | 570.09 | N/A | N/A |
swin_b | 🚧 | 1898 (30) | 207 (12) | 13 | 14240 | 68286.9 | 5.02 |
swin_s | 🚧 | 1898 (30) | 207 (12) | 13 | 8254.57 | 27257.43 | 11.33 |
swin_t | 🚧 | 968 (30) | 111 (12) | 7 | 4389.06 | 56992.92 | 15.42 |
swin_v2_b | 🚧 | 2474 (37) | 353 (14) | 11 | 20712.4 | 46453.24 | 8.21 |
swin_v2_s | 🚧 | 2474 (37) | 353 (14) | 11 | 13051.7 | 32935.09 | 1.56 |
swin_v2_t | 🚧 | 1256 (37) | 185 (14) | 5 | 8282.66 | 52203.69 | 9.21 |
t5-base | ❌ | 14731 (38) | N/A | N/A | 28333.6 | N/A | N/A |
t5-large | ❌ | 22738 (38) | N/A | N/A | 81614.4 | N/A | N/A |
t5-small | ❌ | 6160 (38) | N/A | N/A | 3931.1 | N/A | N/A |
textattack/albert-base-v2-imdb | 🚧 | 782 (22) | 14 (3) | 3 | 3013.63 | 9216.27 | 100.0 |
tf_efficientnet_lite0.in1k | 🚧 | 149 (9) | 103 (3) | 0 | 1427.21 | 31939.76 | -1.81 |
tf_efficientnet_lite0.in1k-train | 🚧 | 403 (17) | 340 (10) | 0 | 3012.09 | 42221.64 | 100.0 |
tf_efficientnet_lite1.in1k | 🚧 | 194 (9) | 133 (3) | 0 | 1812.24 | 10946.39 | 0.78 |
tf_efficientnet_lite1.in1k-train | 🚧 | 523 (17) | 440 (10) | 0 | 3713.48 | 18834.6 | 100.0 |
tf_efficientnet_lite2.in1k | 🚧 | 194 (9) | 133 (3) | 0 | 2631.34 | 47409.94 | 86.54 |
tf_efficientnet_lite2.in1k-train | 🚧 | 523 (17) | 440 (10) | 0 | 4533.75 | 34506.79 | 100.0 |
tf_efficientnet_lite3.in1k | 🚧 | 221 (9) | 151 (3) | 0 | 3015.7 | 21659.65 | 84.44 |
tf_efficientnet_lite3.in1k-train | 🚧 | 595 (17) | 500 (10) | 0 | 7378.72 | 31472.03 | 100.0 |
tf_efficientnet_lite4.in1k | 🚧 | 275 (9) | 187 (3) | 0 | 5578.8 | 26924.07 | 86.03 |
tf_efficientnet_lite4.in1k-train | 🚧 | 739 (17) | 620 (10) | 0 | 17863.8 | 65697.0 | 100.0 |
twmkn9/albert-base-v2-squad2 | ✅ | 783 (23) | 0 (0) | 17 | 3642.52 | 18949.25 | 98.39 |
vgg11 | 🚧 | 33 (8) | 10 (3) | 5 | 11463 | 33595.13 | 100.0 |
vgg11_bn | 🚧 | 41 (9) | 18 (4) | 5 | 11687.9 | 14940.06 | 100.0 |
vgg13 | 🚧 | 37 (8) | 12 (3) | 5 | 18902.1 | 21809.66 | 100.0 |
vgg13_bn | 🚧 | 47 (9) | 22 (4) | 5 | 18928.9 | 22405.18 | 100.0 |
vgg16 | 🚧 | 43 (8) | 15 (3) | 5 | 23788.6 | 47729.68 | 100.0 |
vgg16_bn | 🚧 | 56 (9) | 28 (4) | 5 | 25714.5 | 30115.19 | 100.0 |
vgg19 | 🚧 | 49 (8) | 18 (3) | 5 | 26135.9 | 27491.7 | 100.0 |
vgg19_bn | 🚧 | 65 (9) | 34 (4) | 5 | 32792.6 | 37223.46 | 100.0 |
vit_b_16 | 🚧 | 552 (17) | 26 (4) | 0 | 14730.1 | 39256.02 | 98.97 |
vit_b_32 | 🚧 | 552 (17) | 26 (4) | 0 | 4988.49 | 17011.2 | 98.45 |
vit_h_14 | 🚧 | 1452 (17) | 66 (4) | 0 | 761171 | 1107787.83 | 98.96 |
vit_l_16 | 🚧 | 1092 (17) | 50 (4) | 0 | 50127.4 | 89550.64 | 99.69 |
vit_l_32 | 🚧 | 1092 (17) | 50 (4) | 0 | 16273.2 | 37108.48 | 98.87 |
wide_resnet101_2 | 🚧 | 346 (9) | 208 (2) | 1 | 22115.8 | 31919.52 | 3.58 |
wide_resnet50_2 | 🚧 | 176 (9) | 106 (2) | 1 | 12203 | 23599.93 | 5.52 |
xception71.tf_in1k | 🚧 | 393 (9) | 292 (2) | 0 | 18560.5 | 39491.38 | 44.8 |
xception71.tf_in1k-train | 🚧 | 1370 (18) | 1239 (11) | 0 | 60867.9 | 96382.99 | 100.0 |
- **Model**: Name of the model.
- **Status**: Indicates whether the model is ❌ traced / 🚧 compiled / ✅ E2E on device.
- **Torch Ops Before (Unique Ops)**: The total number of operations used by the model in the original Torch implementation. The number in parentheses is the count of unique ops.
- **Torch Ops Remain (Unique Ops)**: The total number of operations remaining after conversion to TTNN. The number in parentheses is the count of unique ops.
- **To/From Device Ops**: The number of `to_device` and `from_device` operations (data transfers to and from the device).
- **Original Run Time (ms)**: Execution time (in milliseconds) of the model before conversion.
- **Compiled Run Time (ms)**: Execution time (in milliseconds) of the model after conversion.
- **Accuracy (%)**: Model accuracy on a predefined test dataset after conversion.
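As a rough illustration of how an accuracy figure like this can be computed, the sketch below scores a compiled module against the original by correlating their outputs. The exact metric used by our report generator may differ, and `model`, `compiled_model`, and `inputs` are placeholders.

```python
import torch

# Hypothetical sketch: compare original vs. compiled outputs.
# The Pearson correlation used here is one plausible similarity
# metric; the report generator's actual metric may differ.
def output_accuracy(model, compiled_model, inputs):
    with torch.no_grad():
        expected = model(inputs).flatten().float()
        actual = compiled_model(inputs).flatten().float()
    pcc = torch.corrcoef(torch.stack([expected, actual]))[0, 1]
    return (pcc * 100).item()  # as a percentage, like the table above
```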
The `torch_ttnn` module has a `backend` function, which can be used with `torch.compile()`.
```python
import torch
import torch_ttnn
import ttnn  # needed to open the Tenstorrent device

# A torch Module
class FooModule(torch.nn.Module):
    ...

# Create a module
module = FooModule()

# Compile the module with the ttnn backend
device = ttnn.open_device(device_id=0)
option = torch_ttnn.TorchTtnnOption(device=device)
ttnn_module = torch.compile(module, backend=torch_ttnn.backend, options=option)

# Running inference / training
ttnn_module(input_data)
```
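When you are done, release the device. A minimal sketch, assuming the standard `ttnn` device API used above:

```python
# Release the Tenstorrent device opened earlier
ttnn.close_device(device)
```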
The tracer dumps information about the FX graph, such as each node's `op_name` and shape. For example, you can run this script to collect that information:

```bash
PYTHONPATH=$(pwd) python3 tools/stat_models.py --trace_orig --backward --profile
ls stat/raw
```
By default, the raw results are stored in `stat/raw`, and you can run this script to generate the report:

```bash
python3 tools/generate_report.py
ls stat/
```
Now the `stat/` folder contains these reports:

- `fw_node_count.csv`
- `bw_node_count.csv`
- `fw_total_input_size_dist/`
- `bw_total_input_size_dist/`
- `fw_total_output_size_dist/`
- `bw_total_output_size_dist/`
- `profile/`
The `node_count.csv` reports show how often each `op_type` appears in the FX graph, which helps analyze the frequency of op types in the graph. The `*_total_*_size_dist/` reports show the distribution of input/output sizes for each `op_type` across all FX graphs recorded in `stat/raw`, which helps analyze the memory footprint during the computation of each `op_type`. A quick way to inspect these reports is sketched below.
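For a quick look at the most frequent op types, something like the following works. This is a minimal sketch that assumes `fw_node_count.csv` carries one numeric column per `op_type`; adjust it to the actual layout of your generated report.

```python
import pandas as pd

# Assumption: fw_node_count.csv has one row per traced model and one
# numeric column per op_type. Sum each column to rank op frequency.
df = pd.read_csv("stat/fw_node_count.csv")
totals = df.select_dtypes("number").sum().sort_values(ascending=False)
print(totals.head(10))  # the ten most frequent op types
```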
Notice: the default `input_shapes` in `tools/stat_torchvision.py` is `[1, 3, 224, 224]`, which the `*_total_*_size_dist/` reports depend on.
Notice: see the PyTorch documentation for the ATen IR interface.
The `profile/` traces are produced by the profiling tools that PyTorch provides; you can open them in Chrome via the URL `chrome://tracing`.
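If you want to generate a comparable trace yourself, PyTorch's profiler can export the Chrome-tracing JSON format directly; a minimal sketch with a placeholder module and input:

```python
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(32, 32)  # placeholder module
x = torch.randn(8, 32)           # placeholder input

# Record CPU activity and export it for chrome://tracing
with profile(activities=[ProfilerActivity.CPU]) as prof:
    model(x)
prof.export_chrome_trace("trace.json")
```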
During development, you may want to use the torch-ttnn package for testing. To do that, install the package in "editable" mode:

```bash
pip install -e .
```
Now you can use `torch_ttnn` in your Python code. Any modifications you make to the `torch_ttnn` package take effect immediately, eliminating the need to reinstall it via pip after every change.
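To confirm that the editable install resolves to your working tree rather than a copied package, you can check where Python loads the module from:

```python
import torch_ttnn

# With an editable install, this prints a path inside your checkout,
# not one inside site-packages.
print(torch_ttnn.__file__)
```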
Developers who want to deploy the wheel can build the wheel file with:

```bash
python -m build
```

Then you can upload the `.whl` file to PyPI (the Python Package Index).
To run a transformer model with the ttnn backend, run:

```bash
PYTHONPATH="$TT_METAL_HOME:$(pwd)" python3 tools/run_transformers.py --model "phiyodr/bert-large-finetuned-squad2" --backend torch_ttnn
```

You can also substitute the backend with `torch_stat` to run a reference comparison.