huggingface / optimum

🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
https://huggingface.co/docs/optimum/main/
Apache License 2.0

LLM or any model cannot be exported to onnx with opset 9 #1092

Open escorciav opened 1 year ago

escorciav commented 1 year ago

Feature request

Exporting T5 to ONNX fails with opset 9.

Motivation

ONNX opset 9 is required by SNPE, Qualcomm's SDK for its AI accelerators. Supporting ONNX opset 9 would unleash ML on the edge and on mobile phones.

Your contribution

I'm willing to give a hand, but help is needed as I'm not familiar with all the abstractions.

Details

$ optimum-cli export onnx --model t5-base checkpoints/t5-base_onnx/ --opset 9 --framework pt | tee log.txt
Automatic task detection to text2text-generation-with-past.
Traceback (most recent call last):
  File "/home/bin/miniconda3/envs/on-device-llm/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/home/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/home/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/export/onnx.py", line 203, in run
    main_export(
  File "/home/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 238, in main_export
    raise ValueError(
ValueError: Opset 9 is not sufficient to export t5. At least  13 is required.
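
For reference, the opset that a successfully exported model declares can be inspected, and a down-conversion attempted, with the onnx package itself. This is only a sketch: it assumes an export at the default opset already produced encoder_model.onnx in the output directory above, and the version converter typically cannot lower nodes whose ops or type support simply don't exist in opset 9, so it is not a real substitute for exporting at opset 9 directly.

import onnx
from onnx import version_converter

# Inspect which opset(s) the exported encoder declares.
model = onnx.load("checkpoints/t5-base_onnx/encoder_model.onnx")
print([(imp.domain, imp.version) for imp in model.opset_import])

# Attempt a down-conversion to opset 9; this usually fails for ops that have
# no opset-9 equivalent or lack the required type support there.
downgraded = version_converter.convert_version(model, 9)
onnx.save(downgraded, "checkpoints/t5-base_onnx/encoder_model_opset9.onnx")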

Environment details

Some of the relevant dependencies

cmake==3.26.4
numpy==1.24.3
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
onnx==1.13.1
onnxconverter-common==1.13.0
onnxruntime==1.15.0
onnxruntime-tools==1.7.0
optimum==1.8.6
prompt-toolkit==3.0.38
protobuf==3.20.2
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
py-cpuinfo==9.0.0
py3nvml==0.2.7
pytorch-triton==2.1.0+9820899b38
rich==13.4.1
safetensors==0.3.1
sentencepiece==0.1.99
sympy==1.11.1
tf2onnx==1.14.0
timm==0.9.2
tokenizers==0.13.3
transformers==4.29.2
triton==2.0.0
escorciav commented 1 year ago

After some trial and error with the following file, patched just before the opset error is raised, I found that the minimal opset that exports T5 without error is 12, not 13 :). I'll post the issue in SNPE & AIMET so that both communities can hammer at this together :mechanical_arm:. In the past, I usually had to refactor (rewrite) the offending modules so that they were compatible with an older opset.

# Local hack in optimum/exporters/onnx/__main__.py, placed right before the
# opset check: bump opset 9 to 12 for T5 instead of failing.
if 't5' in model.__class__.__name__.lower() and opset == 9:
    print('opset 9 did not work')
    opset = onnx_config.DEFAULT_ONNX_OPSET = 12

# Original check in optimum that raises the ValueError shown above.
if opset < onnx_config.DEFAULT_ONNX_OPSET:
    raise ValueError(
        f"Opset {opset} is not sufficient to export {model.config.model_type}. "
        f"At least  {onnx_config.DEFAULT_ONNX_OPSET} is required."
    )
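
With that check relaxed locally (lowered to the opset under test), a quick way to confirm the minimal working opset is to loop the same CLI call over a range of opsets. This is only an illustrative sketch; the output directory names are made up.

import subprocess

# Re-run the export above for several opsets and report which ones complete.
for opset in range(9, 14):
    cmd = [
        "optimum-cli", "export", "onnx",
        "--model", "t5-base",
        "--opset", str(opset),
        "--framework", "pt",
        f"checkpoints/t5-base_onnx_opset{opset}/",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"opset {opset}: {'OK' if result.returncode == 0 else 'FAILED'}")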
escorciav commented 1 year ago

Hi @michaelbenayoun, do you have any suggestions on how to provide support for older ONNX opset versions?

  1. In the past, I have usually refactored (rewritten) the unsupported modules so that they are compatible with a given opset.
  2. Is this an instance of a custom model export, as in the forum? (A rough sketch of what I have in mind is right after this list.)
  3. I pretty much need help unboxing all the abstractions in HF transformers & optimum :)
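
To make point 2 a bit more concrete, what I imagine is subclassing the T5 exporter config and lowering its default opset. DEFAULT_ONNX_OPSET is the attribute checked in optimum/exporters/onnx/__main__.py (see the ValueError above); the import path and, above all, how such a config gets registered with the CLI are guesses on my part, which is exactly where I need guidance.

# Rough sketch only: the import path is a guess, and the registration with
# the CLI / TasksManager is the part I don't know how to do.
from optimum.exporters.onnx.model_configs import T5OnnxConfig


class T5Opset12OnnxConfig(T5OnnxConfig):
    # Attribute compared against --opset before the ValueError is raised.
    DEFAULT_ONNX_OPSET = 12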

I believe you can reproduce the error with the following command

optimum-cli export onnx --task text2text-generation-with-past --model t5-base checkpoints/t5-base_onnx/ --opset 9 --framework pt --optimize O3 --batch_size 1 --sequence_length 512 --atol 0.0001 --cache_dir ~/.cache/huggingface/hub

You should get an error like this

[HACK @ Victor Escorcia]. opset 9 is not supported 2023-06-08. By trial & error optset=12 works
/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5_fast.py:155: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`.
- Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
  warnings.warn(
Using framework PyTorch: 2.0.1+cu117
Overriding 1 configuration item(s)
        - use_cache -> False
/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/onnx/utils.py:1636: UserWarning: The exported ONNX model failed ONNX shape inference.The model will not be executable by the ONNX Runtime.If this is unintended and you believe there is a bug,please report an issue at https://github.com/pytorch/pytorch/issues.Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:Min, node name: /block.0/layer.0/SelfAttention/Min): data_0 typestr: T, has unsupported type: tensor(int64) (Triggered internally at ../torch/csrc/jit/serialization/export.cpp:1407.)
  _C._check_onnx_proto(proto)
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "/apps/bin/miniconda3/envs/on-device-llm/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/export/onnx.py", line 208, in run
    main_export(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 303, in main_export
    _, onnx_outputs = export_models(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 609, in export_models
    export(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 714, in export
    config.fix_dynamic_axes(output, device=device, input_shapes=input_shapes, dtype=dtype)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/base.py", line 255, in fix_dynamic_axes
    session = InferenceSession(model_path.as_posix(), providers=providers, sess_options=session_options)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from checkpoints/t5-base_onnx/encoder_model.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (/block.0/layer.0/SelfAttention/Add_1_output_0) of operator (Min) in node (/block.0/layer.0/SelfAttention/Min) is invalid.
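
Looking at the node name in the error, the invalid int64 Min appears to come from the torch.min call in the relative-position bucketing of T5Attention (modeling_t5.py, around the lines shown in the verbose trace later in this thread). Min only gained integer-type support at opset 12, which would explain why 12 is the minimum that works here. If opset 9 really is the target, a minimal sketch of the kind of module rewrite I mentioned in point 1 would be to express that minimum with Less + Where, which do accept int64 at opset 9. int_min is a hypothetical helper, and this only removes that particular node; other ops may still need a newer opset.

import torch

def int_min(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Elementwise minimum expressed as Less + Where so the exported graph
    # avoids an integer Min node.
    return torch.where(a < b, a, b)

# In transformers/models/t5/modeling_t5.py, T5Attention._relative_position_bucket
# does roughly:
#   relative_position_if_large = torch.min(
#       relative_position_if_large,
#       torch.full_like(relative_position_if_large, num_buckets - 1),
#   )
# Replacing torch.min(...) with int_min(...) there should avoid the invalid node.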
escorciav commented 1 year ago

As I suspected, the traceback isn't that useful. I believe the error goes all the way down to an incorrect ONNX model export, which happens in export_pytorch.
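
One way to sanity-check the exported file independently of ONNX Runtime is onnx's own checker and shape inference; a small sketch, using the output path from the command above:

from onnx import checker, load, shape_inference

# Load the exported encoder and validate it without ONNX Runtime.
model = load("checkpoints/t5-base_onnx/encoder_model.onnx")
checker.check_model(model, full_check=True)             # raises on an invalid graph
shape_inference.infer_shapes(model, strict_mode=True)   # should surface the Min type error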

After enabling verbose ONNX export,

I get something like: ``` /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5_fast.py:155: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`. - Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding. - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding. - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value. warnings.warn( Using framework PyTorch: 2.0.1+cu117 Overriding 1 configuration item(s) - use_cache -> False Exported graph: graph(%input_ids : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu), %attention_mask : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu), %embed_tokens.weight : Float(32128, 768, strides=[768, 1], requires_grad=1, device=cpu), %block.0.layer.0.SelfAttention.relative_attention_bias.weight : Float(32, 12, strides=[12, 1], requires_grad=1, device=cpu), %block.0.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.0.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.1.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.1.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.2.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.2.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.3.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.3.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.4.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.4.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.5.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.5.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.6.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.6.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.7.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.7.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.8.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.8.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.9.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.9.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.10.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.10.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.11.layer.0.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %block.11.layer.1.layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), %final_layer_norm.weight : Float(768, strides=[1], requires_grad=1, device=cpu), 
%onnx::MatMul_1042 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1052 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1053 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1056 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1058 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1059 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1061 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1071 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1072 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1075 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1077 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1078 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1080 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1090 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1091 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1094 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1096 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1097 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1099 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1109 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1110 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1113 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1115 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1116 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1118 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1128 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1129 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1132 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1134 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1135 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1137 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1147 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1148 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1151 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1153 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1154 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1156 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1166 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1167 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1170 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1172 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1173 : Float(3072, 768, strides=[1, 
3072], requires_grad=0, device=cpu), %onnx::MatMul_1175 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1185 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1186 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1189 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1191 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1192 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1194 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1204 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1205 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1208 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1210 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1211 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1213 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1223 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1224 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1227 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1229 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1230 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1232 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1242 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1243 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1246 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1248 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1249 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu), %onnx::MatMul_1251 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1261 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1262 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1265 : Float(768, 768, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1267 : Float(768, 3072, strides=[1, 768], requires_grad=0, device=cpu), %onnx::MatMul_1268 : Float(3072, 768, strides=[1, 3072], requires_grad=0, device=cpu)): %/Shape_output_0 : Long(2, strides=[1], device=cpu) = onnx::Shape[onnx_name="/Shape"](%input_ids), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:975:0 %/Constant_output_0 : Long(device=cpu) = onnx::Constant[value={1}, onnx_name="/Constant"](), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:975:0 %/Gather_output_0 : Long(device=cpu) = onnx::Gather[axis=0, onnx_name="/Gather"](%/Shape_output_0, %/Constant_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:975:0 %/Constant_1_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={-1}, onnx_name="/Constant_1"](), scope: 
transformers.models.t5.modeling_t5.T5Stack:: %/Unsqueeze_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/Unsqueeze"](%/Gather_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: %/Concat_output_0 : Long(2, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="/Concat"](%/Constant_1_output_0, %/Unsqueeze_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:976:0 %/Reshape_output_0 : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Reshape[onnx_name="/Reshape"](%input_ids, %/Concat_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:976:0 %/embed_tokens/Gather_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Gather[onnx_name="/embed_tokens/Gather"](%embed_tokens.weight, %/Reshape_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/torch.nn.modules.sparse.Embedding::embed_tokens # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/nn/functional.py:2210:0 %/Unsqueeze_1_output_0 : Long(*, 1, *, strides=[16, 16, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze[axes=[1], onnx_name="/Unsqueeze_1"](%attention_mask), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/modeling_utils.py:882:0 %/Unsqueeze_2_output_0 : Long(*, 1, 1, *, strides=[16, 16, 16, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze[axes=[2], onnx_name="/Unsqueeze_2"](%/Unsqueeze_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/modeling_utils.py:882:0 %/Cast_output_0 : Float(*, 1, 1, *, strides=[16, 16, 16, 1], requires_grad=0, device=cpu) = onnx::Cast[to=1, onnx_name="/Cast"](%/Unsqueeze_2_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/modeling_utils.py:893:0 %/Constant_2_output_0 : Float(requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/Constant_2"](), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/_tensor.py:848:0 %/Sub_output_0 : Float(*, 1, 1, *, strides=[16, 16, 16, 1], requires_grad=0, device=cpu) = onnx::Sub[onnx_name="/Sub"](%/Constant_2_output_0, %/Cast_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/_tensor.py:848:0 %/Constant_3_output_0 : Float(requires_grad=0, device=cpu) = onnx::Constant[value={-3.40282e+38}, onnx_name="/Constant_3"](), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/modeling_utils.py:894:0 %/Mul_output_0 : Float(*, 1, 1, *, strides=[16, 16, 16, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/Mul"](%/Sub_output_0, %/Constant_3_output_0), scope: transformers.models.t5.modeling_t5.T5Stack:: # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/modeling_utils.py:894:0 %/block.0/layer.0/layer_norm/Cast_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Cast[to=1, 
onnx_name="/block.0/layer.0/layer_norm/Cast"](%/embed_tokens/Gather_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:253:0 %/block.0/layer.0/layer_norm/Constant_output_0 : Float(requires_grad=0, device=cpu) = onnx::Constant[value={2}, onnx_name="/block.0/layer.0/layer_norm/Constant"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:253:0 %/block.0/layer.0/layer_norm/Pow_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Pow[onnx_name="/block.0/layer.0/layer_norm/Pow"](%/block.0/layer.0/layer_norm/Cast_output_0, %/block.0/layer.0/layer_norm/Constant_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:253:0 %/block.0/layer.0/layer_norm/ReduceMean_output_0 : Float(*, *, 1, strides=[16, 1, 1], requires_grad=0, device=cpu) = onnx::ReduceMean[axes=[-1], keepdims=1, onnx_name="/block.0/layer.0/layer_norm/ReduceMean"](%/block.0/layer.0/layer_norm/Pow_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:253:0 %/block.0/layer.0/layer_norm/Constant_1_output_0 : Float(requires_grad=0, device=cpu) = onnx::Constant[value={1e-06}, onnx_name="/block.0/layer.0/layer_norm/Constant_1"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Add_output_0 : Float(*, *, 1, strides=[16, 1, 1], requires_grad=0, device=cpu) = onnx::Add[onnx_name="/block.0/layer.0/layer_norm/Add"](%/block.0/layer.0/layer_norm/ReduceMean_output_0, %/block.0/layer.0/layer_norm/Constant_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Sqrt_output_0 : Float(*, *, 1, device=cpu) = onnx::Sqrt[onnx_name="/block.0/layer.0/layer_norm/Sqrt"](%/block.0/layer.0/layer_norm/Add_output_0), scope: 
transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Constant_2_output_0 : Float(requires_grad=0, device=cpu) = onnx::Constant[value={1}, onnx_name="/block.0/layer.0/layer_norm/Constant_2"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Div_output_0 : Float(*, *, 1, strides=[16, 1, 1], requires_grad=0, device=cpu) = onnx::Div[onnx_name="/block.0/layer.0/layer_norm/Div"](%/block.0/layer.0/layer_norm/Constant_2_output_0, %/block.0/layer.0/layer_norm/Sqrt_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Mul_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/block.0/layer.0/layer_norm/Mul"](%/block.0/layer.0/layer_norm/Cast_output_0, %/block.0/layer.0/layer_norm/Div_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0 %/block.0/layer.0/layer_norm/Mul_1_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/block.0/layer.0/layer_norm/Mul_1"](%block.0.layer.0.layer_norm.weight, %/block.0/layer.0/layer_norm/Mul_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5LayerNorm::layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:260:0 %/block.0/layer.0/SelfAttention/Shape_output_0 : Long(3, strides=[1], device=cpu) = onnx::Shape[onnx_name="/block.0/layer.0/SelfAttention/Shape"](%/block.0/layer.0/layer_norm/Mul_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/Constant_output_0 : Long(device=cpu) = onnx::Constant[value={0}, onnx_name="/block.0/layer.0/SelfAttention/Constant"](), scope: 
transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/Gather_output_0 : Long(device=cpu) = onnx::Gather[axis=0, onnx_name="/block.0/layer.0/SelfAttention/Gather"](%/block.0/layer.0/SelfAttention/Shape_output_0, %/block.0/layer.0/SelfAttention/Constant_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/Shape_1_output_0 : Long(3, strides=[1], device=cpu) = onnx::Shape[onnx_name="/block.0/layer.0/SelfAttention/Shape_1"](%/block.0/layer.0/layer_norm/Mul_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/Constant_1_output_0 : Long(device=cpu) = onnx::Constant[value={1}, onnx_name="/block.0/layer.0/SelfAttention/Constant_1"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/Gather_1_output_0 : Long(device=cpu) = onnx::Gather[axis=0, onnx_name="/block.0/layer.0/SelfAttention/Gather_1"](%/block.0/layer.0/SelfAttention/Shape_1_output_0, %/block.0/layer.0/SelfAttention/Constant_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:471:0 %/block.0/layer.0/SelfAttention/q/MatMul_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::MatMul[onnx_name="/block.0/layer.0/SelfAttention/q/MatMul"](%/block.0/layer.0/layer_norm/Mul_1_output_0, %onnx::MatMul_1042), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention/torch.nn.modules.linear.Linear::q # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/nn/modules/linear.py:114:0 %/block.0/layer.0/SelfAttention/Unsqueeze_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze"](%/block.0/layer.0/SelfAttention/Gather_output_0), scope: 
transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_2_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={-1}, onnx_name="/block.0/layer.0/SelfAttention/Constant_2"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_3_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={12}, onnx_name="/block.0/layer.0/SelfAttention/Constant_3"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_4_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={64}, onnx_name="/block.0/layer.0/SelfAttention/Constant_4"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Concat_output_0 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="/block.0/layer.0/SelfAttention/Concat"](%/block.0/layer.0/SelfAttention/Unsqueeze_output_0, %/block.0/layer.0/SelfAttention/Constant_2_output_0, %/block.0/layer.0/SelfAttention/Constant_3_output_0, %/block.0/layer.0/SelfAttention/Constant_4_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/Unsqueeze_1_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze_1"](%/block.0/layer.0/SelfAttention/Gather_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_5_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={-1}, onnx_name="/block.0/layer.0/SelfAttention/Constant_5"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_6_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={12}, onnx_name="/block.0/layer.0/SelfAttention/Constant_6"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_7_output_0 
: Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={64}, onnx_name="/block.0/layer.0/SelfAttention/Constant_7"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Concat_1_output_0 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="/block.0/layer.0/SelfAttention/Concat_1"](%/block.0/layer.0/SelfAttention/Unsqueeze_1_output_0, %/block.0/layer.0/SelfAttention/Constant_5_output_0, %/block.0/layer.0/SelfAttention/Constant_6_output_0, %/block.0/layer.0/SelfAttention/Constant_7_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/Unsqueeze_2_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze_2"](%/block.0/layer.0/SelfAttention/Gather_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_8_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={-1}, onnx_name="/block.0/layer.0/SelfAttention/Constant_8"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_9_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={12}, onnx_name="/block.0/layer.0/SelfAttention/Constant_9"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Constant_10_output_0 : Long(1, strides=[1], requires_grad=0, device=cpu) = onnx::Constant[value={64}, onnx_name="/block.0/layer.0/SelfAttention/Constant_10"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention %/block.0/layer.0/SelfAttention/Concat_2_output_0 : Long(4, strides=[1], device=cpu) = onnx::Concat[axis=0, onnx_name="/block.0/layer.0/SelfAttention/Concat_2"](%/block.0/layer.0/SelfAttention/Unsqueeze_2_output_0, %/block.0/layer.0/SelfAttention/Constant_8_output_0, %/block.0/layer.0/SelfAttention/Constant_9_output_0, %/block.0/layer.0/SelfAttention/Constant_10_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 
%/block.0/layer.0/SelfAttention/Reshape_output_0 : Float(*, *, *, *, strides=[12288, 768, 64, 1], requires_grad=0, device=cpu) = onnx::Reshape[onnx_name="/block.0/layer.0/SelfAttention/Reshape"](%/block.0/layer.0/SelfAttention/q/MatMul_output_0, %/block.0/layer.0/SelfAttention/Concat_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/Transpose_output_0 : Float(*, *, *, *, strides=[12288, 64, 768, 1], requires_grad=0, device=cpu) = onnx::Transpose[perm=[0, 2, 1, 3], onnx_name="/block.0/layer.0/SelfAttention/Transpose"](%/block.0/layer.0/SelfAttention/Reshape_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/k/MatMul_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::MatMul[onnx_name="/block.0/layer.0/SelfAttention/k/MatMul"](%/block.0/layer.0/layer_norm/Mul_1_output_0, %onnx::MatMul_1052), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention/torch.nn.modules.linear.Linear::k # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/nn/modules/linear.py:114:0 %/block.0/layer.0/SelfAttention/Reshape_1_output_0 : Float(*, *, *, *, strides=[12288, 768, 64, 1], requires_grad=0, device=cpu) = onnx::Reshape[onnx_name="/block.0/layer.0/SelfAttention/Reshape_1"](%/block.0/layer.0/SelfAttention/k/MatMul_output_0, %/block.0/layer.0/SelfAttention/Concat_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/v/MatMul_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::MatMul[onnx_name="/block.0/layer.0/SelfAttention/v/MatMul"](%/block.0/layer.0/layer_norm/Mul_1_output_0, %onnx::MatMul_1053), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention/torch.nn.modules.linear.Linear::v # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/nn/modules/linear.py:114:0 %/block.0/layer.0/SelfAttention/Reshape_2_output_0 : Float(*, *, *, *, strides=[12288, 768, 64, 1], requires_grad=0, device=cpu) = onnx::Reshape[onnx_name="/block.0/layer.0/SelfAttention/Reshape_2"](%/block.0/layer.0/SelfAttention/v/MatMul_output_0, %/block.0/layer.0/SelfAttention/Concat_2_output_0), scope: 
transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/Transpose_1_output_0 : Float(*, *, *, *, strides=[12288, 64, 768, 1], requires_grad=0, device=cpu) = onnx::Transpose[perm=[0, 2, 1, 3], onnx_name="/block.0/layer.0/SelfAttention/Transpose_1"](%/block.0/layer.0/SelfAttention/Reshape_2_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:485:0 %/block.0/layer.0/SelfAttention/Transpose_2_output_0 : Float(*, *, *, *, strides=[12288, 64, 1, 768], requires_grad=0, device=cpu) = onnx::Transpose[perm=[0, 2, 3, 1], onnx_name="/block.0/layer.0/SelfAttention/Transpose_2"](%/block.0/layer.0/SelfAttention/Reshape_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:531:0 %/block.0/layer.0/SelfAttention/MatMul_output_0 : Float(*, *, *, *, strides=[3072, 256, 16, 1], requires_grad=0, device=cpu) = onnx::MatMul[onnx_name="/block.0/layer.0/SelfAttention/MatMul"](%/block.0/layer.0/SelfAttention/Transpose_output_0, %/block.0/layer.0/SelfAttention/Transpose_2_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:530:0 %/block.0/layer.0/SelfAttention/Cast_output_0 : Long(device=cpu) = onnx::Cast[to=7, onnx_name="/block.0/layer.0/SelfAttention/Cast"](%/block.0/layer.0/SelfAttention/Gather_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Unsqueeze_3_output_0 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze_3"](%/block.0/layer.0/SelfAttention/Cast_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/ConstantOfShape_output_0 : Long(*, device=cpu) = onnx::ConstantOfShape[value={1}, onnx_name="/block.0/layer.0/SelfAttention/ConstantOfShape"](%/block.0/layer.0/SelfAttention/Unsqueeze_3_output_0), 
scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/NonZero_output_0 : Long(1, *, device=cpu) = onnx::NonZero[onnx_name="/block.0/layer.0/SelfAttention/NonZero"](%/block.0/layer.0/SelfAttention/ConstantOfShape_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Transpose_3_output_0 : Long(*, 1, device=cpu) = onnx::Transpose[perm=[1, 0], onnx_name="/block.0/layer.0/SelfAttention/Transpose_3"](%/block.0/layer.0/SelfAttention/NonZero_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Squeeze_output_0 : Long(*, device=cpu) = onnx::Squeeze[axes=[1], onnx_name="/block.0/layer.0/SelfAttention/Squeeze"](%/block.0/layer.0/SelfAttention/Transpose_3_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Cast_1_output_0 : Long(*, strides=[1], requires_grad=0, device=cpu) = onnx::Cast[to=7, onnx_name="/block.0/layer.0/SelfAttention/Cast_1"](%/block.0/layer.0/SelfAttention/Squeeze_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Unsqueeze_4_output_0 : Long(*, 1, strides=[1, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze[axes=[1], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze_4"](%/block.0/layer.0/SelfAttention/Cast_1_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:440:0 %/block.0/layer.0/SelfAttention/Unsqueeze_5_output_0 : Long(1, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Unsqueeze[axes=[0], onnx_name="/block.0/layer.0/SelfAttention/Unsqueeze_5"](%/block.0/layer.0/SelfAttention/Cast_1_output_0), scope: 
transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:441:0 %/block.0/layer.0/SelfAttention/Sub_output_0 : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Sub[onnx_name="/block.0/layer.0/SelfAttention/Sub"](%/block.0/layer.0/SelfAttention/Unsqueeze_5_output_0, %/block.0/layer.0/SelfAttention/Unsqueeze_4_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:442:0 %/block.0/layer.0/SelfAttention/Constant_11_output_0 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={0}, onnx_name="/block.0/layer.0/SelfAttention/Constant_11"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:413:0 %/block.0/layer.0/SelfAttention/Greater_output_0 : Bool(*, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Greater[onnx_name="/block.0/layer.0/SelfAttention/Greater"](%/block.0/layer.0/SelfAttention/Sub_output_0, %/block.0/layer.0/SelfAttention/Constant_11_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:413:0 %/block.0/layer.0/SelfAttention/Cast_2_output_0 : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Cast[to=7, onnx_name="/block.0/layer.0/SelfAttention/Cast_2"](%/block.0/layer.0/SelfAttention/Greater_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:413:0 %/block.0/layer.0/SelfAttention/Constant_12_output_0 : Long(requires_grad=0, device=cpu) = onnx::Constant[value={16}, onnx_name="/block.0/layer.0/SelfAttention/Constant_12"](), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5Block::block.0/transformers.models.t5.modeling_t5.T5LayerSelfAttention::layer.0/transformers.models.t5.modeling_t5.T5Attention::SelfAttention # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:413:0 %/block.0/layer.0/SelfAttention/Mul_output_0 : Long(*, *, strides=[16, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/block.0/layer.0/SelfAttention/Mul"](%/block.0/layer.0/SelfAttention/Cast_2_output_0, %/block.0/layer.0/SelfAttention/Constant_12_output_0), scope: 
[Verbose torch.onnx.export trace of the t5-base encoder (T5Stack). GitHub's comment limit cuts it off, and block.0 through block.11 all lower to the same op pattern, so only the recurring structure is kept here:]

T5LayerNorm (modeling_t5.py:253-254, 260): onnx::Cast, Constant(2), Pow, ReduceMean[axes=[-1]], Constant(1e-06), Add, Sqrt, Constant(1), Div, Mul, Mul (layer_norm.weight)

T5Attention::SelfAttention q/k/v projections and reshapes (modeling_t5.py:471, 485, 489): onnx::Shape, Gather, Unsqueeze, Constant, Concat, MatMul, Reshape, Transpose

Relative position bias (block.0 only, modeling_t5.py:413-450): onnx::Constant, Mul, Add, Abs, Less, Cast, Div, Log, Mul, Cast, ConstantOfShape, Min, Where, Gather (relative_attention_bias embedding), Transpose, Unsqueeze, Add; later blocks reuse %/block.0/layer.0/SelfAttention/Add_3_output_0

Attention scores and context (modeling_t5.py:530-531, 550, 559-560, 571, 489, 609): onnx::MatMul, Add (position bias), Cast, Softmax[axis=3], MatMul, Transpose, Reshape, MatMul (output projection o), Add (residual)

T5LayerFF / T5DenseActDense (modeling_t5.py:260, 344): onnx::MatMul (wi), Relu, MatMul (wo), Add (residual)

[... TRUNCATED DUE TO Github limit...]

The trace repeats this pattern through block.11 and continues into the encoder final_layer_norm (onnx::Cast, Pow, ReduceMean, Add, Sqrt, ...).
/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0
%/final_layer_norm/Div_output_0 : Float(*, *, 1, strides=[16, 1, 1], requires_grad=0, device=cpu) = onnx::Div[onnx_name="/final_layer_norm/Div"](%/final_layer_norm/Constant_2_output_0, %/final_layer_norm/Sqrt_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5LayerNorm::final_layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0
%/final_layer_norm/Mul_output_0 : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/final_layer_norm/Mul"](%/final_layer_norm/Cast_output_0, %/final_layer_norm/Div_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5LayerNorm::final_layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:254:0
%last_hidden_state : Float(*, *, 768, strides=[12288, 768, 1], requires_grad=0, device=cpu) = onnx::Mul[onnx_name="/final_layer_norm/Mul_1"](%final_layer_norm.weight, %/final_layer_norm/Mul_output_0), scope: transformers.models.t5.modeling_t5.T5Stack::/transformers.models.t5.modeling_t5.T5LayerNorm::final_layer_norm # /apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py:260:0
return (%last_hidden_state)

/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/torch/onnx/utils.py:1636: UserWarning: The exported ONNX model failed ONNX shape inference. The model will not be executable by the ONNX Runtime. If this is unintended and you believe there is a bug, please report an issue at https://github.com/pytorch/pytorch/issues. Error reported by strict ONNX shape inference: [ShapeInferenceError] (op_type:Min, node name: /block.0/layer.0/SelfAttention/Min): data_0 typestr: T, has unsupported type: tensor(int64) (Triggered internally at ../torch/csrc/jit/serialization/export.cpp:1407.)
  _C._check_onnx_proto(proto)
[HACK @ Victor Escorcia] opset 9 is not supported (2023-06-08). By trial & error, opset=12 works.

============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "/apps/bin/miniconda3/envs/on-device-llm/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/commands/export/onnx.py", line 208, in run
    main_export(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/__main__.py", line 303, in main_export
    _, onnx_outputs = export_models(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 614, in export_models
    export(
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/convert.py", line 719, in export
    config.fix_dynamic_axes(output, device=device, input_shapes=input_shapes, dtype=dtype)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/optimum/exporters/onnx/base.py", line 255, in fix_dynamic_axes
    session = InferenceSession(model_path.as_posix(), providers=providers, sess_options=session_options)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/apps/bin/miniconda3/envs/on-device-llm/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from checkpoints/t5-base_onnx/encoder_model.onnx failed:This is an invalid model. Type Error: Type 'tensor(int64)' of input parameter (/block.0/layer.0/SelfAttention/Add_1_output_0) of operator (Min) in node (/block.0/layer.0/SelfAttention/Min) is invalid.
```
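
The shape-inference warning above points at an `onnx::Min` node fed with int64 tensors; ONNX only allows integer inputs to `Min` from opset 12 onwards, which is consistent with opset 12 working. Below is a hedged sketch of one possible modeling-level workaround, assuming the offending `Min` comes from `_relative_position_bucket` in the `transformers` T5 attention code; it is illustrative only, not the exact hack applied here.

```python
# Hedged sketch: avoid int64 Min nodes by clamping in float and casting back,
# so the exported graph only uses ops that are legal below opset 12.
# Assumption: the function body mirrors transformers' current
# T5Attention._relative_position_bucket; adapt it to your installed version.
import math

import torch
from transformers.models.t5 import modeling_t5


def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128):
    relative_buckets = 0
    if bidirectional:
        num_buckets //= 2
        relative_buckets += (relative_position > 0).to(torch.long) * num_buckets
        relative_position = torch.abs(relative_position)
    else:
        # Original code does torch.min(relative_position, 0) on int64 tensors;
        # do the same clamping in float instead.
        relative_position = -torch.clamp(relative_position.float(), max=0.0).to(torch.long)
    max_exact = num_buckets // 2
    is_small = relative_position < max_exact
    relative_position_if_large = max_exact + (
        torch.log(relative_position.float() / max_exact)
        / math.log(max_distance / max_exact)
        * (num_buckets - max_exact)
    ).to(torch.long)
    # Original code does torch.min(x, num_buckets - 1) on int64 tensors;
    # clamp in float and cast back so no int64 Min is traced.
    relative_position_if_large = torch.clamp(
        relative_position_if_large.float(), max=float(num_buckets - 1)
    ).to(torch.long)
    relative_buckets += torch.where(is_small, relative_position, relative_position_if_large)
    return relative_buckets


# Monkey-patch before running the export so the traced graph picks up the change.
modeling_t5.T5Attention._relative_position_bucket = staticmethod(_relative_position_bucket)
```
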
michaelbenayoun commented 1 year ago

Hi @escorciav,

To make this happen you will need to:

The PR in Transformers is unlikely to be merged if it involves too many changes in the modeling code.

One solution for you would be to create your own OnnxConfig as you suggested. This way you can export whatever you want without needing anyone to validate anything.
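
For illustration, here is a rough sketch of what such a custom config could look like. The names used (`T5OnnxConfig`, `DEFAULT_ONNX_OPSET`, `export`) are assumptions based on the optimum version in the traceback above, so double-check them against your install; the CLI normally splits a seq2seq model into encoder/decoder subgraphs via `export_models`, so this only shows where the opset floor lives.

```python
# Hedged sketch: relax the opset floor via a custom OnnxConfig subclass.
# The traced graph must still only contain ops that exist at the requested opset.
from pathlib import Path

from transformers import AutoModelForSeq2SeqLM
from optimum.exporters.onnx import export
from optimum.exporters.onnx.model_configs import T5OnnxConfig


class T5LowOpsetOnnxConfig(T5OnnxConfig):
    # The exporter compares the requested opset against this class attribute.
    DEFAULT_ONNX_OPSET = 9


model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
onnx_config = T5LowOpsetOnnxConfig(model.config, task="text2text-generation")
export(model=model, config=onnx_config, output=Path("t5-base_opset12.onnx"), opset=12)
```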

escorciav commented 1 year ago

Thanks a lot for the pointers!

Everything makes sense. I might keep reporting progress here for my own & everyone's benefit if that's OK.

escorciav commented 1 year ago

I have been hacking the optimum ONNX exporter, and it seems that it's possible to export to ONNX with --opset 9.

I managed to export T5-base to ONNX & convert the resulting models to DLC (the format used by SNPE). I used snpe-2.10.0.4541.

If needed, the tricks are:

@michaelbenayoun @fxmarty I feel that I could write the dynamic_axes biz as a custom OnnxConfig, but it's unclear to me how to do it. Any hint is highly appreciated :). Unfortunately, my company is blacklisted by the HF server, and we haven't been able to access the documentation or any HF webpage since Monday :sweat:
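
For reference, a rough and unvalidated sketch of that idea. The assumption (not verified against the docs) is that the exporter builds `torch.onnx.export`'s `dynamic_axes` argument from the `{axis_index: axis_name}` dicts returned by `OnnxConfig.inputs` / `OnnxConfig.outputs`, so returning empty dicts keeps every exported axis static, which is generally what SNPE prefers.

```python
# Rough sketch: drop the dynamic-axis annotations so every exported axis stays
# static. Class and property names are assumptions about the optimum API.
from typing import Dict

from optimum.exporters.onnx.model_configs import T5OnnxConfig


class StaticShapeT5OnnxConfig(T5OnnxConfig):
    @property
    def inputs(self) -> Dict[str, Dict[int, str]]:
        # Keep the input names, discard the {axis_index: axis_name} entries.
        return {name: {} for name in super().inputs}

    @property
    def outputs(self) -> Dict[str, Dict[int, str]]:
        return {name: {} for name in super().outputs}
```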

escorciav commented 1 year ago

FWIW, I managed to export a HF transformers LlamaModel to ONNX with opset version 9.

michaelbenayoun commented 1 year ago
andyxzhu commented 12 months ago

Hi @escorciav, do you mind sharing how you were able to export the T5/Llama models to opset 9? Even after modifying convert.py, I'm still getting the error when I try exporting with --opset 9.

escorciav commented 11 months ago

Hi @andyxzhu, I'm on holiday w/o access to my employer's machine.

  1. Where are you stuck?

  2. Do you have a basic understanding of Python, PyTorch & ONNX? If so, take it easy and carefully read this, this & this message.

    Hope you can do whatever you're aiming to do 😃

andyxzhu commented 11 months ago

Hi @escorciav, thanks for the reply! And sorry to bother you on your holiday!

I'm able to export to ONNX; the issue I'm facing is with converting that to SNPE.

To convert, I'm trying to use `snpe-onnx-to-dlc -i decoder_model.onnx`, but I get an error: `Node SimplifiedLayerNormalization: 'No translation registered for op type onnx_simplifiedlayernormalization'`

Did you encounter this? And if so, how did you work around it? I believe Qualcomm doesn't currently support that layer. Thanks!
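
For debugging this kind of failure, here is a small sketch that lists which op types in the exported file fall outside the standard ai.onnx operator set; anything listed (like SimplifiedLayerNormalization here) will need a translation on the SNPE side. The file path is just an example.

```python
# List op types in an exported ONNX file that are not standard ai.onnx ops;
# fused ops such as SimplifiedLayerNormalization show up here and will not be
# understood by snpe-onnx-to-dlc.
import onnx

model = onnx.load("decoder_model.onnx")
standard_ops = {schema.name for schema in onnx.defs.get_all_schemas() if schema.domain == ""}
non_standard = sorted({node.op_type for node in model.graph.node if node.op_type not in standard_ops})
print(non_standard)
```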

escorciav commented 11 months ago

FWIU from my previous messages 😊😅, I didn't. It's possible that I only exported the encoder to DLC. BTW, I stopped using T5 due to issues unrelated to this thread.

Are you using the latest version of SNPE? I suggest using the latest release, together with compatible versions of PyTorch, onnx, onnx-simplifier, etc. I have a hunch that these issues get solved over time :)
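
If you try the onnx-simplifier route, a minimal sketch is below (it assumes the `onnxsim` Python API; the input path matches the export directory from this thread, and the output name is just an example).

```python
# Minimal onnx-simplifier pass over the exported decoder before converting it
# with snpe-onnx-to-dlc; simplify() returns the new model plus a validity flag.
import onnx
from onnxsim import simplify

model = onnx.load("checkpoints/t5-base_onnx/decoder_model.onnx")
model_simplified, check_ok = simplify(model)
assert check_ok, "onnx-simplifier could not validate the simplified model"
onnx.save(model_simplified, "checkpoints/t5-base_onnx/decoder_model_simplified.onnx")
```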

escorciav commented 11 months ago

@andyxzhu Apparently, I did get a DLC for the decoder, so let's assume I didn't hit that error. Happy to share the DLC so you can reverse engineer the differences with respect to your model.

ATM, I'm not using SNPE, so I won't be able to guide you further. FWIW, I'm using Qualcomm QNN.

```
$ tree checkpoints/t5-base_onnx
checkpoints/t5-base_onnx
├── config.json
├── decoder_model.dlc
├── decoder_model.onnx
├── decoder_with_past_model.onnx
├── encoder_model.dlc
├── encoder_model.onnx
├── generation_config.json
├── log.txt
├── ort_config.json
├── special_tokens_map.json
├── spiece.model
├── tokenizer_config.json
└── tokenizer.json
```

andyxzhu commented 11 months ago

Hi @escorciav, I've gotten T5 successfully exported to ONNX! In case someone else wants to do so as well, here are the additional steps I followed:

However, I'm still facing some issues with exporting Llama. You mentioned being able to do this here; could you share some more details?

Thanks!