microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Documentation Request] #19819

Open · Jackycheng0808 opened this issue 7 months ago

Jackycheng0808 commented 7 months ago

```python
from onnxruntime_extensions import onnx_op, PyCustomOpDef

@onnx_op(op_type="GPT2Tokenizer",
         inputs=[PyCustomOpDef.dt_string],
         outputs=[PyCustomOpDef.dt_int64, PyCustomOpDef.dt_int64],
         attrs={"padding_length": PyCustomOpDef.dt_int64})
def bpe_tokenizer(s, **kwargs):
    padding_length = kwargs["padding_length"]
    input_ids, attention_mask = cls.tokenizer.tokenizer_sentence([s[0]], padding_length)
    return input_ids, attention_mask
```

The `op_type` should match the function name, so the decorator should be changed to `@onnx_op(op_type="bpe_tokenizer", ...)`; see the sketch below.
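Below is a minimal sketch (not taken from the issue) of how the renamed op could be registered and then resolved at inference time through onnxruntime-extensions. The model path `"tokenizer_model.onnx"` and the input name `"text"` are assumptions for illustration, and the tokenizer body is stubbed out so the example stays self-contained.

```python
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import onnx_op, PyCustomOpDef, get_library_path


@onnx_op(op_type="bpe_tokenizer",
         inputs=[PyCustomOpDef.dt_string],
         outputs=[PyCustomOpDef.dt_int64, PyCustomOpDef.dt_int64],
         attrs={"padding_length": PyCustomOpDef.dt_int64})
def bpe_tokenizer(s, **kwargs):
    # Stub tokenization so the sketch runs on its own; the real version
    # would call the GPT-2 tokenizer as in the snippet above.
    padding_length = int(kwargs["padding_length"])
    input_ids = np.zeros((1, padding_length), dtype=np.int64)
    attention_mask = np.ones((1, padding_length), dtype=np.int64)
    return input_ids, attention_mask


# Register the extensions shared library so the session can dispatch nodes
# in the "ai.onnx.contrib" custom-op domain to the Python function above.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess = ort.InferenceSession("tokenizer_model.onnx", so)  # hypothetical model
outputs = sess.run(None, {"text": np.array(["hello world"])})  # assumed input name
```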



github-actions[bot] commented 6 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.