QwenLM / Qwen

The official repo of the Qwen (通义千问) chat and pretrained large language models proposed by Alibaba Cloud.
Apache License 2.0

[BUG] Are all of Qwen's tokenizers slow, or is there a fast tokenizer? #1006

Closed berooo closed 8 months ago

berooo commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

No response

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

jklj077 commented 8 months ago

In transformers, "slow" vs. "fast" can be understood as follows: a slow tokenizer is the implementation in Hugging Face transformers itself (pure Python), while a fast tokenizer is the implementation in the Hugging Face tokenizers library (Rust-backed). Qwen currently calls tiktoken through transformers, and tiktoken is faster than tokenizers (the "fast" implementation).

If you want the native slow/fast tokenizers, the files are here: https://huggingface.co/Qwen/Qwen-tokenizer
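A minimal sketch of loading those files, assuming the Qwen/Qwen-tokenizer repo ships standard Hugging Face tokenizer files (e.g. a `tokenizer.json`), in which case `AutoTokenizer` returns the fast, Rust-backed tokenizer; `is_fast` lets you verify which one you got.

```python
from transformers import AutoTokenizer

# Assumption: the repo contains native HF tokenizer files, so no
# trust_remote_code is needed and a fast tokenizer is returned.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen-tokenizer")

print(type(tok).__name__, "is_fast =", tok.is_fast)
ids = tok.encode("你好, Qwen!", add_special_tokens=False)
print(tok.decode(ids))
```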