xenova / transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
https://huggingface.co/docs/transformers.js

Add sequence post processor #771

Closed · xenova closed this 1 month ago

xenova commented 1 month ago

Adds support for the updated Llama 3 tokenizer.

Closes #739
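
For context, the "sequence post processor" in the title refers to the tokenizers "Sequence" post-processor type, which chains several post-processors in order; the updated Llama 3 tokenizer.json uses it to prepend <|begin_of_text|> to each sequence. A rough sketch of the relevant shape, written as a JavaScript object (abridged and reconstructed from the example outputs below, not the exact file contents):

// Sketch of the post_processor entry in the updated tokenizer.json (abridged).
// A "Sequence" post-processor runs its child processors in order.
const post_processor = {
  type: "Sequence",
  processors: [
    // Byte-level offset handling (details omitted in this sketch)
    { type: "ByteLevel", trim_offsets: false },
    // Template that prepends <|begin_of_text|> (id 128000) to every sequence;
    // type_id 1 on the second sequence yields token_type_ids [0, 0, 1, 1] for pairs.
    {
      type: "TemplateProcessing",
      single: [
        { SpecialToken: { id: "<|begin_of_text|>", type_id: 0 } },
        { Sequence: { id: "A", type_id: 0 } },
      ],
      pair: [
        { SpecialToken: { id: "<|begin_of_text|>", type_id: 0 } },
        { Sequence: { id: "A", type_id: 0 } },
        { SpecialToken: { id: "<|begin_of_text|>", type_id: 1 } },
        { Sequence: { id: "B", type_id: 1 } },
      ],
    },
  ],
};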

Example JavaScript code:

import { AutoTokenizer } from "@xenova/transformers";
const tokenizer = await AutoTokenizer.from_pretrained("Xenova/llama3-tokenizer-new");

console.log(tokenizer.encode('hello world')); // [128000, 15339, 1917]
console.log(tokenizer.encode('hello', 'world')); // [128000, 15339, 128000, 14957]

console.log(tokenizer('hello world', { return_tensor: false })); // { input_ids: [ 128000, 15339, 1917 ], attention_mask: [ 1, 1, 1 ] }
console.log(tokenizer('hello', { text_pair: 'world', return_tensor: false })); // { input_ids: [128000, 15339, 128000, 14957], attention_mask: [1, 1, 1, 1] }

console.log(tokenizer('hello world', { return_token_type_ids: true, return_tensor: false })); // { input_ids: [128000, 15339, 1917], attention_mask: [1, 1, 1], token_type_ids: [0, 0, 0] }
console.log(tokenizer('hello', { text_pair: 'world', return_token_type_ids: true, return_tensor: false })); // { input_ids: [128000, 15339, 128000, 14957], attention_mask: [1, 1, 1, 1], token_type_ids: [0, 0, 1, 1] }
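
The repeated 128000 in the pair outputs is the <|begin_of_text|> (BOS) token, which the post-processor prepends once per sequence. Decoding makes this visible; a quick check (the expected string is written from the Llama 3 vocabulary, an assumption rather than a captured run):

const ids = tokenizer.encode('hello', 'world'); // [128000, 15339, 128000, 14957]
console.log(tokenizer.decode(ids, { skip_special_tokens: false }));
// Expected: '<|begin_of_text|>hello<|begin_of_text|>world'

Note also that the second sequence encodes to 14957 rather than 1917: as a standalone sequence, 'world' has no leading space, unlike ' world' inside 'hello world'.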

Equivalent Python code:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Xenova/llama3-tokenizer-new')

print(tokenizer.encode('hello world')) # [128000, 15339, 1917]
print(tokenizer.encode('hello', 'world')) # [128000, 15339, 128000, 14957]

print(tokenizer('hello world')) # {'input_ids': [128000, 15339, 1917], 'attention_mask': [1, 1, 1]}
print(tokenizer('hello', 'world')) # {'input_ids': [128000, 15339, 128000, 14957], 'attention_mask': [1, 1, 1, 1]}

print(tokenizer('hello world', return_token_type_ids=True)) # {'input_ids': [128000, 15339, 1917], 'token_type_ids': [0, 0, 0], 'attention_mask': [1, 1, 1]}
print(tokenizer('hello', 'world', return_token_type_ids=True)) # {'input_ids': [128000, 15339, 128000, 14957], 'token_type_ids': [0, 0, 1, 1], 'attention_mask': [1, 1, 1, 1]}

HuggingFaceDocBuilderDev commented 1 month ago

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.