Open yash-srivastava19 opened 3 weeks ago
After tinkering with the tokenize_and_concatenate function a little bit, I was able to work around it (for my case) by removing the chunking part from the code. The number of batches for small datasets is 0, and that is what creates the problem. Here's the refactored code. If possible, can you tell whether this approach is ok?
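To make the failure concrete first, here is a toy illustration (made-up numbers, not the library code itself) of why num_batches == 0 wipes out the output for a tiny dataset:

import numpy as np

seq_len = 1023                          # max_length - 1 when add_bos_token=True
tokens = np.arange(5)                   # stand-in for the ~5 tokens of a single-word document
num_batches = len(tokens) // seq_len    # 5 // 1023 == 0
print(tokens[: seq_len * num_batches])  # [] -> every token gets silently dropped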
...
import einops
import numpy as np
from transformer_lens.utils import keep_single_column


def tokenize_and_concatenate(
    dataset,
    tokenizer,
    streaming: bool = False,
    max_length: int = 1024,
    column_name: str = "text",
    add_bos_token: bool = True,
    num_proc: int = 10,
):
"""Helper function to tokenizer and concatenate a dataset of text. This converts the text to tokens, concatenates them (separated by EOS tokens) and then reshapes them into a 2D array of shape (____, sequence_length), dropping the last batch. Tokenizers are much faster if parallelised, so we chop the string into 20, feed it into the tokenizer, in parallel with padding, then remove padding at the end.
This tokenization is useful for training language models, as it allows us to efficiently train on a large corpus of text of varying lengths (without, eg, a lot of truncation or padding). Further, for models with absolute positional encodings, this avoids privileging early tokens (eg, news articles often begin with CNN, and models may learn to use early positional encodings to predict these)
Args:
dataset (Dataset): The dataset to tokenize, assumed to be a HuggingFace text dataset.
tokenizer (AutoTokenizer): The tokenizer. Assumed to have a bos_token_id and an eos_token_id.
streaming (bool, optional): Whether the dataset is being streamed. If True, avoids using parallelism. Defaults to False.
max_length (int, optional): The length of the context window of the sequence. Defaults to 1024.
column_name (str, optional): The name of the text column in the dataset. Defaults to 'text'.
add_bos_token (bool, optional): . Defaults to True.
Returns:
Dataset: Returns the tokenized dataset, as a dataset of tensors, with a single column called "tokens"
Note: There is a bug when inputting very small datasets (eg, <1 batch per process) where it just outputs nothing. I'm not super sure why
"""
dataset = keep_single_column(dataset, column_name)
if tokenizer.pad_token is None:
# We add a padding token, purely to implement the tokenizer. This will be removed before inputting tokens to the model, so we do not need to increment d_vocab in the model.
tokenizer.add_special_tokens({"pad_token": "<PAD>"})
# Define the length to chop things up into - leaving space for a bos_token if required
if add_bos_token:
seq_len = max_length - 1
else:
seq_len = max_length
    def tokenize_function(examples):
        text = examples[column_name]
        # Concatenate it all into an enormous string, separated by eos_tokens
        full_text = tokenizer.eos_token.join(text)
        # Instead of chunking, tokenize the full text in one go.
        tokens = tokenizer(full_text, return_tensors="np", padding=True)["input_ids"].flatten()
        # Drop padding tokens
        tokens = tokens[tokens != tokenizer.pad_token_id]
        num_tokens = len(tokens)
        num_batches = num_tokens // seq_len
        # Drop the final tokens if there are not enough to make a full sequence
        tokens = tokens[: seq_len * num_batches] if num_batches else tokens
        if num_batches:  # if num_batches is non-zero, proceed the standard way
            tokens = einops.rearrange(
                tokens, "(batch seq) -> batch seq", batch=num_batches, seq=seq_len
            )
            if add_bos_token:
                prefix = np.full((num_batches, 1), tokenizer.bos_token_id)
                tokens = np.concatenate([prefix, tokens], axis=1)
        else:
            # Otherwise keep the short text as a single (shorter) sequence instead of dropping it.
            tokens = np.asarray(tokens)[None, :]
            if add_bos_token:
                prefix = np.full((1, 1), tokenizer.bos_token_id)
                tokens = np.concatenate([prefix, tokens], axis=1)
        return {"tokens": tokens}
    tokenized_dataset = dataset.map(
        tokenize_function,
        batched=True,
        num_proc=(num_proc if not streaming else None),
        remove_columns=[column_name],
    )
    tokenized_dataset.set_format(type="torch", columns=["tokens"])
    return tokenized_dataset
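For reference, this is roughly how I'm calling the refactored function on my end; the gpt2 tokenizer and the tiny in-memory dataset are just stand-ins for my actual setup:

from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tiny_dataset = Dataset.from_dict({"text": ["hello"]})  # a single-word dataset
tokenized = tokenize_and_concatenate(tiny_dataset, tokenizer, max_length=1024, num_proc=1)
print(tokenized["tokens"])  # one short sequence comes back instead of an empty dataset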
Describe the bug
It was mentioned in the docstrings as well that the tokenize_and_concatenate function doesn't work properly with small datasets. I wanted to figure out whether there is a workaround that can be used.
Code example
The dataset I'm using is small, and sometimes contains only a single word. Here is the minimal code that reproduces the error.
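Something along these lines, with the gpt2 tokenizer standing in for whichever tokenizer is being used:

from datasets import Dataset
from transformers import AutoTokenizer
from transformer_lens.utils import tokenize_and_concatenate

tokenizer = AutoTokenizer.from_pretrained("gpt2")
DATASET_1 = Dataset.from_dict({"text": ["some reasonably long passage of text " * 200]})
DATASET_2 = Dataset.from_dict({"text": ["hello"]})  # sometimes the dataset is a single word

tokenize_and_concatenate(DATASET_1, tokenizer)  # fine
tokenize_and_concatenate(DATASET_2, tokenizer)  # num_batches == 0 -> empty output / error below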
Here's what the error stack trace looks like:
This works perfectly well for the DATASET_1, but for DATASET_2, it breaks.
System Info
Describe the characteristic of your environment:
transformer_lens was installed: pip