bigscience-workshop / data_tooling

Tools for managing datasets for governance and training.

Reason for not applying remove_non_printing_characters normalization #416

Open JoeyOhman opened 2 years ago

JoeyOhman commented 2 years ago

Hi,

We have been greatly inspired by this work and are in the process of cleaning our data. However, if we understand correctly, the remove_non_printing_characters normalization step is not used for the final cleaning. Do you have any thoughts on why it should not be used?

https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/normalization.py#L5

There you have this:

import re

# Matches the C0 control characters (0-31), DEL and the C1 controls (127-159).
non_printing_characters_re = re.compile(
    f"[{''.join(map(chr, list(range(0,32)) + list(range(127,160))))}]"
)

We modified this to keep newlines (\n) and tabs (\t), and also to remove soft hyphens, non-breaking spaces, and zero-width spaces:

additional_chars_to_remove = [160, 173, 8203]
non_printing_characters_re = re.compile(
    f"[{''.join(map(chr, list(range(0,9)) + list(range(11, 32)) + list(range(127,160)) + additional_chars_to_remove))}]"
)

There could of course be more characters that one may want to remove.
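For illustration, here is a minimal, self-contained sketch of what the modified pattern does in practice (the remove_non_printing_characters wrapper and the sample string are illustrative, not the repository's exact code):

import re

# Keep \t (9) and \n (10); strip the remaining C0/C1 control characters plus
# NBSP (160), soft hyphen (173) and zero-width space (8203).
additional_chars_to_remove = [160, 173, 8203]
non_printing_characters_re = re.compile(
    f"[{''.join(map(chr, list(range(0, 9)) + list(range(11, 32)) + list(range(127, 160)) + additional_chars_to_remove))}]"
)

def remove_non_printing_characters(text: str) -> str:
    # Drop every matched character (substitute with the empty string).
    return non_printing_characters_re.sub("", text)

sample = "soft\u00adhyphen nbsp\u00a0here zwsp\u200bhere\nsecond line\twith tab"
print(remove_non_printing_characters(sample))
# The soft hyphen, NBSP and zero-width space are removed,
# while the newline and tab are preserved.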

To be clear, I am writing this here for two reasons:

  1. To get your feedback: do you think this would be a good idea to use for the final data cleaning?
  2. If so, it could be incorporated into this repository to help other people who might be thinking about the same thing.

Thanks for your amazing contributions!

HugoLaurencon commented 2 years ago

Hi, thank you for your comment!

The remove_non_printing_characters function was not used during the normalization of the documents: https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L357

However, it was used just before the tokenization step: https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L213 and https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L688

Because we trained our tokenizers and KenLM models (https://huggingface.co/edugp/kenlm/tree/main/wikipedia) on data from which these non-printing characters had already been removed, we kept the function exactly as it was, to make sure the new data we pass to the tokenizer has the same form as the data the tokenizer was trained on.

This is the main reason why this function is present in the code. If we didn't use it for the normalization of the documents, it was probably because \n and \t were in the list, as you mentioned, but it would make sense to use this function without those characters for the normalization of the documents (though not before the tokenization, because the tokenizer did not see any \n or \t during its training).
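One compact way to express that distinction (just a sketch; the helper and the split into two patterns below are not the repository's code):

import re

def build_non_printing_re(keep_newline_and_tab):
    # C0 control characters (0-31) plus DEL and the C1 controls (127-159).
    codepoints = list(range(0, 32)) + list(range(127, 160))
    if keep_newline_and_tab:
        codepoints = [c for c in codepoints if c not in (9, 10)]  # keep \t and \n
    return re.compile(f"[{''.join(map(chr, codepoints))}]")

# For normalizing documents: keep line and paragraph structure.
document_re = build_non_printing_re(keep_newline_and_tab=True)
# For text fed to the tokenizer / KenLM models: match the training-time
# normalization, which stripped \n and \t as well.
tokenization_re = build_non_printing_re(keep_newline_and_tab=False)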

I don't really know why \n and \t are in the list; it's mostly code from Facebook's CCNet, which was used for the training of the tokenizers and KenLM models. I think the only thing we modified from them was not converting the characters to lower case. @edugp did this part.

So if you want to use the same tokenizers or KenLM models as we did, you should check the parameters of the normalization applied beforehand and use the same ones.
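To make that concrete, here is a rough sketch of applying the same non-printing-character removal before scoring a document with one of the pretrained SentencePiece + KenLM models (the file names are placeholders, and the scoring pipeline shown here is an assumption, not the repository's code; check ac_dc/filtering.py for the real parameters):

import re
import kenlm
import sentencepiece

# Same pattern the tokenizer / KenLM training data was normalized with
# (the original list, i.e. including \n and \t).
non_printing_characters_re = re.compile(
    f"[{''.join(map(chr, list(range(0, 32)) + list(range(127, 160))))}]"
)

def remove_non_printing_characters(text):
    return non_printing_characters_re.sub("", text)

# Placeholder file names; the pretrained models are at
# https://huggingface.co/edugp/kenlm/tree/main/wikipedia
sp = sentencepiece.SentencePieceProcessor()
sp.load("en.sp.model")
lm = kenlm.Model("en.arpa.bin")

document = "Some raw web text,\nsplit over lines\twith control characters."
normalized = remove_non_printing_characters(document)  # strip what the models never saw
tokenized = " ".join(sp.encode_as_pieces(normalized))  # SentencePiece tokens as a string
print(lm.perplexity(tokenized))                        # KenLM perplexity of the document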

Don't hesitate if you have more questions!