JoeyOhman opened this issue 2 years ago
Hi, thank you for your comment!
The `remove_non_printing_characters` function was not used during the normalization of the documents:
https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L357
However, it was used just before the tokenization step:
https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L213 and https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/filtering.py#L688
Because we trained our tokenizers and KenLM models (https://huggingface.co/edugp/kenlm/tree/main/wikipedia) on data from which these non-printing characters had been removed, we kept this function as it was, to make sure the new data passed to the tokenizer has the same form as the data the tokenizer was trained on.
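The point above — strip the same non-printing characters before tokenization that were stripped from the training data — can be sketched as follows. This is a minimal, hypothetical version of such a ccnet-style function; the actual regex in `ac_dc/normalization.py` is not reproduced verbatim here and may differ in details:

```python
import re

# Assumption: "non-printing characters" means the Unicode control
# characters, i.e. the C0 block (0-31) plus DEL and the C1 block (127-159).
non_printing_characters_re = re.compile(
    f"[{''.join(map(chr, list(range(0, 32)) + list(range(127, 160))))}]"
)

def remove_non_printing_characters(text: str) -> str:
    """Strip control characters so input matches the tokenizer's training data."""
    return non_printing_characters_re.sub("", text)

# Applied just before tokenization; note that \n and \t fall in the 0-31
# range, so they are removed too:
print(remove_non_printing_characters("foo\x00bar\nbaz"))  # -> foobarbaz
```

Running text through this before the tokenizer or KenLM model keeps the input distribution consistent with what those models saw at training time.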
This is the main reason why this function is present in the code. If we didn't use it for the normalization of the documents, it was probably because `\n` and `\t` were in the list, as you mentioned. It would make sense to use this function, minus these characters, for the normalization of the documents (but not before the tokenization, because the tokenizer did not see any `\n` or `\t` during its training).
I don't really know why `\n` and `\t` are in the list; the code comes mostly from Facebook's CCNet, which was used for the training of the tokenizers and KenLM models. I think the only thing we modified was not converting the characters to lower case. @edugp did this part.
So if you want to use the same tokenizers or KenLM models as we did, you should check the normalization parameters applied beforehand and use the same ones.
Don't hesitate if you have more questions!
Hi,
We are much inspired by this great work and are in the process of cleaning our own data. However, if we understand correctly, the `remove_non_printing_characters` normalization step is not used for the final cleaning. Do you have any thoughts on why it should not be used? https://github.com/bigscience-workshop/data_tooling/blob/e28064ec7fb38af5143cafc896e9423a8b12392d/ac_dc/normalization.py#L5
The regex defined there includes `\n` and `\t` in the set of characters to remove. We modified it to keep newlines (`\n`) and tabs (`\t`), and to also remove soft hyphens, non-breaking spaces, and zero-width spaces. There could of course be more characters that one may want to remove.
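A hedged sketch of such a modified pattern (the names and construction here are illustrative, not a copy of our actual change): start from the ccnet-style control-character ranges, drop `\t` and `\n` from them, and add soft hyphen (U+00AD), non-breaking space (U+00A0), and zero-width space (U+200B):

```python
import re

# Keep tab (9) and newline (10); still strip the other C0/C1 control
# characters, plus soft hyphen, non-breaking space, and zero-width space.
KEPT = {9, 10}
codepoints = [c for c in list(range(0, 32)) + list(range(127, 160)) if c not in KEPT]
codepoints += [0x00AD, 0x00A0, 0x200B]
modified_non_printing_re = re.compile(f"[{''.join(map(chr, codepoints))}]")

def normalize_keep_newlines(text: str) -> str:
    """Strip non-printing characters for document cleaning, preserving \t and \n."""
    return modified_non_printing_re.sub("", text)

print(repr(normalize_keep_newlines("a\tb\nc\u00ad\u00a0\u200bd")))  # -> 'a\tb\ncd'
```

One caveat worth noting: removing a non-breaking space outright joins the surrounding words; replacing it with a regular space instead may be preferable depending on the downstream use.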
To be clear, I am writing this here for two reasons:
Thanks for your amazing contributions!