Findings so far:
The `.vert` file does not have the UTF-8 errors of the `nci_cleaned` file. The `.vert` file covers a wider range of characters from the "Latin" block and "μ" (U+03BC). The cleaned version has only 19 million words. The Irish side of the NCI is supposed to have 30.2 million words.
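For reference, a count like the one above can be reproduced by counting token lines in the vertical file; this is only a minimal sketch (assuming one token per line and markup lines starting with `<`), not the project's official counting method:

```python
# Rough count of tokens in a vertical (.vert) file: one token per line,
# markup lines start with '<'. Adjust if the real format differs.
import sys

def count_tokens(path):
    n = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if not line.startswith("<"):
                n += 1
    return n

if __name__ == "__main__":
    print(count_tokens(sys.argv[1]))
```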
I found a missing `<s>` tag. Our new extractor script should use any of `<s>`, `<p>`, `<doc>` and `<file>` (and the respective closing tags) as a trigger for a sentence boundary, e.g. as in the sketch below.
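A minimal sketch of this trigger logic (the output format and the handling of extra columns are assumptions, not the actual extractor script):

```python
# Sketch: any of <s>, <p>, <doc>, <file> (opening or closing) ends the
# current sentence; non-markup lines are collected as tokens.
import re
import sys

BOUNDARY = re.compile(r"^</?(s|p|doc|file)\b")

def sentences(lines):
    current = []
    for line in lines:
        if BOUNDARY.match(line):
            if current:
                yield " ".join(current)
                current = []
        elif not line.startswith("<"):
            # token lines may carry extra tab-separated columns (lemma, tag, ...)
            current.append(line.split("\t")[0].strip())
    if current:
        yield " ".join(current)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for sentence in sentences(f):
            print(sentence)
```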
Looking at 3 examples of English sentences in random locations, these sentences seem to occur unexpectedly in bursts after Irish sentences in the same document. Maybe the corpus is a snapshot of ongoing translation, with incomplete parts defaulting to the source language or English. If we can confirm this, this information can be used in language classification: when the classifier is not highly confident, it should go with the class of the neighbouring sentences.
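If the burst pattern is confirmed, the back-off could look roughly like this (the confidence threshold and the agreement rule are placeholders, not a tested configuration):

```python
# Back off to the neighbours' language when the classifier is unsure.
def smooth_languages(labels, confidences, threshold=0.9):
    """labels: predicted language codes, one per sentence, in document order;
    confidences: the matching classifier probabilities."""
    smoothed = list(labels)
    for i, conf in enumerate(confidences):
        if conf >= threshold:
            continue
        neighbours = []
        if i > 0:
            neighbours.append(smoothed[i - 1])
        if i + 1 < len(labels):
            neighbours.append(labels[i + 1])
        # only override when the neighbours agree on a single language
        if neighbours and len(set(neighbours)) == 1:
            smoothed[i] = neighbours[0]
    return smoothed

# smooth_languages(["ga", "en", "ga"], [0.99, 0.55, 0.98]) -> ["ga", "ga", "ga"]
```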
Some of the `<doc>` tags have a `title` attribute that contains Irish text not part of the document itself. We could add this text as a separate sentence before the first sentence to get even more data. The same could be done with the `author` attribute whenever the `pubdate` field is not empty and `medium` is one of "book" and "newspaper".
Glue tags `<g/>`, indicating that there was no space between the neighbouring tokens, are not used.
No occurrences of `<` or `>` outside tags.
Backslash is not used as an escape symbol, except maybe in the 7 occurrences of `\x\x13`.
Number of `<p>` equals number of `<s>`, i.e. `<p>` are useless here.
Some `</p>` and `</s>` are missing.
There are empty sentences.
Looking at the first 100 lines, it seems that all-caps headings and the first sentence of a section are not separated. However, re-doing the sentence splitting without the extra signals from markup in the original documents would probably produce an overall worse segmentation. For BERT this shouldn't matter, as we learned from re-reading the BERT and RoBERTa papers over the last weeks, but we need to keep this in mind for other work, e.g. using this data for semi-supervised training of dependency parsers with tri-training.
The file has no UTF-8 errors. It is strange, then, that NCI_cleaned has such errors. Since other encodings such as ISO 8859-* would be likely to produce byte sequences that are not valid UTF-8, the `.vert` file is UTF-8 encoded with very high likelihood, meaning that there was no reason to attempt conversion from one encoding to another and no opportunity to damage the encoding. The fact that all 34 UTF-8 errors in NCI_cleaned are each at the start of a line and use the byte `\xa4` also rules out random bit rot, e.g. on a low-quality memory stick. Anyway, good news that the errors are not here.
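For reference, a check along these lines can locate such errors (this is only a way to reproduce the comparison, not necessarily how it was done):

```python
# Report lines that do not decode as UTF-8, together with the first
# offending byte; per the finding above, in NCI_cleaned this is reportedly
# b'\xa4' at the start of 34 lines.
def find_utf8_errors(path):
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, 1):
            try:
                raw.decode("utf-8")
            except UnicodeDecodeError as e:
                yield lineno, raw[e.start:e.start + 1]

# for lineno, byte in find_utf8_errors("path/to/NCI_cleaned"):  # placeholder path
#     print(lineno, byte)
```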
There is a section where "&" is encoded as three tokens `&`, `#38` and `;`, i.e. on three lines. (The HTML entity `&#38;` is `&` in most modern character encodings.) Also, I found `&`, `gt`, `;` split over 3 lines.
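A possible clean-up is to merge such token triples back into one token and decode them; a sketch, not necessarily how the extractor handles it:

```python
# Merge split HTML entities such as ['&', '#38', ';'] or ['&', 'gt', ';']
# into a single decoded token.
import html
import re

ENTITY_BODY = re.compile(r"^(#\d+|#x[0-9a-fA-F]+|[A-Za-z]+)$")

def merge_split_entities(tokens):
    out = []
    i = 0
    while i < len(tokens):
        if (i + 2 < len(tokens)
                and tokens[i] == "&"
                and tokens[i + 2] == ";"
                and ENTITY_BODY.match(tokens[i + 1])):
            out.append(html.unescape("&" + tokens[i + 1] + ";"))
            i += 3
        else:
            out.append(tokens[i])
            i += 1
    return out

# merge_split_entities(['Tom', '&', '#38', ';', 'Jerry']) -> ['Tom', '&', 'Jerry']
```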
There are plenty of `"` tokens.
The Unicode combining diaeresis character occurs 18 times. When slicing and recombining character sequences, care must be taken not to separate it from its preceding character, or at least not to let it end up at the start of a token, so as not to fail strict Unicode encoding checks.
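One way to keep the slicing safe is to normalise to NFC first and shift cut points off combining marks; a minimal sketch:

```python
# Avoid cutting a combining mark (e.g. U+0308 COMBINING DIAERESIS) away
# from its base character when slicing.
import unicodedata

def safe_slice(text, start, end):
    # NFC first: most base+diacritic pairs become a single code point;
    # note that start/end then refer to the normalised string.
    text = unicodedata.normalize("NFC", text)
    # never start a slice on a combining mark
    while 0 < start < len(text) and unicodedata.combining(text[start]):
        start -= 1
    # never cut a combining mark off the end of the slice
    while end < len(text) and unicodedata.combining(text[end]):
        end += 1
    return text[start:end]
```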
Otherwise, the character table looks fine. There is an indication of a small amount of foreign material, but this may just be names. The fraction slash U+2044 is used only 17 times. There are no fractions like 1/2 as a single character.
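The character table mentioned here can be reproduced with a simple code point frequency count, e.g.:

```python
# Frequency table of code points with their names, to spot foreign
# material and unusual characters such as U+2044 FRACTION SLASH.
import collections
import sys
import unicodedata

def char_table(path):
    counts = collections.Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            counts.update(line.rstrip("\n"))
    for ch, n in counts.most_common():
        print(f"U+{ord(ch):04X}\t{n}\t{unicodedata.name(ch, 'UNKNOWN')}")

if __name__ == "__main__":
    char_table(sys.argv[1])
```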
doc id="itgm0022"
, doc id="icgm1042"
and doc id="iwx00055"
have unescaped &
in the value of attributes. XML parser not happy. Implemented workaround in commit a5a27e2 line 52.
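The actual workaround is in commit a5a27e2; the general idea of such a fix, sketched here with a hypothetical attribute value, is to escape bare `&` before handing lines to the XML parser:

```python
# Escape bare '&' characters that are not already part of an entity so
# that <doc ...> lines with '&' in attribute values parse as XML.
import re

BARE_AMP = re.compile(r"&(?!(?:#\d+|#x[0-9a-fA-F]+|[A-Za-z][A-Za-z0-9]*);)")

def escape_bare_ampersands(line):
    return BARE_AMP.sub("&amp;", line)

# Hypothetical title value, for illustration only:
# escape_bare_ampersands('<doc id="itgm0022" title="A & B">')
# -> '<doc id="itgm0022" title="A &amp; B">'
```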
Update (with help from Teresa and Lauren):
Ampersands are sometimes encoded more than once, e.g. as the token sequence `& #38 ; #38 ; #38 ;`; see `print_sentence()` in https://github.com/jbrry/Irish-BERT/blob/master/scripts/extract_text_from_nci_vert.py.
Some words contain spurious hyphens, e.g. `Massa-chusetts`. Probably a problem with conversion from PDF.
Some `<s>` elements are huge and span many sentences. The longest element has 65094 tokens. The 100th longest has 5153 tokens.
Some characters in the NCI are not properly encoded. This affects individual characters in otherwise ok sentences, e.g. `'G idhlig`, or whole blocks of text, e.g. `T UA R A SC Á I L B H L I A N TÚ I L A N O M B UD S MA N 1 9 9 7`.
Created separate issues for all issues mentioned above.
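For completeness, the `<s>` element sizes reported above can be re-checked with a simple pass over the file; a sketch:

```python
# Count token lines per <s> element; note that the missing </s> tags
# mentioned above will merge neighbouring elements and inflate counts.
import sys

def s_element_sizes(path):
    sizes = []
    current = None
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("<s>") or line.startswith("<s "):
                current = 0
            elif line.startswith("</s>"):
                if current is not None:
                    sizes.append(current)
                current = None
            elif current is not None and not line.startswith("<"):
                current += 1
    return sorted(sizes, reverse=True)

if __name__ == "__main__":
    sizes = s_element_sizes(sys.argv[1])
    print("longest:", sizes[0], "100th longest:", sizes[99])
```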