jbrry / Irish-BERT

Repository to store helper scripts for creating an Irish BERT model.

Decoding in text files (character reference entities) #32

Open alanagiasi opened 4 years ago

alanagiasi commented 4 years ago

Thanks to Joachim for pointing this issue out and providing the command line. There are decoding issues with some (approx. 65,000) characters in plaintext files. Searching through the text files for the regex [&][#0-9a-zA-Z]+[;] yields the counts and matched strings listed below. (RegEx reminder: [&][#0-9a-zA-Z]+[;] matches any string beginning with &, followed by one or more # or alphanumeric characters, and ending in ;.)
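For reference, the same search can be mirrored in Python. This is a rough equivalent of the grep pipeline further below, not what any project script actually does; the Irish sample string is invented for illustration:

```python
import re
from collections import Counter

# Same pattern as in the grep command: an ampersand, one or more of
# '#' / digits / letters, then a semicolon.
ENTITY_RE = re.compile(r"&[#0-9a-zA-Z]+;")

def count_entities(text):
    """Per-entity counts, mirroring `sort | uniq -c`."""
    return Counter(ENTITY_RE.findall(text))

sample = "T&aacute; s&eacute; go maith &amp; sl&aacute;n"
for entity, count in count_entities(sample).most_common():
    print(count, entity)
```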

Note that some, many, or all of these strings may be in files on the exclude list in Irish_Data/gdrive_filelist.csv, so they could potentially be ignored. Assuming, in the first instance, that the strings should be replaced by the correct character, further investigation and action is required.

 39 �
 125 |
   1 é
   1 ú
   8 &
1854 [
1828 ]
 176 ‘
 174 ’
   1 “
   1 ”
   1 á
   2 &Dodgers;
   1 á
26840 &
15743 '
   4 &c;
  85 >
  59 <
  35  
18510 "

find Irish_Data -type f | fgrep -v .tmx | xargs grep -h -o -E "[&][#0-9a-zA-Z]+[;]" | sort | uniq -c


jowagner commented 4 years ago

Thanks Alan,

Excluding NCI_cleaned, which we now replace with text extracted from Teresa's .vert file, the list gets a lot shorter:

$ find Irish_Data -type f | fgrep -v .tmx | fgrep -v NCI_cleaned | xargs -d'\n' grep -h -o -E "[&][#0-9a-zA-Z]+[;]" | sort | uniq -c
     85 &#124;
   1022 &#91;
   1013 &#93;
      2 &Dodgers;
  15793 &amp;
   6496 &apos;
      9 &gt;
     17 &lt;
   2106 &quot;

(I ran this in a snapshot taken in July before scripts and google docs documents were added.)

It is possible that some of these occurrences are correct, e.g. in technical material. However, it will be too much work to check each context. A global replace as part of the automatic pre-processing is the way forward.
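A minimal sketch of what such a pre-processing replace could look like in Python. The helper name and the decision to simply drop the two entities with no pre-defined meaning (&Dodgers; and &c;) are illustrative assumptions, not what the project's scripts actually do:

```python
import html
import re

# Entities with no pre-defined HTML meaning; drop them outright.
UNKNOWN_ENTITIES = re.compile(r"&(?:Dodgers|c);")

def decode_entities(line):
    """Remove unknown entities, then decode the pre-defined ones."""
    return html.unescape(UNKNOWN_ENTITIES.sub("", line))

print(decode_entities("A &#91;test&#93; with &quot;quotes&quot; &amp; more"))
# prints: A [test] with "quotes" & more
```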

jowagner commented 4 years ago

Just to mention that I did a good bit of searching to try to find out what &Dodgers; may be in September but couldn't find anything conclusive.

alanagiasi commented 4 years ago

Great. I'm inclined to agree that a global replace is the optimal solution. &Dodgers; and &c; deserve some attention as they don't appear to be valid Character Reference Entities (CREs).

TL;DR: I had a look at &Dodgers; and it does not appear to be a valid CRE:

  1. It is not listed in w3.org charref table.
  2. It does not render in a browser: >&Dodgers;< displays as >&Dodgers;<, whereas a valid CRE does render, e.g. >&amp;< displays as >&<.

The 2 occurrences are in Paracrawl (v4), which is currently not included in training (it's on the exclude list in 'gdrive_filelist.csv'), but it's still worth handling since we're aware of it.

I also had a look at &c since it was the only other string that didn't render in the browser. As with &Dodgers;, points 1 and 2 above apply here also. The 4 occurrences of &c are in a single file "train.txt", in 4 passages of text that are almost identical. The &c do not appear to have any meaning in the passages, so they can simply be removed.

All other CREs listed above are valid and can be replaced with their equivalent character.

jowagner commented 4 years ago

Thanks for throwing another pair of eyes on it.

&c is always followed by . and &c. is an alternative way of writing "etc.". ("&" derives from a ligature of "e" and "t".)

Do you mean train.txt in Irish_Data > processed_ga_files_for_BERT_runs? That's excluded in gdrive_filelist.csv and seems to be a file derived from other files in Irish_Data and is from January, i.e. does not reflect various changes made since then.

Technical note: Replace "valid CRE" with "pre-defined CRE". Data providers may have had their own entities defined in custom DTD files. Such DTDs can be referenced in a document's <!DOCTYPE declaration instead of a standardised public DTD.
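This distinction matters for any automated replace. For instance, if Python's html.unescape were used (my example, not necessarily the project's tool), it resolves only the pre-defined HTML5 entity table, so a provider-specific entity survives verbatim:

```python
import html

# html.unescape knows only the pre-defined HTML5 entity table, so a
# custom entity declared in a provider's own DTD passes through untouched.
print(html.unescape("&amp;"))      # decodes to: &
print(html.unescape("&Dodgers;"))  # unchanged: &Dodgers;
```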

jbrry commented 4 years ago

> Do you mean train.txt in Irish_Data > processed_ga_files_for_BERT_runs?

This is the training file used for the first multilingual_bert run. It uses almost all of the files in Irish_Data (minus NCI and Paracrawl I believe) and I ran them through a similar script to text_processor.py. I added this file for reproducibility, e.g. to compare to our first run. It is not used in any subsequent runs due to the pipeline changing.

alanagiasi commented 4 years ago

> Thanks for throwing another pair of eyes on it.
>
> &c is always followed by . and &c. is an alternative way of writing "etc.". ("&" derives from a ligature of "e" and "t".)

Sorry, I should have typed &c; (I omitted the semicolon), but as you and James pointed out the file is no longer used, so the four instances with the semicolons are moot. Thanks for the note on "pre-defined CRE" :) and the '&c' explanation, which I won't forg& :D