dwyl / english-words

:memo: A text file containing 479k English words for all your dictionary/word-based projects e.g: auto-completion / autosuggestion

Create words_alpha_clean.txt #108

Open Orivoir opened 3 years ago

Orivoir commented 3 years ago

Add a file words_alpha_clean.txt that is a copy of words_alpha.txt with the words that do not exist in English removed. The filtering was carried out with the WordsAPI, which allows looking up English words: from a script I called the API for each word, and whenever a word did not exist I removed it from the file. You can find the API docs here. The exact filter for a word is based on the API's frequency data:

```js
if (response.word && typeof response.frequency === "object") {
  if (response.frequency.perMillion >= 15) {
    // word is kept
    realWords.push(response.word);
  }
  // else: word is removed
}
```

The documentation says the following about the frequency data:

> This is the number of times the word is likely to appear in any English corpus, per million words.
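
A minimal Python sketch of the whole per-word pass might look like this (the /frequency endpoint and headers follow WordsAPI's RapidAPI docs as I understand them; the key is a placeholder and the file handling is my assumption, not the exact script):

```python
import requests

API_URL = "https://wordsapiv1.p.rapidapi.com/words/{}/frequency"
HEADERS = {
    "X-RapidAPI-Key": "YOUR_KEY",  # placeholder, not a real key
    "X-RapidAPI-Host": "wordsapiv1.p.rapidapi.com",
}

def is_real_word(word):
    """Keep a word only if WordsAPI knows it and it is frequent enough."""
    resp = requests.get(API_URL.format(word), headers=HEADERS)
    if resp.status_code != 200:  # unknown words come back as an error status
        return False
    data = resp.json()
    freq = data.get("frequency")
    return isinstance(freq, dict) and freq.get("perMillion", 0) >= 15

with open("words_alpha.txt") as src, open("words_alpha_clean.txt", "w") as dst:
    for line in src:
        word = line.strip()
        if word and is_real_word(word):
            dst.write(word + "\n")
```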

jcnmsg commented 2 years ago

Nice work, but of 350,000+ lines only around 2,500 survived? Seems like the parameters used were a little too strict...

Orivoir commented 2 years ago

I used a less strict filter with the same API's frequency data and got ~30 000 words as a result, but I still think some of them are not real English words. See 4971374b

jcnmsg commented 2 years ago

~30 000 would be closer to reality, but it appears to have duplicated a bunch of words as well, which were not duplicated in the original words_alpha.txt. See bedrock, bedroll, bedroom, bedspread, bedstead as examples... A simple pass like the sketch below (file names are hypothetical) would drop the repeated entries while keeping the original order.
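
```python
def dedupe(src_path, dst_path):
    """Copy src_path to dst_path, skipping words already seen."""
    seen = set()
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            word = line.strip()
            if word and word not in seen:
                seen.add(word)
                dst.write(word + "\n")

dedupe("words_alpha_clean.txt", "words_alpha_dedup.txt")
```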

Timokasse commented 2 years ago

> Nice work, but of 350,000+ lines only around 2,500 survived? Seems like the parameters used were a little too strict...

The API is free for 2500 words per day. That is probably why....

jcnmsg commented 2 years ago

> The API is free for 2500 words per day. That is probably why....

@Orivoir did get ~30 000 words just by using different parameters, so that was probably not it.

aploium commented 2 years ago

Maybe it kills too many words. For example: blacklist is in it, but whitelist isn't.
sale is not in, but sales is.

white lives matter, too [:joke:]

ghost commented 2 years ago

Hi all, I have run words_alpha.txt through the Python "nltk" library. The total is 210,693 words. This seems to be a bit better, but I have noticed there are still a few oddities in there (maybe things like common abbreviations remain, which aren't actual words). But overall I think this has cleaned out the non-English words.

words_alpha_clean.txt

silverwings15 commented 2 years ago

@SDidge appreciate the share!

jcnmsg commented 2 years ago

@SDidge At first glance I can't seem to find any non-English words in the file, so I'd say this one is the cleanest file so far, nice work!

Timokasse commented 2 years ago

> Hi all, I have run words_alpha.txt through the Python "nltk" library. The total is 210,693 words. This seems to be a bit better, but I have noticed there are still a few oddities in there (maybe things like common abbreviations remain, which aren't actual words). But overall I think this has cleaned out the non-English words.
>
> words_alpha_clean.txt

@SDidge, what exactly did you use from the NLTK library to check the list of words?

ghost commented 2 years ago

@Timokasse, I just checked if the word existed in the "words" corpus.

E.g.

```python
from nltk.corpus import words

english = set(words.words())
clean = [word for word in words_alpha if word in english]
```

Something like this.
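
For anyone reproducing this, a fuller self-contained version of that check might look like the following (the file names, the one-time corpus download, and the lowercase normalization are my assumptions, not necessarily what @SDidge did):

```python
import nltk

nltk.download("words")  # fetch the corpus on first run
from nltk.corpus import words

# Build a set once; membership tests against the raw list would be very slow.
english = set(w.lower() for w in words.words())

with open("words_alpha.txt") as src, open("words_alpha_clean.txt", "w") as dst:
    for line in src:
        word = line.strip()
        if word and word in english:
            dst.write(word + "\n")
```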