Tonight the docs will go through a huge overhaul; a ton of stuff is missing, including what you mentioned above. Thanks.
The tutorial to do so is now up on this link: http://newspaper.readthedocs.org/en/latest/user_guide/advanced.html#adding-new-languages
The language I want to add is a non-Latin language. Do you have an idea when you are going to update the docs regarding non-Latin language support? Thanks in advance.
Which language, if you don't mind me asking? I'll update it when school clears up, maybe in 1-2 weeks.
@codelucas language code is "BD".
Re-opened
Hi, I am also interested in new non-Latin languages. I'd like to add a bunch of Indian languages. Really appreciate your work!
And I'm interested very much in Bangla. Is there any way to have support for Bangla, or to make it supportable?
Thanks for all of this enthusiasm to add more! Here is what I know so far (more will be added to this thread, and later to the docs):
First, please reference this file and read from the highlighted line all the way down to the end of the file.
https://github.com/codelucas/newspaper/blob/master/newspaper/text.py#L57
One aspect of our text extraction algorithm revolves around counting the number of stopwords present in a text. Stopwords are some of the most common, short function words in a language, such as "the", "is", "at", "which", and "on".
Reference this line to see it in action: https://github.com/codelucas/newspaper/blob/master/newspaper/extractors.py#L669
So for Latin languages, it is pretty basic. We first provide a list of stopwords in stopwords-<language-code>.txt form. We then take some input text and tokenize it into words by splitting on whitespace. After that we perform some bookkeeping and then count the number of stopwords present.
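As a rough illustration, the Latin-language path boils down to something like the sketch below. This is not newspaper's exact code (see newspaper/text.py for the real thing); it just shows the whitespace-tokenize-then-count idea:

```python
# Minimal sketch of the Latin-language stopword count -- illustrative,
# not newspaper's exact implementation.
import re

def count_stopwords(text, stopwords):
    """Whitespace-tokenize `text` and count tokens that are stopwords."""
    # Drop punctuation so "the," still matches the stopword "the"
    cleaned = re.sub(r"[^\w\s]", "", text, flags=re.UNICODE)
    tokens = cleaned.lower().split()  # split on whitespace
    return sum(1 for token in tokens if token in stopwords)

# In newspaper these come from newspaper/resources/text/stopwords-en.txt
stopwords = {"the", "is", "at", "which", "on"}
print(count_stopwords("The cat sat on the mat, which is at home.", stopwords))
# -> 6
```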
For non-Latin languages, as you may have noticed in the code above, we need to tokenize the words in a different way; splitting on whitespace simply won't work for languages like Chinese or Arabic. For Chinese we are using a whole new open source library called jieba to split the text into words. For Arabic we are using a special nltk tokenizer to do the same job.
So, to add full text extraction for a new (non-Latin) language, we need to:
1.) Push up a stopwords file in the format of stopwords-<2-char-language-code>.txt in newspaper/resources/text/.
2.) Provide a way of splitting/tokenizing text in that foreign language into words. Here are some examples for Chinese, Arabic, and English (see the sketch below):
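Roughly, the per-language tokenizers look like this. jieba.cut and nltk's wordpunct_tokenize are real APIs, but the wrapper functions are an illustrative sketch, not newspaper's actual classes in text.py:

```python
# Illustrative per-language tokenizers; the wrappers are a sketch,
# not newspaper's actual classes (those live in newspaper/text.py).
import jieba                                  # Chinese word segmentation
from nltk.tokenize import wordpunct_tokenize  # copes well with Arabic script

def tokenize_english(text):
    # Latin languages: splitting on whitespace is good enough
    return text.split()

def tokenize_chinese(text):
    # Chinese has no spaces between words, so segment with jieba
    return list(jieba.cut(text, cut_all=True))

def tokenize_arabic(text):
    # Arabic does use spaces, but a real tokenizer handles punctuation
    # better; a stemmer (e.g. nltk's ISRIStemmer) can further normalize
    # words before the stopword comparison
    return wordpunct_tokenize(text)
```

Whatever the language, the output is a plain list of words that the stopword counter above can consume unchanged; only the splitting step differs.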
For Latin languages:
1.) Push up a stopwords file in the format of stopwords-<2-char-language-code>.txt in newspaper/resources/text/.
and we are done!
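For anyone wanting Bangla specifically: the ISO 639-1 language code is bn (BD is the country code), and the stopwords file is just a plain-text list, one word per line. A hypothetical stopwords-bn.txt could start like this (these Bengali function words are illustrative examples, not a vetted list):

```
এবং
কিন্তু
এই
যে
থেকে
বা
না
```

Since Bangla, unlike Chinese, delimits words with spaces, the plain whitespace tokenizer should work, so step 1 alone may be enough.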
Guide on adding new languages: http://newspaper.readthedocs.org/en/latest/user_guide/advanced.html#adding-new-languages
Can you add a new section describing how to add support for a new language?