arquivo / pwa-technologies

Arquivo.pt's main goal is the preservation of, and access to, web content that is no longer available online. During the development of the PWA IR (information retrieval) system we faced limitations in search speed, quality of results, scalability, and usability. To cope with this, we modified the archive-access project (http://archive-access.sourceforge.net/) to support our web archive IR requirements. The code of NutchWAX, Nutch, and Wayback was adapted to meet these requirements. Several optimizations were added, such as simplifying how document versions are searched, and several bottlenecks were resolved. The PWA search engine is a public service at http://arquivo.pt and a research platform for web archiving. Like its predecessor Nutch, it runs on Hadoop clusters for distributed computing following the map-reduce paradigm. Its major features include fast full-text search, URL search, phrase search, faceted search (date, format, site), and sorting by relevance and date. The PWA search engine is highly scalable, and its architecture is flexible enough to allow different configurations to be deployed for different needs. It currently serves an archive collection of 180 million documents, ranging from 1996 to 2010, searchable by full-text.
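As a quick illustration of how a client might compose a full-text query with a date facet against the public textsearch endpoint, here is a minimal sketch. The parameter names ("from"/"to" as 14-digit timestamps, "maxItems") are assumptions for illustration, not taken from official API documentation:

```python
from urllib.parse import urlencode

# Build a hypothetical full-text query with a date-range facet.
# Parameter names ("from"/"to"/"maxItems") are assumptions for
# illustration, not confirmed against the official API docs.
params = {
    "q": "eleições 2009",
    "from": "19960101000000",
    "to": "20101231235959",
    "maxItems": "10",
}
url = "https://arquivo.pt/textsearch?" + urlencode(params)
print(url)
```

Note that urlencode percent-encodes the accented query terms as UTF-8, which is the encoding the service would need to interpret consistently to avoid the snippet problems discussed in the issue below.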
http://www.arquivo.pt
GNU General Public License v3.0

Some snippets are displaying wrongly encoded messages #1269

Open VascoRatoFCCN opened 2 years ago

VascoRatoFCCN commented 2 years ago

Sometimes snippets display accented characters incorrectly. For example:

https://arquivo.pt/textsearch?q=%22Inaugurado%20na%20Crimeia%20monumento%20aos%20militares%20russos%20que%20anexaram%22

The snippet shows:

And it should be:

VascoRatoFCCN commented 2 years ago

A good example of this:

https://arquivo.pt/page/search?q=soccer&l=pt

rncampos commented 1 year ago

Vasco, thanks for opening this issue. Please have a look at this package (https://pypi.org/project/clean-text/), which I found only after raising this question. It has a fix_unicode parameter which works very well.

from cleantext import clean

clean(
    text,
    fix_unicode=True,  # fix various unicode errors
    to_ascii=False,    # transliterate to closest ASCII representation
    lower=False,       # lowercase text
)
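For context, fix_unicode repairs mojibake: text that was encoded as UTF-8 but decoded with a single-byte codepage such as Latin-1, which is a plausible cause of the broken snippets above. A minimal stdlib sketch of that one specific repair (the sample string is a made-up example, not taken from the archive):

```python
def fix_mojibake(s: str) -> str:
    """Undo the common 'UTF-8 bytes decoded as Latin-1' error."""
    try:
        # Round-trip: recover the original bytes, then decode them correctly.
        return s.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        # Not that kind of mojibake (or already correct): leave it alone.
        return s

print(fix_mojibake("CrimÃ©ia"))  # recovers "Criméia"
print(fix_mojibake("Crimeia"))   # already fine, returned unchanged
```

Libraries like clean-text handle many more error patterns than this single Latin-1 case, so using the package rather than a hand-rolled fix is the safer choice for the snippet pipeline.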