arquivo / pwa-technologies

Arquivo.pt's main goal is the preservation of, and access to, web content that is no longer available online. During the development of the PWA IR (information retrieval) system we faced limitations in search speed, result quality, scalability, and usability. To cope with these, we modified the archive-access project (http://archive-access.sourceforge.net/) to support our web archive IR requirements. The code of NutchWAX, Nutch, and Wayback was adapted to meet those requirements. Several optimizations were added, such as simplifications in the way document versions are searched, and several bottlenecks were resolved. The PWA search engine is a public service at http://arquivo.pt and a research platform for web archiving. Like its predecessor Nutch, it runs over Hadoop clusters for distributed computing following the map-reduce paradigm. Its major features include fast full-text search, URL search, phrase search, faceted search (date, format, site), and sorting by relevance and date. The PWA search engine is highly scalable, and its architecture is flexible enough to enable the deployment of different configurations for different needs. Currently, it serves an archive collection searchable by full-text with 180 million documents ranging from 1996 to 2010.
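To illustrate the map-reduce paradigm mentioned above, here is a toy, single-process sketch of building an inverted index for full-text search. This is illustrative only (documents and function names are invented); the project's actual indexing runs on Nutch/Hadoop, not this code.

```python
from collections import defaultdict

def map_phase(doc_id, text):
    """Map step: emit (term, doc_id) pairs for each word in a document."""
    for term in text.lower().split():
        yield term, doc_id

def reduce_phase(mapped_pairs):
    """Reduce step: group postings by term into an inverted index."""
    index = defaultdict(set)
    for term, doc_id in mapped_pairs:
        index[term].add(doc_id)
    return index

# Toy collection standing in for archived documents.
docs = {
    "doc1": "web archive preservation",
    "doc2": "web archive search",
}
pairs = [pair for doc_id, text in docs.items() for pair in map_phase(doc_id, text)]
index = reduce_phase(pairs)
print(sorted(index["web"]))  # ['doc1', 'doc2']
```

In a real Hadoop deployment the map and reduce steps run on different nodes and the framework handles the grouping by key between them; the sketch only shows the data flow.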
http://www.arquivo.pt
GNU General Public License v3.0

Results not visible #1161

Open vitgou opened 3 years ago

vitgou commented 3 years ago

What is the URL that originated the issue? https://arquivo.pt/page/search?query=%22ricardo+jorge+salvador+lopes%22&dateStart=01/01/1996&dateEnd=02/03/2021&pag=prev&start=10&hitsPerPage=10&hitsPerDup=2&dedupField=site&l=pt

What happened? The application reports 41 results from 1996 to 2021, but none of them are visible.

What should have happened? The 41 results should be visible.

Screenshots: Screenshot from 2021-08-31 15-39-12 (image attachment)

dcgomes commented 2 years ago

Investigate the origin of the inconsistency. Check the results received from the API.
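One quick check is to compare the total the API reports against the number of items it actually returns; the bug manifests as a non-zero count with an empty item list. The field names below are hypothetical placeholders, not necessarily the real API's response schema.

```python
import json

def check_consistency(api_response_text):
    """Compare the reported hit count with the number of items returned."""
    data = json.loads(api_response_text)
    reported = data.get("estimated_nr_results", 0)  # hypothetical field name
    returned = len(data.get("response_items", []))  # hypothetical field name
    return {
        "reported": reported,
        "returned": returned,
        # Consistent if both are zero or both are non-zero.
        "consistent": (reported == 0) == (returned == 0),
    }

# Simulated response reproducing the bug: 41 hits reported, none returned.
sample = json.dumps({"estimated_nr_results": 41, "response_items": []})
print(check_consistency(sample))
```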

VascoRatoFCCN commented 2 years ago

This has to do with content removed due to legal requests. The indexes still hold references to the removed content, but the content itself is gone, so we get an empty search result. To fix this properly we'd have to re-index every collection that used to contain removed content.

In my opinion, we can't justify allocating the computational resources required to fix an edge case like this.

VascoRatoFCCN commented 2 years ago

A simpler fix would be to blacklist these URLs from the API results. We'll do it in the next milestone.
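A minimal sketch of such a blacklist filter, applied before the API returns results. The data shapes, field name, and URLs here are assumptions for illustration only.

```python
# URLs whose content was removed for legal reasons but is still
# referenced in the indexes (example entries, not real data).
BLACKLIST = {
    "http://example.com/removed-page",
}

def filter_results(results):
    """Drop index hits whose original content has been removed."""
    # "originalURL" is a hypothetical field name for the archived page's URL.
    return [r for r in results if r.get("originalURL") not in BLACKLIST]

hits = [
    {"originalURL": "http://example.com/removed-page", "title": "gone"},
    {"originalURL": "http://example.com/ok-page", "title": "still archived"},
]
print(filter_results(hits))
```

Filtering at the API layer avoids re-indexing whole collections, at the cost of maintaining the blacklist and slightly skewing the reported hit counts.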

VascoRatoFCCN commented 1 year ago

Will reevaluate after the Solr implementation for textsearch.