arquivo / pwa-technologies

Arquivo.pt's main goal is the preservation of, and access to, web content that is no longer available online. During the development of the PWA IR (information retrieval) system we faced limitations in search speed, result quality, scalability, and usability. To cope with these, we modified the archive-access project (http://archive-access.sourceforge.net/) to support our web archive IR requirements. The code of Nutchwax, Nutch, and Wayback was adapted to meet those requirements, and several optimizations were added, such as simplifying the way document versions are searched and resolving several bottlenecks. The PWA search engine is a public service at http://archive.pt and a research platform for web archiving. Like its predecessor Nutch, it runs on Hadoop clusters for distributed computing following the map-reduce paradigm. Its major features include fast full-text search, URL search, phrase search, faceted search (date, format, site), and sorting by relevance and date. The PWA search engine is highly scalable, and its architecture is flexible enough to enable the deployment of different configurations to respond to different needs. Currently, it serves an archive collection of 180 million documents, ranging from 1996 to 2010, searchable by full text.
http://www.arquivo.pt
GNU General Public License v3.0

Tomcat is failing with OutOfMemoryError #1352

Closed by franciscoesteveira 9 months ago

franciscoesteveira commented 9 months ago

In June 2023, an OutOfMemoryError was detected in Tomcat, which hosts several apps.

When the textsearch API is requested with a String representing a URL, the request bypasses some safeguards that exist to improve performance and prevent API abuse, such as the limits on the number of returned items and on the number of query terms. This can be seen here: https://github.com/arquivo/pwa-technologies/blob/528f90382e165a354750fa9e791925766abba8c0/PwaArchive-access/projects/nutchwax/nutchwax-thirdparty/nutch/src/java/org/apache/nutch/searcher/NutchBean.java#L283
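A minimal sketch of the kind of bypass described above, assuming limits are only enforced on the non-URL code path. All names (QueryGuard, effectiveMaxItems, the limit values, the URL heuristic) are illustrative, not the actual NutchBean API:

```java
// Hypothetical illustration: caps on returned items are applied only when the
// query is treated as free text, so a query that looks like a URL skips them.
public class QueryGuard {
    static final int MAX_ITEMS = 500; // assumed cap for ordinary queries

    // Crude heuristic standing in for the real URL detection.
    static boolean looksLikeUrl(String q) {
        return q.matches("[\\w.-]+\\.[a-z]{2,}");
    }

    /** Effective maxItems for a request; URL-style queries skip the cap. */
    static int effectiveMaxItems(String query, int requestedMaxItems) {
        if (looksLikeUrl(query)) {
            return requestedMaxItems; // uncapped: the problem described above
        }
        return Math.min(requestedMaxItems, MAX_ITEMS);
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxItems("alteryx.cn", 50000));     // 50000
        System.out.println(effectiveMaxItems("lisbon history", 50000)); // 500
    }
}
```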

The system has been receiving requests that match these conditions, such as: /textsearch?q=alteryx.cn&offset=0&maxItems=50000&siteSearch=&type=&collection=

This is a problem because, for every item returned, the API must perform two RPC calls to the query server the item was fetched from. A single request with maxItems=50000 therefore triggers 50000*2 = 100000 RPC calls. It was observed that, under a high volume of requests and returned items, some of these RPC calls can leak memory.
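The amplification arithmetic above can be made explicit with a small sketch. The constant of 2 RPC calls per item comes from the two call sites linked below; the class and method names here are illustrative:

```java
// Back-of-the-envelope cost model: each returned item costs two RPC calls
// to the query server it was fetched from (the two call sites linked below).
public class RpcAmplification {
    static final int RPC_CALLS_PER_ITEM = 2;

    /** Total RPC calls triggered by one textsearch request. */
    static long rpcCalls(long maxItems) {
        return maxItems * RPC_CALLS_PER_ITEM;
    }

    public static void main(String[] args) {
        System.out.println(rpcCalls(50000)); // 100000, as in the issue
        System.out.println(rpcCalls(500));   // 1000 once maxItems is capped
    }
}
```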

1st RPC call: https://github.com/arquivo/page-search/blob/9a6b527320751ee942edf36b5e0b4b35eb2432fb/page-search-api/src/main/java/pt/arquivo/services/nutchwax/NutchWaxSearchService.java#L201 https://github.com/arquivo/pwa-technologies/blob/528f90382e165a354750fa9e791925766abba8c0/PwaArchive-access/projects/nutchwax/nutchwax-thirdparty/nutch/src/java/org/apache/nutch/searcher/DistributedSearch.java#L402

2nd RPC call: https://github.com/arquivo/page-search/blob/9a6b527320751ee942edf36b5e0b4b35eb2432fb/page-search-api/src/main/java/pt/arquivo/services/nutchwax/NutchWaxSearchService.java#L202 https://github.com/arquivo/pwa-technologies/blob/528f90382e165a354750fa9e791925766abba8c0/PwaArchive-access/projects/nutchwax/nutchwax-thirdparty/nutch/src/java/org/apache/nutch/searcher/DistributedSearch.java#L461

franciscoesteveira commented 9 months ago

To mitigate this problem, the maxItems parameter is now hard-capped at 500 items for each textsearch API request, as seen in this commit: https://github.com/arquivo/page-search/commit/9a6b527320751ee942edf36b5e0b4b35eb2432fb
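A minimal sketch of the mitigation: clamp the client-supplied maxItems before the search runs. The cap value of 500 is from the commit above; the class and method names are hypothetical, not the actual page-search code:

```java
// Hypothetical clamp applied to the client-supplied maxItems parameter.
public class MaxItemsCap {
    static final int HARD_CAP = 500; // value from the mitigation commit

    /** Returns the maxItems value actually used for the search. */
    static int capMaxItems(int requested) {
        if (requested < 0) {
            return 0; // reject negative values outright
        }
        return Math.min(requested, HARD_CAP);
    }

    public static void main(String[] args) {
        System.out.println(capMaxItems(50000)); // clamped to 500
        System.out.println(capMaxItems(50));    // small requests pass through
    }
}
```

With this in place, the worst-case RPC fan-out per request drops from 100000 calls to 1000.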