Description

Extend the colrev.acm_digital_library package by implementing search functionality for API searches. This involves scraping search results from the ACM Digital Library and processing exported results.
Implementation Notes
Use the BeautifulSoup library to develop a scraper that navigates the ACM Digital Library search interface, fetching search results along with full-text links (if available). The scraper should be able to run a search based on a given query and export the results; this may require paginating through multiple pages of results.
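As a rough starting point, the sketch below shows how such a scraper might look. The search URL, the AllField/startPage query parameters, and the CSS selectors are assumptions about the public ACM Digital Library site (there is no documented scraping API) and must be verified against the current markup:

```python
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://dl.acm.org/action/doSearch"


def search_acm(query: str, max_pages: int = 2) -> list[dict]:
    """Scrape result titles and full-text links from the ACM Digital Library."""
    records: list[dict] = []
    for page in range(max_pages):
        response = requests.get(
            BASE_URL,
            # AllField/startPage are assumed parameter names; verify them
            # against an actual search URL in the browser.
            params={"AllField": query, "startPage": page},
            headers={"User-Agent": "colrev-acm-scraper"},
            timeout=30,
        )
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        items = soup.select("li.search__item")  # assumed result-item selector
        if not items:  # no further result pages
            break
        for item in items:
            title_link = item.select_one("h5.issue-item__title a")
            if title_link is None:
                continue
            records.append(
                {
                    "title": title_link.get_text(strip=True),
                    "url": "https://dl.acm.org" + title_link["href"],
                }
            )
    return records
```

Any real implementation should also respect ACM's terms of use and add rate limiting between requests.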
Documentation should guide users on how to perform searches and load search results.
Implement unit tests simulating the retrieval process with a simple example. The colrev.crossref package provides an example of such unit tests.
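One way to simulate retrieval is to patch the HTTP layer and feed the scraper a small HTML fragment, so the test runs without network access. The test below assumes the hypothetical search_acm function from the sketch above (the import path is equally hypothetical) and pytest as the test runner:

```python
from unittest import mock

import requests

# Hypothetical import path; adjust to wherever search_acm actually lives.
from colrev.packages.acm_digital_library.src.scraper import search_acm


def test_search_acm_parses_titles() -> None:
    html = (
        '<li class="search__item">'
        '<h5 class="issue-item__title">'
        '<a href="/doi/10.1145/1234567">Example Paper</a>'
        "</h5></li>"
    )
    fake_response = mock.Mock(spec=requests.Response)
    fake_response.text = html
    with mock.patch("requests.get", return_value=fake_response):
        records = search_acm("digital work", max_pages=1)

    assert records == [
        {"title": "Example Paper", "url": "https://dl.acm.org/doi/10.1145/1234567"}
    ]
```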
The package development docs page explains the steps of developing CoLRev packages.
The search feed offers functionality for storing retrieved records (see the sketch below).
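The fragment below sketches how retrieved records might be stored via the feed inside the SearchSource's search method. The get_api_feed / add_update_record / save names are modeled on other CoLRev search sources (e.g., colrev.crossref) and should be verified against the CoLRev version you build on:

```python
# Fragment from a hypothetical search() method of the SearchSource class.
# All interface names are assumptions modeled on other CoLRev sources.
acm_feed = self.search_source.get_api_feed(
    review_manager=self.review_manager,
    source_identifier=self.source_identifier,
    update_only=(not rerun),
)
for retrieved_record in records:  # records retrieved by the scraper
    acm_feed.add_update_record(retrieved_record=retrieved_record)
acm_feed.save()
```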
User Story

The user initializes a CoLRev project using colrev init.
They run a search with colrev search -a colrev.acm_digital_library, specifying search parameters relevant to their topic.
The package retrieves records based on the parameters and saves them in the data/search directory of the project.
When users run colrev load, the records from the search directory are added to the data/records.bib file, which integrates all search results in the project.
When users run colrev search again, records are retrieved from the API, updating the existing records (the full command sequence is summarized below).
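Summarized as a command sequence, the workflow looks like this:

```bash
colrev init
colrev search -a colrev.acm_digital_library   # specify topic-specific parameters
colrev load                                   # adds retrieved records to data/records.bib
colrev search                                 # re-runs the API search, updating records
```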
Expected Effort
2 months, 3-4 people.