bigscience-workshop / data_tooling

Tools for managing datasets for governance and training.
Apache License 2.0

Crawling curated list of sites: BigScience catalog app URLs #298

Open yjernite opened 2 years ago

yjernite commented 2 years ago

We want to be able to obtain all web and media content associated with a specific list of pre-identified domain names.

This issue tracks domain names identified in the BigScience Data Cataloging Event

The steps to follow are:

  1. filter the CommonCrawl (or another archive) for all WARC records with one of the given domain names (see the index sketch after this list)
    • filtering all dumps from the last two years
  2. obtain overall metrics and per-domain metrics
    • page counts, content languages, content types, etc.
  3. upload all of the relevant WARC records for each domain name to a HF dataset in the BigScience Catalogue Data Organization
    • minimal filtering of WARC records to include human-readable pages AND pages that reference links to objects we want to download (e.g. PDFs)
    • extract the HTML tags corresponding to all URLs in the WARC entries (see the WARC-record sketch below)
    • optional: post-process the above list to identify outgoing links and extract their domain names and content types
    • optional: run text extraction

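For step 1 and the per-domain metrics of step 2, here is a minimal sketch that queries the public CommonCrawl CDX index API instead of scanning the raw dumps directly. This is an assumed approach, not part of the plan above; the index name is only an example, and the `matchType=domain` query plus the availability of the `mime-detected` and `languages` fields should be verified per dump at https://index.commoncrawl.org.

```python
import json
from collections import Counter

import requests

# One index per CommonCrawl dump; each dump from the last two years would be
# queried in turn. The index name below is an example, not a fixed choice.
CDX_API = "https://index.commoncrawl.org/CC-MAIN-2023-50-index"


def iter_index_records(domain, api=CDX_API):
    """Yield one index record (dict) per capture of the given domain."""
    base_params = {"url": domain, "matchType": "domain", "output": "json"}
    # Ask the index how many result pages exist for this query.
    resp = requests.get(api, params={**base_params, "showNumPages": "true"}, timeout=60)
    resp.raise_for_status()
    pages = json.loads(resp.text)["pages"]
    for page in range(pages):
        resp = requests.get(api, params={**base_params, "page": page}, timeout=60)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            yield json.loads(line)


def domain_metrics(domain):
    """Page count plus content-type and language breakdowns for one domain."""
    mimes, langs, pages = Counter(), Counter(), 0
    for rec in iter_index_records(domain):
        pages += 1
        mimes[rec.get("mime-detected", rec.get("mime", "unknown"))] += 1
        langs[rec.get("languages", "unknown")] += 1
    return {"pages": pages, "content_types": mimes, "languages": langs}


if __name__ == "__main__":
    print(domain_metrics("example.org"))
```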
In particular, the list of domain names mentioned in outgoing links may be used to obtain a "depth 1 pseudo-crawl" by running the same process again.
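For the WARC-record handling in step 3 and the outgoing-link extraction that feeds this depth-1 pseudo-crawl, a minimal sketch, assuming `warcio` and `beautifulsoup4` are available and that records are fetched by byte range from https://data.commoncrawl.org using the `filename`, `offset`, and `length` fields returned by the index query above:

```python
import io
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup
from warcio.archiveiterator import ArchiveIterator

# Public host for CommonCrawl data files; verify before large-scale use.
CC_DATA = "https://data.commoncrawl.org/"


def fetch_warc_record(filename, offset, length):
    """Fetch one WARC record by byte range (values come from the index query)."""
    end = int(offset) + int(length) - 1
    resp = requests.get(
        CC_DATA + filename,
        headers={"Range": f"bytes={offset}-{end}"},
        timeout=60,
    )
    resp.raise_for_status()
    for record in ArchiveIterator(io.BytesIO(resp.content)):
        return record


def outgoing_links(record):
    """Return (absolute URL, domain) pairs from an HTML response record."""
    base = record.rec_headers.get_header("WARC-Target-URI")
    soup = BeautifulSoup(record.content_stream().read(), "html.parser")
    links = []
    for tag in soup.find_all("a", href=True):
        url = urljoin(base, tag["href"])
        links.append((url, urlparse(url).netloc))
    return links
```

The resulting (URL, domain) pairs can then be aggregated per source domain to produce the candidate domain list for the second pass.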

yjernite commented 2 years ago

cc @sebastian-nagel

sebastian-nagel commented 2 years ago

self-assign