internetarchive / openlibrary

One webpage for every book ever published!
https://openlibrary.org
GNU Affero General Public License v3.0

Bulk upload delegation request #9726

Open avidseeker opened 3 months ago

avidseeker commented 3 months ago

I have a repository of 12K+ Arabic book metadata records: https://github.com/avidseeker/arabooks

If there is a bot to mass-upload them, that would be a great addition to OpenLibrary, which currently has very little coverage of Arabic books. (There isn't even an Arabic translation of OpenLibrary: https://github.com/internetarchive/openlibrary/pull/9673)

Thanks in advance.


Edit: To complete this issue one would need to parse the TSV files found at https://github.com/avidseeker/arabooks and create JSONL files that look similar to this:

{"identifiers": {"open_textbook_library": ["1581"]}, "source_records": ["open_textbook_library:1581"], "title": "Legal Fundamentals of Healthcare Law", "languages": ["eng"], "subjects": ["Medicine", "Law"], "publishers": ["University of West Florida Pressbooks"], "publish_date": "2024", "authors": [{"name": "Tiffany Jackman"}], "lc_classifications": ["RA440", "KF385.A4"]}
{"identifiers": {"open_textbook_library": ["1580"]}, "source_records": ["open_textbook_library:1580"], "title": "Introduction to Literature: Fairy Tales, Folk Tales, and How They Shape Us", "languages": ["eng"], "subjects": ["Humanities", "Literature, Rhetoric, and Poetry"], "publishers": ["University of West Florida Pressbooks"], "publish_date": "2023", "authors": [{"name": "Judy Young"}], "lc_classifications": ["PE1408"]}

The minimum required fields are: title, authors, publish_date, publishers, and source_records. The source_records value could be built from the name of the source plus an identifier; for example, for loal-en.tsv from the Library of Arabic Literature, the first item in the list might have "source_records": ["loal:9781479834129"].

Here are the publishers of the TSV files:

  1. awu-dam.tsv: Arab Writers Union
  2. lisanarb.tsv: contains a publisher entry
  3. loal-en.tsv and loal-ar.tsv: Library of Arabic Literature
  4. shamela.tsv: contains a publisher entry. Dates need to be merged from shamela-dates.tsv by matching on the title entry (see the merge sketch after this list).
  5. waqfeya.tsv: set "publishers": ["????"], since the actual publishers need to be determined on a one-by-one basis.
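
For item 4, a minimal sketch of the date merge, assuming both files are tab-separated and share a title column, and that the dates file has a date column (the actual column names in avidseeker/arabooks may differ):

import csv

# Build a title -> date lookup from shamela-dates.tsv.
# The "title" and "date" column names are assumptions; check the real TSV headers.
with open("shamela-dates.tsv", newline="", encoding="utf-8") as f:
    dates_by_title = {row["title"]: row["date"] for row in csv.DictReader(f, delimiter="\t")}

# Attach the matching date (if any) to each shamela.tsv row,
# so it can carry through to the JSONL conversion below.
with open("shamela.tsv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))
for row in rows:
    row["publish_date"] = dates_by_title.get(row["title"], "")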

Specifically, the values taken from the TSV and converted into JSONL would need to follow this schema. A script to do this would probably use Python's csv module to read the TSV file, put each row into the format specified in the import schema, call json.dumps() on it, and then write the result to a JSONL file, along the lines of the sketch below.
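
As a rough illustration, a conversion script for loal-en.tsv might look like the following. The column names (title, author, date, isbn) and the publisher string are assumptions and would need adjusting to the real TSV headers:

import csv
import json

SOURCE = "loal"  # prefix used to build source_records entries

with open("loal-en.tsv", newline="", encoding="utf-8") as tsv, \
        open("loal-en.jsonl", "w", encoding="utf-8") as out:
    for row in csv.DictReader(tsv, delimiter="\t"):
        record = {
            "title": row["title"],
            "authors": [{"name": row["author"]}],
            "publish_date": row["date"],
            "publishers": ["Library of Arabic Literature"],
            "source_records": [f"{SOURCE}:{row['isbn']}"],
        }
        # ensure_ascii=False keeps Arabic text readable in the output file.
        out.write(json.dumps(record, ensure_ascii=False) + "\n")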

The output JSONL file could be tested using the endpoint from #8122, though you'd probably want to test with only a few records at a time rather than the whole file.
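
A spot check could look something like this; the login flow and endpoint URL below are placeholders loosely based on the importing guide, and the actual endpoint from #8122 may differ:

import requests

# Placeholder URLs and credentials: consult #8122 and the Developer's Guide
# to Data Importing for the real endpoint and the privileges it requires.
s = requests.Session()
s.post("https://openlibrary.org/account/login",
       json={"username": "your-username", "password": "your-password"})

with open("loal-en.jsonl", encoding="utf-8") as f:
    for i, line in enumerate(f):
        if i >= 5:  # test only a few records, not the whole file
            break
        r = s.post("https://openlibrary.org/api/import", data=line.encode("utf-8"))
        print(r.status_code, r.text[:200])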

scottbarnes commented 3 months ago

@avidseeker, it would be great to increase the Arabic book coverage. There is an import API for sufficiently privileged patrons, which is documented here: https://github.com/internetarchive/openlibrary/wiki/Developer's-Guide-to-Data-Importing.

More specifically, there's a batch import endpoint (for which I just added the initial documentation), which would allow one to create a batch of records as JSONL for importing by a staff member: https://github.com/internetarchive/openlibrary/wiki/Developer's-Guide-to-Data-Importing#batch-importing-jsonl.

Currently this requires the fields title, authors, publish_date, publishers, and source_records. I skimmed some of the records you have and noticed the publisher tends to be missing, but perhaps that requirement can be relaxed for this import, though that's not a call I can make unilaterally. Hopefully we can get an answer to this sometime on Monday, Pacific time.

Also, what's the source of the records? That could help answer what the source_records field should be. It looks as if some may also have a page count? The full record schema can be found here: https://github.com/internetarchive/openlibrary-client/tree/master/olclient/schemata.

avidseeker commented 3 months ago

Thank you, that JSONL schema would definitely be helpful.

As for data sources, I updated the README of the repo to include their status. I updated the shamela source, which is the biggest collection, to include the requested fields. I also updated the LisanArab library with URLs to book cover images. The data sources listed under completely-imported are ready to be used.

hornc commented 3 months ago

@avidseeker do any of the original sources provide their bibliographic data in library MARC format? I had a brief look and could not find any.

avidseeker commented 3 months ago

No. These libraries are very fragmented individual efforts, and many of them are gradually disappearing. Waqfeya.net, for example, has significantly fewer entries than it did just two years ago.

cdrini commented 3 months ago

(And to add to @scottbarnes' answer: you basically need to coerce each book record into this format: https://github.com/internetarchive/openlibrary-client/blob/master/olclient/schemata/import.schema.json, and then save the results in a JSONL file :+1:)
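
One way to sanity-check the coerced records before handing them off, assuming the jsonschema package is installed and that import.schema.json resolves standalone (if it uses relative $refs, a resolver would also be needed):

import json
import urllib.request

import jsonschema

# Raw URL derived from the olclient link above.
SCHEMA_URL = ("https://raw.githubusercontent.com/internetarchive/"
              "openlibrary-client/master/olclient/schemata/import.schema.json")
schema = json.load(urllib.request.urlopen(SCHEMA_URL))

with open("books.jsonl", encoding="utf-8") as f:
    for n, line in enumerate(f, 1):
        try:
            jsonschema.validate(json.loads(line), schema)
        except jsonschema.ValidationError as e:
            print(f"line {n}: {e.message}")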

avidseeker commented 3 months ago

I understand. I'll take a look at it, but I might not have the time for it.

I opened this issue in the hope of finding someone experienced with scripting and parsing who has done a bulk import before. They might already have conversion scripts to make data files comply with the JSONL schema.

scottbarnes commented 3 months ago

@avidseeker, unless you suddenly have more free time and are itching to work on this, with your permission I will edit your initial comment in this issue to add something along the lines of the following, in the hope that it makes the issue more attractive to a contributor who might wish to work on it:

To complete this issue one would need to parse the TSV files found at https://github.com/avidseeker/arabooks and create JSONL files that look similar to this:

{"identifiers": {"open_textbook_library": ["1581"]}, "source_records": ["open_textbook_library:1581"], "title": "Legal Fundamentals of Healthcare Law", "languages": ["eng"], "subjects": ["Medicine", "Law"], "publishers": ["University of West Florida Pressbooks"], "publish_date": "2024", "authors": [{"name": "Tiffany Jackman"}], "lc_classifications": ["RA440", "KF385.A4"]}
{"identifiers": {"open_textbook_library": ["1580"]}, "source_records": ["open_textbook_library:1580"], "title": "Introduction to Literature: Fairy Tales, Folk Tales, and How They Shape Us", "languages": ["eng"], "subjects": ["Humanities", "Literature, Rhetoric, and Poetry"], "publishers": ["University of West Florida Pressbooks"], "publish_date": "2023", "authors": [{"name": "Judy Young"}], "lc_classifications": ["PE1408"]}

The minimum required fields are: title, authors, publish_date, publishers, and source_records. The source_records value could be built from the name of the source plus an identifier; for example, for loal-en.tsv from the Library of Arabic Literature, the first item in the list might have "source_records": ["loal:9781479834129"]. Many or perhaps all of the items in the TSVs won't have publishers listed, so to get around the import schema's requirements, for now just set the value to "publishers": ["????"] and we can cross that bridge later.

Specifically, the values taken from the TSV and converted into JSONL would need to follow this schema. A script to do this would probably use Python's csv module to read the TSV file, put each row into the format specified in the import schema, call json.dumps() on it, and then write the result to a JSONL file.

The output JSONL file could be tested using the endpoint from #8122, though you'd probably want to test with only a few records at a time rather than the whole file.

I think tasks like this are fun and I'm happy to help anyone interested in it.

avidseeker commented 3 months ago

Done. Thank you for breaking it down into steps. I added more clarification for the publishers' part.

scottbarnes commented 3 months ago

Great, thanks, @avidseeker!