MaRDI4NFDI / docker-importer

Import data from external data sources into the portal
https://mardi4nfdi.github.io/docker-importer

[Epic] Import additional data from zbMATH Open #3

Open aot29 opened 2 years ago

aot29 commented 2 years ago

Issue description: Additional data from the zbMATH Open API should be imported into the MaRDI Portal. Related: https://github.com/MaRDI4NFDI/docker-importer/issues/6

Remarks:

TODO:

Acceptance-Criteria

Checklist for this issue:

Using Crossref data: "No sign-up is required to use the REST API, and the data can be treated as facts from members. The data is not subject to copyright, and you may use it for any purpose.

Crossref generally provides metadata without restriction; however, some abstracts contained in the metadata may be subject to copyright by publishers or authors." (https://www.crossref.org/documentation/retrieve-metadata/rest-api/)

aot29 commented 2 years ago

@physikerwelt according to the documentation and the examples, the OAI API should return JSON, but it only returns XML. That's OK for me, but is it the intended behavior?

physikerwelt commented 2 years ago

Yes, the response content type is incorrect. I'll make a pull request.

physikerwelt commented 2 years ago

The content type itself is correct. It is only the Swagger API definition that expects the wrong content type.

```
< HTTP/1.1 200 OK
< Date: Mon, 17 Jan 2022 18:48:33 GMT
< Server: Apache/2.4.38 (Debian)
< Content-Length: 5810
< Vary: Accept-Encoding
< Content-Type: text/xml; charset=utf-8
```

aot29 commented 2 years ago

@Hyper-Node suggested that we wouldn't necessarily need to import the data if it is already held in a graph database on the zbMATH Open side (I didn't find any documentation on how the backend database is implemented). If so, that would be the most elegant solution.

Otherwise, would we import preview data (title, author, DOI, keywords, etc., but no abstract) for all publications in zbMATH Open? Taking licensing limitations into account, that would be roughly 3 million entries. If so, I would first prototype it as described in this issue, then build an import container and put it on the server to run overnight. I estimate that importing 3 million entries (probably in batches of 100) via QuickStatements would take a couple of weeks.
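For concreteness, a minimal sketch of the batching I have in mind, assuming QuickStatements V1 text commands; the property IDs P1/P2 are placeholders, not the portal's real properties:

```python
def to_quickstatements(pub):
    """Render one preview record as QuickStatements V1 commands.

    P1/P2 are hypothetical placeholders for the portal's DOI and
    author-name-string properties; the real IDs will differ.
    """
    lines = ["CREATE", f'LAST|Len|"{pub["title"]}"']  # English label
    lines.append(f'LAST|P1|"{pub["doi"]}"')
    for author in pub["authors"]:
        lines.append(f'LAST|P2|"{author}"')
    return "\n".join(lines)


def batches(records, size=100):
    """Chunk the full record list into batches of 100, as proposed above."""
    for i in range(0, len(records), size):
        yield records[i : i + size]
```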

@Hyper-Node @physikerwelt what do you think?

physikerwelt commented 2 years ago

I would say stay focused. There is no graph database for zbMATH Open. I would be extremely happy if we could develop a tool that is capable of importing individual zbMATH Open entries (or a batch of entries) on demand, without creating duplicates (updating existing entries instead). If I interpret the ticket description correctly, that is what this ticket is about. I am not sure we need to import all of zbMATH at this point in time. I would like to create a different ticket for that and keep the focus here on building the first version of the zbMATH Open -> MaRDI portal ingestion pipeline.
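To make the "no duplicates" requirement concrete, here is one possible shape for the lookup step, as a sketch only; it assumes the portal exposes a standard Wikibase SPARQL endpoint, and both the endpoint URL and the zbMATH-ID property below are placeholders:

```python
import requests

# Placeholders: the portal's real SPARQL endpoint and the property
# holding the zbMATH identifier will differ.
SPARQL_ENDPOINT = "https://portal.example.org/sparql"
ZBMATH_ID_PROPERTY = "P99"


def find_existing_item(zbmath_id):
    """Return the QID of an item already carrying this zbMATH ID, or None."""
    query = (
        f'SELECT ?item WHERE {{ ?item wdt:{ZBMATH_ID_PROPERTY} "{zbmath_id}" }} LIMIT 1'
    )
    resp = requests.get(
        SPARQL_ENDPOINT, params={"query": query, "format": "json"}, timeout=30
    )
    resp.raise_for_status()
    bindings = resp.json()["results"]["bindings"]
    if not bindings:
        return None  # safe to create a new item
    return bindings[0]["item"]["value"].rsplit("/", 1)[-1]  # existing item: update it
```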

physikerwelt commented 2 years ago

> @physikerwelt according to the documentation and the examples, the OAI API should return JSON, but it only returns XML. That's OK for me, but is it the intended behavior?

Fixed now, cf. https://oai.zbmath.org/

[Screenshot: zbMATH Open OAI-PMH API]

aot29 commented 2 years ago

Thanks. ~~Next question: the ListSets endpoint always crashes with error 500. I can work around this using helper/filter, so that's OK for me so far.~~

aot29 commented 2 years ago

The helper/filter endpoint crashes with certain parameters, e.g.

```
curl -X 'GET' 'https://oai.zbmath.org/v1/helper/filter?filter=software:FORTRAN&metadataPrefix=oai_zb_preview' -H 'accept: text/xml'
```

returns INTERNAL SERVER ERROR, while

```
curl -X 'GET' 'https://oai.zbmath.org/v1/helper/filter?filter=software:Gfan&metadataPrefix=oai_zb_preview' -H 'accept: text/xml'
```

works fine.

Since FORTRAN is probably much more popular than Gfan, could this be an error related to the size of the result set?
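If the crash is indeed size-related, paging should sidestep it. A minimal sketch of harvesting page by page via the standard OAI-PMH resumptionToken mechanism (treating these software filters as OAI sets is my assumption, not documented behavior):

```python
import xml.etree.ElementTree as ET

import requests

OAI_BASE = "https://oai.zbmath.org/v1/"
OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"


def harvest(set_spec, prefix="oai_zb_preview"):
    """Yield <record> elements one page at a time, following resumptionTokens."""
    params = {"verb": "ListRecords", "metadataPrefix": prefix, "set": set_spec}
    while True:
        resp = requests.get(OAI_BASE, params=params, timeout=60)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        yield from root.iter(f"{OAI_NS}record")
        token = root.find(f".//{OAI_NS}resumptionToken")
        if token is None or not (token.text or "").strip():
            break  # no token on the last page
        # Per OAI-PMH, the token must be the only argument on follow-up requests.
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}
```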

(also mailing OAI support)

sedimentation-fault commented 2 years ago

Don't go to the trouble of downloading 4.2+ million records from zbMATH, at least not yet. Two-thirds of all XML records are records whose title, authors, and many other elements contain just the string

zbMATH Open Web Interface contents unavailable due to conflicting licenses.

Example:

Let's take the item with "DE number" 3224368 and form the OAI-PMH URL for the "GetRecord" endpoint:

https://oai.zbmath.org/v1/?verb=GetRecord&identifier=oai%3Azbmath.org%3A3224368&metadataPrefix=oai_zb_preview

Download this with your favorite web client to, say, 3224368.xml and inspect it; you'll see what I mean. This happens to 2 out of every 3 items I try at random.
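As a quick sanity check, here is a sketch that flags such placeholder records so an importer could skip them (it matches on element text, since I haven't pinned down the exact oai_zb_preview schema):

```python
import xml.etree.ElementTree as ET

PLACEHOLDER = (
    "zbMATH Open Web Interface contents unavailable due to conflicting licenses."
)


def is_license_restricted(xml_path):
    """True if any element of the downloaded record carries only the placeholder."""
    root = ET.parse(xml_path).getroot()
    return any(el.text and el.text.strip() == PLACEHOLDER for el in root.iter())


# e.g. is_license_restricted("3224368.xml") for the record fetched above
```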

Interestingly enough, trying the bibtex for the same item from

https://zbmath.org/bibtex/03224368.bib

will get you full information on exactly those fields where OAI-PMH encounters "conflicting licenses" - go figure.

I would understand if this appeared in XML elements that might contain some copyrightable information, but I fail to see how titles or author names fall into any category of items where licensing restrictions might apply whatsoever...

physikerwelt commented 2 years ago

Indeed, titles and authors cannot be exposed via the API due to license restrictions. The terms and conditions of zbMATH don't allow scraping the bibtex information. Thanks to @rank-zero for the background information.

I think that, independent of the restricted fields, we should design the ingestion process so that it downloads the initial dataset once and then fetches updates at fixed intervals. This is where the OAI-PMH format comes in handy.
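A minimal sketch of that update step, using the standard OAI-PMH `from` argument with the datestamp of the last successful run (endpoint and metadata prefix taken from the examples above):

```python
from datetime import date, timedelta

import requests

OAI_BASE = "https://oai.zbmath.org/v1/"


def fetch_updates(since, prefix="oai_zb_preview"):
    """One incremental pass: selectively harvest records changed since `since`."""
    params = {"verb": "ListRecords", "metadataPrefix": prefix, "from": since}
    resp = requests.get(OAI_BASE, params=params, timeout=60)
    resp.raise_for_status()
    return resp.content  # XML; follow resumptionTokens for further pages


# e.g. a nightly job asking for everything changed since yesterday:
updates = fetch_updates((date.today() - timedelta(days=1)).isoformat())
```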

sedimentation-fault commented 2 years ago

"The terms and conditions of zbMATH don't allow scaping the bibtex information."

I thought zbMATH decided to become "open access" some time ago. Besides, if I present a title, does that give rise to a legal suspicion that I scraped a bibtex from somewhere? zbMATH does not have to reveal to anyone how it arrived at any given title or author name. Plus, the way you say it implies that it is zbMATH itself that imposes restrictions on...itself? I don't understand all this.

Anyway, to stay on topic: you plan to download 4.x million records? Even with a sleep interval of 1-2 seconds between downloads, which is short (I think), it may take close to a year, since the download itself will also consume some seconds: assuming ~6 seconds per record, you get 10 records per minute, or 600 per hour, i.e. ~14,400 per day; at that rate you need the better part of a year to get them all once. How many requests per second do you plan to send to the oai.zbmath.org server? Are you OK with such a long-running process? Just curious...

physikerwelt commented 2 years ago

I am not a lawyer, and I agree with you that the situation is not intuitive, especially since one can use the DOI field to join data from Semantic Scholar or Crossref. However, one is still not allowed to redistribute the merged data. I have double-checked that zbMATH Open cannot expose titles and authors via APIs without breaking German law.

The API is capable of providing the dataset in a few hours; I tested that last week, so one can get all the data quite quickly. The import into Wikibase is the long-running task. This issue is about developing the software to import records from zbMATH Open; importing everything is subject to another discussion. It is not entirely clear to me whether importing everything is a good idea or whether a lazy approach is preferable.
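For comparison, the lazy variant is small: a sketch that fetches a single record on demand via GetRecord, following the identifier scheme from the example above:

```python
import requests

OAI_BASE = "https://oai.zbmath.org/v1/"


def get_record(de_number, prefix="oai_zb_preview"):
    """Fetch one record on demand; identifiers follow oai:zbmath.org:<DE number>."""
    params = {
        "verb": "GetRecord",
        "identifier": f"oai:zbmath.org:{de_number}",
        "metadataPrefix": prefix,
    }
    resp = requests.get(OAI_BASE, params=params, timeout=30)
    resp.raise_for_status()
    return resp.content


# e.g. get_record(3224368) mirrors the GetRecord URL quoted earlier
```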

sedimentation-fault commented 2 years ago

O.K., I don't want to get into lengthy debates about this here, especially since you are not zbMATH. :-) But I do have some remarks, and I urge you to consider them seriously in your project:

  1. Common sense says: no matter how capable the API is, imported records that lack title and/or author information will be useless. To see this, just pause for a moment and ask yourself: "Why are we doing all this?" You do this for researchers, and no researcher will tell you that such a crippled record is of any use.

  2. "zbMATH Open can not expose titles and authors via APIs without to break German law." Well, there is some possibility that exposing titles and authors would break a law that forbids the dissemination of databases. Maybe there is a conflict there. No matter what the conflict is, common sense again suggests that it should be allowed to either expose, or add/combine title/author information - and disseminate the combined records. I thus strongly suggest that your consortium fights for this right in the german courts. There are some fights that you cannot win with code - and this is one of them. You must stand up and fight it in the courts.

The TODO list above should be updated with a new, high priority item:

Fight in the courts for the right to use titles/authors, no matter how we got them!

In a sane society, this right would be self-evident - but we don't live in one, so this is the way to go.

physikerwelt commented 2 years ago

Thank you for your opinion. As I said, I am not a lawyer, and therefore this is out of scope. I think your ambition to improve the legal situation is noble; however, this is not our expertise. There are other initiatives with the required legal expertise to pursue these issues. In this project, we need to respect the rules and regulations.

physikerwelt commented 1 year ago

This has now been in the making for over a year. @LizzAlice, can you estimate how long it will take to complete this task?

LizzAlice commented 1 year ago

I would think this would take 1-2 months. However, if I am to do it, I would like to postpone it until my link prediction work is in beta.

physikerwelt commented 12 months ago

@LizzAlice I feel we have different tickets for the same task. Can you close duplicates?