Open andreww opened 6 years ago
Looking at the data, the only relevant field in ContentMine's eupmc summary is the journal title.
Sampling a few of the papers, it appears that some journals don't use keywords, and for those that do (e.g. Nature Communications, see https://www.nature.com/articles/s41467-018-03297-7) the information is not in the full-text XML file.
I used "ACT DL" in the past but haven't touched it for a few years. The API classifies text according to the Dewey Decimal Classification (DDC). As can be seen in the (XML or JSON) response to this query example, quite a few disciplines are suggested and each is assigned a confidence level. I would start by allowing just one discipline per paper, using the `best` attribute from the Python library.
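As a hedged sketch of what consuming such a response might look like: the JSON shape below is invented for illustration (the actual ACT DL schema may differ), but it captures the idea of several suggested disciplines with confidences, reduced to a single "best" one.

```python
import json

# Invented example response: the real ACT DL schema may differ.
sample_response = json.dumps({
    "classifications": [
        {"ddc": "550", "label": "Earth sciences", "confidence": 0.81},
        {"ddc": "004", "label": "Computer science", "confidence": 0.12},
    ]
})

def best_discipline(response_text):
    """Return the single highest-confidence classification."""
    data = json.loads(response_text)
    return max(data["classifications"], key=lambda c: c["confidence"])

print(best_discipline(sample_response)["label"])  # -> Earth sciences
```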
I've had some success clustering papers together using an autoencoder. However, it is hard to know how accurate things are without labelled training data. Keywords are incredibly unreliable.
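For comparison, here is a deliberately crude, self-contained stand-in for the clustering idea: group abstracts by word overlap (Jaccard similarity). The sample abstracts are invented, and a real pipeline (like the autoencoder above) would of course embed the full text rather than split on whitespace.

```python
# Toy stand-in for clustering papers: Jaccard similarity on word sets.
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

abstracts = [
    "seismic wave propagation in the mantle",
    "convection in the mantle drives seismic activity",
    "neural network models for image classification",
    "training neural network models on image data",
]

# Similarity of abstract 0 to each of the others; its nearest
# neighbour should be abstract 1 (the other geoscience text).
scores = [jaccard(abstracts[0], other) for other in abstracts[1:]]
print(scores)
```

Even with a proper embedding, the point in the comment above stands: without labelled training data it is hard to tell whether the resulting clusters correspond to disciplines at all.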
For some planned use-cases (e.g. #19) we need to be able to determine the research area (discipline) of each paper we process. Hopefully this is exposed in the data we gather from EuroPMC, in which case it can be passed into the URL-processing part of the code so we can tag each processed URL with "used by research area" or similar. If not, we probably need some other way of gathering this information. Can we use the DOI itself to say anything (e.g. by resolving the journal and going from there)?
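However the discipline is obtained, the tagging step itself could look something like the sketch below: given a paper-to-discipline mapping and the URLs extracted from each paper, record which research areas use each URL. All DOIs, URLs, and names here are illustrative, not from the real pipeline.

```python
from collections import defaultdict

# Illustrative inputs: discipline per paper, and URLs found in each paper.
paper_discipline = {
    "10.1000/paper1": "Earth sciences",
    "10.1000/paper2": "Computer science",
}
paper_urls = {
    "10.1000/paper1": ["https://example.org/tool"],
    "10.1000/paper2": ["https://example.org/tool", "https://example.org/data"],
}

# Tag each processed URL with the research areas of the papers citing it.
url_areas = defaultdict(set)
for doi, urls in paper_urls.items():
    for url in urls:
        url_areas[url].add(paper_discipline[doi])

print(dict(url_areas))
```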
One potentially important question: should we insist on each paper belonging to a single discipline, or do we need to allow each paper to belong to multiple disciplines? If we allow multiple disciplines, how should we represent this in the "output" data?