Closed bnewbold closed 6 years ago
@bnewbold thanks for your interest!
I found it really helpful to grab in bulk form.
Indeed! I'm hoping Crossref starts releasing their database dumps, so we don't have to keep going through the laborious steps of recreating them by millions of API calls :smile_cat: . However, until then, it'd be nice to update this repo and data release.
Shortly after we did our extraction in April 2017, Crossref's API began returning citation information. This is really useful but makes the API responses much larger. It also opens up the possibility that we could produce a DOI-to-DOI citation table, which I'm sure would appeal to many users.
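For anyone wanting to build that table, a minimal sketch of pulling (citing, cited) DOI pairs out of a single Crossref work record might look like the following. The record shape follows the REST API, where references sit under the "reference" key and each reference may (or may not) carry a resolved "DOI"; the sample record here is made up:

```python
def citation_pairs(work):
    """Yield (citing_doi, cited_doi) pairs from one Crossref work record.

    References without a resolved DOI are skipped. DOIs are
    case-insensitive, so both sides are lowercased.
    """
    citing = work.get("DOI")
    if not citing:
        return
    for ref in work.get("reference", []):
        cited = ref.get("DOI")
        if cited:
            yield citing.lower(), cited.lower()

# Example with a minimal, hypothetical record:
work = {
    "DOI": "10.1234/example",
    "reference": [
        {"key": "ref1", "DOI": "10.5678/cited-a"},
        {"key": "ref2", "unstructured": "no DOI resolved"},
    ],
}
print(list(citation_pairs(work)))  # [('10.1234/example', '10.5678/cited-a')]
```

Mapping this over every work in the dump would yield the full DOI-to-DOI edge list.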
Anyways, I wasn't planning on updating these records until I needed newer data for my research (which could be never). Rerunning the API queries will probably take several weeks. You may run into issues. You'll need an internet connection with good uptime! @bnewbold if this is something you're interested in, we'd love if you could open a PR with the updates. We'd also love to add your revised DB dump to the figshare. You'd become an author on the figshare dataset and potentially other work we do in the future that makes use of this data. What do you think?
I'd also likely be interested in extracting the citation graph from the enlarged dump.
I have download.py chugging away on a reliable host (up to a couple million so far; the estimate says 240+ hours to go, but I won't be surprised if it takes longer), and the works (at least some of them) do indeed contain citation information.
I have download.py chugging away on a reliable host
Nice! IIRC the order of the works is somewhat chronological. The newer works tend to have more metadata, so things slow down and become more error-prone closer to the end. Happy to help if anything breaks.
the works (at least some of them) do indeed contain citation information.
Nice! I believe these will be the I4OC citations (more accurately "references"), which should be much more prevalent than the OpenCitations corpus we processed in greenelab/opencitations.
The script is about 80% complete. It halted last week when the local hard disk ran out of space (from a bug in an unrelated script), but has been restarted with the most recent cursor position just now.
I ran into two problems: the other script misbehaved again (disk filled up), and it seemed like the dump hadn't continued where it had left off when I restarted back on Jan 5th (even though I specified a cursor). I'm not sure if the cursor is local (mongodb) or remote (crossref API), but the mongo container had been restarted and it had been more than a few days between failure and restart, either of which could have caused problems. Anyways, I've restarted from scratch with this process on its own (SSD) disk. It looks like there are now 94,035,712 remote records, which I think is also a bump from the last attempt a couple weeks back. Will update here again when this completes (expecting late January).
it seemed like the dump hadn't continued where it had left off when I restarted back on Jan 5th (even though I specified a cursor).
@bnewbold thanks for the update. My understanding of the cursor is that it's remote. I'm not sure how long cursors are retained... the cursor could have been retired after some days of inactivity. Ideally, specifying an invalid cursor would trigger an error and not proceed silently.
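For reference, deep paging against the API works by passing cursor=* on the first request and then echoing back the opaque next-cursor token the server returns with each page. A rough sketch (not this repo's download.py; the mailto value is a placeholder for Crossref's polite-pool convention):

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

API = "https://api.crossref.org/works"

def parse_page(message):
    """Pull the items and the next deep-paging token out of one API page.

    `message` is the "message" object of a Crossref /works response; the
    server returns an opaque "next-cursor" with every page.
    """
    return message.get("items", []), message.get("next-cursor")

def iter_works(rows=1000, mailto="you@example.org"):
    """Sketch of cursor-based deep paging (assumes network access)."""
    cursor = "*"  # "*" starts a fresh deep page; old tokens may expire server-side
    while True:
        query = urlencode({"cursor": cursor, "rows": rows, "mailto": mailto})
        with urlopen(f"{API}?{query}") as resp:
            message = json.load(resp)["message"]
        items, cursor = parse_page(message)
        if not items:
            return  # an empty page (or an expired/invalid cursor) ends the crawl
        yield from items
```

Note the failure mode described above: an expired cursor looks just like a finished crawl (an empty page), so it ends silently rather than erroring.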
It looks like there are now 94,035,712 remote records
Nice! nearing 100 million.
Anyways, I've restarted from scratch with this process
It'll be nice to have metadata through all of 2017. There may be a few articles published in 2017 that still haven't been deposited in Crossref, but I hope not too many.
The script completed successfully yesterday (2018-01-21) after about 11 days:
94035712/94035712 [270:33:11<00:00, 32.70it/s]Finished queries with 93,585,242 works in db
I'm not sure what the discrepancy is between 93,585,242 and 94,035,712; some works get skipped intentionally?
I'm dumping to .json.xz now, which looks like it will take a couple hours, after which i'll upload to archive.org and you can review before pushing to figshare. I've named the file data/mongo-export/crossref-works.2018-01-21.json.xz to reduce possible confusion. I'll try to run the .ipynb files to update other formats, though i'm worried this machine won't have enough RAM. Either way i'll do a PR with, eg, the sha256 checksum updates.
I'm not sure what the discrepancy is between 93,585,242 and 94,035,712; some works get skipped intentionally?
What immediately comes to mind is if Crossref had multiple records for the same DOI. That could make the query number larger than the MongoDB number:
https://github.com/greenelab/crossref/blob/768a49ba1d8ba1971f00471950514716a9f699c8/download.py#L37
Do you still have the log? I wonder if we should preserve this as well? It could probably help us diagnose the discrepancy.
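For clarity, the invariant that replace_one enforces can be modelled with a plain dict standing in for the Mongo collection (a hypothetical helper, not code from this repo, mirroring pymongo's works.replace_one({'doi': doi}, work, upsert=True)):

```python
def upsert_work(db, work):
    """Keep at most one record per DOI, newest wins -- the same invariant
    as a replace_one upsert keyed on DOI. `db` is a plain dict standing
    in for the Mongo collection; DOIs are case-insensitive, so lowercase
    the key."""
    db[work["DOI"].lower()] = work

db = {}
upsert_work(db, {"DOI": "10.1/x", "title": "old metadata"})
upsert_work(db, {"DOI": "10.1/x", "title": "new metadata"})
print(len(db), db["10.1/x"]["title"])  # 1 new metadata
```

Under this scheme, duplicate DOIs in the API stream inflate the query count but not the database count, which would explain the discrepancy.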
I do have the complete log (including my first failed attempt). Skimming through it, it doesn't look like it will answer this question, but i'll include it when I upload.
mongo-export is still running, about 3/4 complete now 22 hours in. The new corpus is significantly larger, presumably because of citation and maybe other new metadata being included. I estimate it will be 250 GB of uncompressed JSON, or about 25 GB compressed (xz, default settings).
Uploaded here: https://archive.org/download/crossref_doi_dump_201801/crossref-works.2018-01-21.json.xz
File is 30980612708 bytes (~29 GB); sha256 is 28075b3abf7724a284467000d3b2eba720f97967bb7b81bad62b7e9c0b24c761.
Logs are uploaded to the same item, but might take a few minutes to appear (while main file is still being hashed and replicated).
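For anyone verifying the download, a streaming checksum avoids reading the ~29 GB file into memory; a small stdlib-only sketch (filename and expected digest from the comment above):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a (possibly 30 GB) file through sha256 in 1 MiB chunks so
    the whole dump never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage against the uploaded dump:
# assert sha256_of("crossref-works.2018-01-21.json.xz") == (
#     "28075b3abf7724a284467000d3b2eba720f97967bb7b81bad62b7e9c0b24c761")
```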
Running the .ipynb files as-is didn't work from the command line. From within the conda environment:
(crossref) bnewbold@ia601101$ jupyter-run 1.works-to-dataframe.ipynb
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-7f436dceca0c> in <module>()
4 "cell_type": "markdown",
5 "metadata": {
----> 6 "deletable": true,
7 "editable": true
8 },
NameError: name 'true' is not defined
Traceback (most recent call last):
File "/schnell/crossref-dump/miniconda3/envs/crossref/bin/jupyter-run", line 11, in <module>
sys.exit(main())
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/runapp.py", line 115, in start
raise Exception("jupyter-run error running '%s'" % filename)
Exception: jupyter-run error running '1.works-to-dataframe.ipynb'
(crossref) bnewbold@ia601101$ jupyter nbconvert --to python 1.works-to-dataframe.ipynb
[NbConvertApp] Converting notebook 1.works-to-dataframe.ipynb to python
[NbConvertApp] Writing 1589 bytes to 1.works-to-dataframe.py
(crossref) bnewbold@ia601101$ python 1.works-to-dataframe.py
Traceback (most recent call last):
File "1.works-to-dataframe.py", line 57, in <module>
doi_writer.writerow((doi, work['type'], issued))
KeyError: 'type'
My interest was in getting the .json.xz file, not the derived files, but it looks like the notebook files talk to (local) mongodb directly instead of parsing the bulk file. @dhimmel, can you bulk-load the dump into a local mongo instance and generate the derived files there?
I'll hold off on tearing down the mongo database for a few days in case it ends up being useful.
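If loading into Mongo turns out to be unnecessary, the dump can probably be consumed directly. Assuming mongoexport's default one-JSON-document-per-line output, a streaming reader sketch:

```python
import json
import lzma

def iter_dump(path):
    """Stream works out of the compressed export without loading it all.

    Assumes one JSON document per line (mongoexport's default).
    lzma.open decompresses incrementally, so memory stays flat even for
    a ~30 GB archive.
    """
    with lzma.open(path, mode="rt", encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if line:
                yield json.loads(line)

# Usage sketch:
# for work in iter_dump("crossref-works.2018-01-21.json.xz"):
#     ...
```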
Two other infrastructure notes I had from setting up this run: I had to set $PATH with export PATH=/PROJECT/crossref-dump/miniconda3/bin:$PATH before running source activate crossref.
Running the .ipynb files as-is didn't work from the command line. From within the conda environment:
I don't think jupyter-run is the right command (despite its name). In the past, I've used nbconvert like
jupyter nbconvert --inplace --execute --ExecutePreprocessor.timeout=-1 1.works-to-dataframe.ipynb
Do you think you could open a PR with at least the update to:
I'd like for you to be in the commit history. If you get the notebooks to run, great. Otherwise I can try to do it by importing the dump.
Here's what I get trying to use the above jupyter line:
(crossref) bnewbold@ia601101$ head environment.yml -n1
name: crossref
(crossref) bnewbold@ia601101$ conda-env list
# conda environments:
#
crossref * /schnell/crossref-dump/miniconda3/envs/crossref
root /schnell/crossref-dump/miniconda3
(crossref) bnewbold@ia601101$ jupyter nbconvert --inplace --execute --ExecutePreprocessor.timeout=-1 1.works-to-dataframe.ipynb
[NbConvertApp] Converting notebook 1.works-to-dataframe.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: conda-env-crossref-py
Traceback (most recent call last):
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/kernelspec.py", line 201, in get_kernel_spec
resource_dir = d[kernel_name.lower()]
KeyError: 'conda-env-crossref-py'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/schnell/crossref-dump/miniconda3/envs/crossref/bin/jupyter-nbconvert", line 11, in <module>
load_entry_point('nbconvert==5.1.1', 'console_scripts', 'jupyter-nbconvert')()
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 305, in start
self.convert_notebooks()
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 473, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 444, in convert_single_notebook
output, resources = self.export_single_notebook(notebook_filename, resources, input_buffer=input_buffer)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/nbconvertapp.py", line 373, in export_single_notebook
output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 171, in from_filename
return self.from_file(f, resources=resources, **kw)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 189, in from_file
return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/exporters/notebook.py", line 31, in from_notebook_node
nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 131, in from_notebook_node
nb_copy, resources = self._preprocess(nb_copy, resources)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/exporters/exporter.py", line 308, in _preprocess
nbc, resc = preprocessor(nbc, resc)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/preprocessors/base.py", line 47, in __call__
return self.preprocess(nb,resources)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 207, in preprocess
cwd=path)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/nbconvert/preprocessors/execute.py", line 188, in start_new_kernel
km.start_kernel(**kwargs)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/manager.py", line 244, in start_kernel
kernel_cmd = self.format_kernel_cmd(extra_arguments=extra_arguments)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/manager.py", line 175, in format_kernel_cmd
cmd = self.kernel_spec.argv + extra_arguments
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/manager.py", line 87, in kernel_spec
self._kernel_spec = self.kernel_spec_manager.get_kernel_spec(self.kernel_name)
File "/schnell/crossref-dump/miniconda3/envs/crossref/lib/python3.6/site-packages/jupyter_client/kernelspec.py", line 203, in get_kernel_spec
raise NoSuchKernel(kernel_name)
jupyter_client.kernelspec.NoSuchKernel: No such kernel named conda-env-crossref-py
jupyter_client.kernelspec.NoSuchKernel: No such kernel named conda-env-crossref-py
Ah I've hit that annoying bug as well in https://github.com/Anaconda-Platform/nb_conda_kernels/issues/34#issuecomment-299003153.
If you add --ExecutePreprocessor.kernel_name=python, it might be fixed.
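For what it's worth, the full command with that flag could be wrapped like this (a small hypothetical helper; the flags are standard nbconvert options):

```python
import subprocess

def nbconvert_execute(notebook, kernel_name="python"):
    """Build the nbconvert invocation with an explicit kernel name,
    sidestepping the conda-env-crossref-py kernelspec lookup failure."""
    return [
        "jupyter", "nbconvert", "--inplace", "--execute",
        "--ExecutePreprocessor.timeout=-1",
        f"--ExecutePreprocessor.kernel_name={kernel_name}",
        notebook,
    ]

# Usage (requires jupyter on $PATH):
# subprocess.run(nbconvert_execute("1.works-to-dataframe.ipynb"), check=True)
```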
Also I just came across https://github.com/elifesciences/datacapsule-crossref by @de-code which seems to have also downloaded the works data from Crossref.
Yes, I've just updated the download recently. I will try to share the dump. But the whole works dump is about 32 GB. Looking for an easy way to get that into Figshare (from a headless server). (I also have just the citation links, which is a more manageable <3 GB.)
Data downloaded January 2018 now available in Figshare: https://doi.org/10.6084/m9.figshare.5845554
And just citation links: https://doi.org/10.6084/m9.figshare.5849916
There is also an open issue / request for Crossref to provide something similar: https://github.com/CrossRef/rest-api-doc/issues/271
In case anybody is interested, i've started another dump using the exact same code path today. Not sure if i'll continue updating dumps in the future, but I wanted a fresher one and might as well share. I cross-posted at https://github.com/elifesciences/datacapsule-crossref/issues/1 as well.
Notes since last time:
i've started another dump using the exact same code path today
Nice!
updates to DOIs that occur during the dump can result in duplicate entries
Hmm. I thought this repository should be replacing duplicate DOI entries in the Mongo database. In other words, the Mongo DB should only contain the most recently added metadata for a DOI:
That is unfortunate that the iterative queries return duplicate DOIs (I thought the cursor was to prevent this; I hope other DOIs aren't missing). However, replace_one should prevent our exports from being contaminated with duplicates? @bnewbold have you seen otherwise?
@dhimmel I hadn't noticed this behavior (DOI upsert) of these scripts (which I haven't read, just run blindly).
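One way to answer the "have you seen otherwise?" question would be to scan an export for repeated DOIs. A hypothetical checker (works over any iterable of work dicts, e.g. parsed lines from the .json.xz dump):

```python
from collections import Counter

def duplicate_dois(works):
    """Return {doi: count} for DOIs appearing more than once.

    DOIs are case-insensitive, so compare lowercased; records without
    a "DOI" key are ignored.
    """
    counts = Counter(w["DOI"].lower() for w in works if "DOI" in w)
    return {doi: n for doi, n in counts.items() if n > 1}

works = [{"DOI": "10.1/X"}, {"DOI": "10.1/x"}, {"DOI": "10.2/y"}]
print(duplicate_dois(works))  # {'10.1/x': 2}
```

An empty result would confirm that the replace_one upsert kept the export duplicate-free.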
My most recent dump completed and is available here: https://archive.org/download/crossref_doi_dump_201809
SHA256 available here (feel free to PR/merge): https://github.com/bnewbold/crossref/commit/9f99032a91179ba7369464754d827c9f5176650d
My most recent dump completed and is available here.
Awesome, added checksum in https://github.com/greenelab/crossref/pull/11 / https://github.com/greenelab/crossref/commit/48a8589674cd862fa1901365d4c7ee4bb4a35e31, updated README in https://github.com/greenelab/crossref/commit/cc79bd0489aa8c18cb7203f59db9cca739a7ff4e, and tweeted:
Looking for @Crossref DOI metadata (including citation links) for ~100 million scholarly articles? See Bryan Newbold's September 2018 bulk export hosted by the @internetarchive. https://archive.org/details/crossref_doi_dump_201809
Looks like the queries took 16 days (2018-09-05 to 2018-09-20). File size is 33.2 GB, up from 28.9 GB for the January 2018 release.
@bnewbold I have a slight preference that if you make another update in the future, for you to open a new issue.
Thanks @bnewbold for the effort to upload newer versions of the dataset to the internet archive pages :relaxed: Since it took me a while to also find the newer ones, I'll link them here below for people who land on this issue:
"Official" crossref dump 2020-04: https://archive.org/details/crossref-doi-metadata-20200408
Self-crawled with this repo 2019-09: https://archive.org/details/crossref_doi_dump_201909
possible to get single file?
In April 2017 @dhimmel uploaded a bulk snapshot of Crossref metadata to figshare (where it was assigned DOI 10.6084/m9.figshare.4816720.v1). While this metadata can be scraped from the Crossref API by anybody (eg, using the scripts in this repository), I found it really helpful to grab in bulk form.
I'm curious whether this dump could be updated on an annual or quarterly basis. I don't have a particular need for the data to be versioned (eg, assigned sequential .v2, .v3 DOIs at figshare), but that would probably help with discovery for other folks and generally be a best practice. If nobody has time to do such an update I will probably run the scripts from this repository and push to archive.org at: https://archive.org/details/ia_biblio_metadata.