A data processing pipeline that schedules and runs content harvesters, normalizes their data, and sends that normalized data to a variety of output streams. This is part of the SHARE project and will be used to create a free and open dataset of research (meta)data. Collected data can be explored at https://osf.io/share/ and queried via the search API at https://osf.io/api/v1/share/search/. Developer docs can be viewed at https://osf.io/wur56/wiki.
Just a note to explain what this is doing -- it now looks like BioMed Central is using Springer's API to expose its content. So, this PR replaces the old biomedcentral code with code similar to Springer's harvester, and only normalizes records whose publisher is "biomedcentral."
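A minimal sketch of the idea behind the change: harvest records from Springer's API, then keep only those whose publisher is BioMed Central before normalizing. The endpoint URL, response field names, and the helper functions here are illustrative assumptions, not the actual SHARE harvester API.

```python
# Illustrative sketch only -- endpoint, parameters, and field names are assumptions.
import requests

SPRINGER_API_URL = 'http://api.springer.com/meta/v2/json'  # assumed endpoint


def harvest(api_key, query='openaccess:true', per_page=100):
    """Fetch a page of records from Springer's metadata API (assumed parameters)."""
    response = requests.get(SPRINGER_API_URL, params={
        'api_key': api_key,
        'q': query,
        'p': per_page,
    })
    response.raise_for_status()
    return response.json().get('records', [])


def normalize_biomedcentral(records):
    """Normalize only the records published by BioMed Central."""
    normalized = []
    for record in records:
        publisher = record.get('publisher', '').lower().replace(' ', '')
        if publisher != 'biomedcentral':
            continue  # skip other publishers exposed through the same API
        normalized.append({
            'title': record.get('title'),
            'doi': record.get('doi'),
            'publisher': record.get('publisher'),
        })
    return normalized
```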