Open hoijui opened 9 months ago
DING, DING, DING, DING, ...
:O Now, writing the above, I got an idea! There is actually a fourth option: We could use an approach similar to the one W3ID uses to host the data. There would be one git repo (or optionally a few, for redundancy) that contains/hosts all the RDF data. Multiple parties that aggregate the data have push access to it, and regularly push to it in an automated fashion when crawling/generating the data. This means both data gatherers and individual projects could push data. This allows for a somewhat distributed-ish, but at the very least decentralized/federated, power over the RDF data, and as a huge beneficial side effect, it would allow us to efficiently distribute the data-gathering load.
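To make the fourth option a bit more concrete, here is a minimal sketch of what such an automated push from one data gatherer could look like, as a scheduled GitHub Actions job. Everything specific in it is an assumption, not an existing setup: the central repo name `okh-data/rdf`, the crawler script `crawl_to_rdf.py`, the secret name, and the schedule are all hypothetical placeholders.

```yaml
# Hypothetical CI workflow for one data gatherer:
# crawl the sources, regenerate this gatherer's slice of the RDF data,
# and push the result to the shared data repo.
name: push-rdf-data
on:
  schedule:
    - cron: '0 3 * * *'  # once a day, at 03:00 UTC
jobs:
  crawl-and-push:
    runs-on: ubuntu-latest
    steps:
      # Clone the shared RDF data repo (this gatherer has push access)
      - uses: actions/checkout@v4
        with:
          repository: okh-data/rdf              # hypothetical central repo
          token: ${{ secrets.DATA_REPO_TOKEN }} # hypothetical push token
      # Regenerate the RDF data (hypothetical crawler script)
      - run: python crawl_to_rdf.py --out data/
      # Commit and push only if something actually changed
      - run: |
          git config user.name "okh-crawler-bot"
          git config user.email "bot@example.org"
          git add data/
          git diff --cached --quiet || git commit -m "chore: automated RDF update"
          git push
```

Individual projects could run essentially the same job, just generating only their own RDF instead of crawling; this is also roughly how W3ID itself works, as far as I know: a git repo that many parties contribute to, with the hosting derived from its contents.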
Current Data Aggregation Process
The problem
The idea of Linked Data, and very much ours for OKH too, is to support a distributed data system. Furthermore, all RDF data - more specifically, each subject - is uniquely identifiable by its IRI. An IRI is simply a Unicode version of a URL. \ This pushes two requirements onto us:
We could choose to do one of two things:
In theory, there is a third option: \ Each project generates its RDF by itself in a CI, and then hosts it permanently (at least each release version of it, plus the latest development one). That, though, is very unlikely to happen, unstable, difficult to maintain and update, and only possible for git-hosted (or other SCM-hosted) projects. \ -> not really an option.