NCATSTranslator / TranslatorArchitecture


Describe the process(es) for continuous integration (CI) of KP data into an aggregate knowledge graph (AKG) #3

Open — patrickkwang opened this issue 4 years ago

patrickkwang commented 4 years ago

Questions to answer:

Questions not to answer:

dkoslicki commented 4 years ago

Additionally, some "legacy KGs" (e.g., ARAX/KG1 and ARAX/KG2) use information ETL'd from non-Translator KSs. A process for CI of non-Translator KSs may also need to be added to this issue (or pulled out as a separate one).

cbizon commented 4 years ago

@cmungall @deepakunni3 what kinds of interfaces do you think would make sense here?

cmungall commented 4 years ago

My answer is predicated on my assumption that there will be many purpose-specific AKGs (as well as, potentially, a Translator 'uber' AKG). The assumption is that a local AKG is pragmatically required for certain inference algorithms; the local AKG does not need to be the union of all KGs, but it should hold the parts it needs locally to avoid network latency on high-frequency traversal operations.

The source KGs may be available via queries (one-hop APIs, query interfaces such as SPARQL or Cypher) or via dumps.
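For the query route, a minimal sketch of a one-hop lookup against a KP is below, assuming a TRAPI-style /query endpoint; the endpoint URL and the exact message shape are illustrative assumptions, not any particular KP's published API.

```python
# Sketch: fetch one-hop edges from a KP via a TRAPI-style /query endpoint.
# KP_URL and the message layout are hypothetical, for illustration only.
import requests

KP_URL = "https://example-kp.example.org/query"  # hypothetical endpoint

def one_hop(subject_curie: str, predicate: str, object_category: str) -> dict:
    """Ask a KP for all edges (subject) -[predicate]-> (object of given category)."""
    message = {
        "message": {
            "query_graph": {
                "nodes": {
                    "n0": {"ids": [subject_curie]},
                    "n1": {"categories": [object_category]},
                },
                "edges": {
                    "e0": {"subject": "n0", "object": "n1", "predicates": [predicate]},
                },
            }
        }
    }
    response = requests.post(KP_URL, json=message, timeout=60)
    response.raise_for_status()
    return response.json()

# Example call: chemicals a given gene's product interacts with.
# results = one_hop("NCBIGene:1017", "biolink:interacts_with", "biolink:ChemicalEntity")
```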

I think the operations to enable integration should be:

I would also point people to our command-line tool robot (http://robot.obolibrary.org/) for building aggregate ontologies. The distinction between an ontology and a KG is fuzzy, but I think the use cases and approaches are analogous.
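For the dump route, here is a minimal sketch of assembling a local AKG from per-KP edge dumps, analogous in spirit to robot's merge of ontologies; the file names and the subject/predicate/object TSV layout are illustrative assumptions rather than a prescribed Translator format.

```python
# Sketch: merge per-KP edge dumps into one local aggregate KG.
# Dump file names and the TSV column layout are hypothetical.
import csv
import networkx as nx

def load_edges(path: str) -> nx.MultiDiGraph:
    """Read a subject/predicate/object TSV dump into a directed multigraph."""
    graph = nx.MultiDiGraph()
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            graph.add_edge(
                row["subject"], row["object"],
                predicate=row["predicate"], provided_by=path,
            )
    return graph

# compose_all unions the nodes and edges of all input graphs.
dumps = ["kp_a_edges.tsv", "kp_b_edges.tsv"]  # hypothetical dump files
akg = nx.compose_all([load_edges(path) for path in dumps])
print(f"{akg.number_of_nodes()} nodes, {akg.number_of_edges()} edges")
```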