Closed candlecao closed 5 hours ago
Take the Chinese Traditional Knowledge Base as an example; see its SPARQL query interface at http://www.usources.cn:8080/dcmusic/sparql (it can be slow to load). I have already activated the reasoning mechanism for the entire ontology, and it now works well. For example:
```sparql
define input:inference 'urn:owl.ccmusicrules'
prefix ctm: <https://lib.ccmusic.edu.cn/ontologies/chinese_traditional_music#>
# select (count(?name) as ?n)
select ?name
from <https://lib.ccmusic.edu.cn/graph/music>
where {
  ?s ?p "西安鼓乐" ;
     ctm:musicType_Instrument/ctm:nameOfMusicTypeOrInstrument ?name
  filter (LANG(?name) != "py")
}
```
The line `define input:inference 'urn:owl.ccmusicrules'` is a switch that enables or disables the reasoning mechanism. You can disable it by prefixing that line with `#` (commenting it out) and then compare the retrieved results.
Based on this, we can prompt ChatGPT to generate SPARQL queries that retrieve more complete results.
For example, suppose that in Wikidata genre A is a subclass of genre B. If you ask "Find the works of genre B", then, given an embedded ontology asserting `<GenreA> rdfs:subClassOf <GenreB>` and the activated reasoning mechanism, the retrieved results will contain not only the works of genre B but also those of genre A.
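A minimal Python sketch of what the reasoner contributes here (all names and data are hypothetical): without inference, a query for genre B misses works typed only with its subclass genre A, while expanding the query through the transitive closure of `rdfs:subClassOf` recovers them. (In plain SPARQL 1.1, the same effect can be approximated with the property path `rdfs:subClassOf*` when server-side inference is unavailable.)

```python
def subclass_closure(genre, subclass_of):
    """Return the genre plus all its (transitive) subclasses."""
    result = {genre}
    frontier = [genre]
    while frontier:
        current = frontier.pop()
        for sub, sup in subclass_of:
            if sup == current and sub not in result:
                result.add(sub)
                frontier.append(sub)
    return result

# Toy triples standing in for the ontology and the data graph.
subclass_of = [("GenreA", "GenreB")]                    # <GenreA> rdfs:subClassOf <GenreB>
work_genre = [("Work1", "GenreB"), ("Work2", "GenreA")]  # work -> genre assertions

def works_of(genre, with_inference):
    """Match works against the genre alone, or against its subclass closure."""
    genres = subclass_closure(genre, subclass_of) if with_inference else {genre}
    return sorted(w for w, g in work_genre if g in genres)

print(works_of("GenreB", with_inference=False))  # ['Work1']
print(works_of("GenreB", with_inference=True))   # ['Work1', 'Work2']
```

This mirrors the switch above: commenting out the `define input:inference` line corresponds to `with_inference=False`, where works asserted only under the subclass are silently dropped from the result.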