Closed robknapen closed 2 months ago
I agree
I think it is getting a bit clearer now, so I will close this issue.
I see the triple store (where we keep knowledge graphs using our formal ontologies) as a possible knowledge input for any kind of LLM (or NLP) functionality that we want to add to the system. But it does not have to be the only source.
According to the (Brugge) architecture, the sole connection between the NLQ component and the triple store is the SPARQL API. So we can either try to have the LLM generate proper SPARQL queries (this might need few-shot examples in the prompt), or use fixed queries via tool calling (which might be more reliable, but less flexible).
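To make the two options concrete, here is a minimal sketch in Python. Everything in it is hypothetical (prompt text, query templates, tool names, and the example ontology terms like `:Crop` are illustrative, not from our actual ontologies): option 1 builds a few-shot prompt for an LLM to generate SPARQL, option 2 exposes fixed, parameterised SPARQL queries as tools so the LLM only picks a tool and fills in arguments.

```python
# Sketch of the two NLQ-to-SPARQL options; all names and queries are
# hypothetical placeholders, not part of the real system.

FEW_SHOT_PROMPT = """You translate natural-language questions into SPARQL.

Q: Which crops are grown in the Netherlands?
SPARQL:
SELECT ?crop WHERE { ?crop a :Crop ; :grownIn :Netherlands . }

Q: {question}
SPARQL:
"""


def build_sparql_prompt(question: str) -> str:
    """Option 1: few-shot prompt asking an LLM to generate SPARQL.

    The returned string would be sent to the LLM; its completion is the
    candidate query for the SPARQL API.
    """
    # str.replace (not str.format) so the SPARQL braces stay untouched.
    return FEW_SHOT_PROMPT.replace("{question}", question)


# Option 2: fixed, parameterised queries exposed as "tools" (more
# reliable, less flexible). Double braces escape the SPARQL block so
# only {country} is substituted.
FIXED_QUERIES = {
    "crops_in_country": (
        "SELECT ?crop WHERE {{ ?crop a :Crop ; :grownIn :{country} . }}"
    ),
}


def run_tool(name: str, **args: str) -> str:
    """Return the SPARQL text the chosen tool would send to the SPARQL API."""
    return FIXED_QUERIES[name].format(**args)
```

Either way, the query text ultimately goes through the same SPARQL API, so the triple store itself stays unchanged; only the query-construction step differs.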