Since inference is slow, it is desirable to run it once as a pre-processing step and store all the solutions in a triple-store cache (Redis or a snapshot). Because the cache holds only materialized results and performs no inference itself, the cache technology can be simpler, as long as it supports SPARQL (is this a hard requirement?).
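A minimal sketch of the pre-materialization pattern, using only the standard library: a naive transitive closure stands in for the real (slow) reasoner, and a JSON file stands in for the Redis/triple-store snapshot. All names, the `subClassOf` rule, and the file path are illustrative assumptions, not the actual system:

```python
import json
from itertools import product

# Hypothetical base triples; the subClassOf transitive rule stands in
# for whatever rule set the real reasoner applies.
BASE = [
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
]

def materialize(triples):
    """Run the (slow) inference once, returning base + derived triples.
    Here: naive transitive closure of subClassOf as a stand-in."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b), (c, p2, d) in product(list(closed), repeat=2):
            if p1 == p2 == "subClassOf" and b == c and (a, p1, d) not in closed:
                closed.add((a, p1, d))
                changed = True
    return closed

def snapshot(triples, path):
    """Persist the materialized triples; a Redis dump or a triple-store
    snapshot would play the same role in the real setup."""
    with open(path, "w") as f:
        json.dump(sorted(triples), f)

def query_cache(path, subject, predicate):
    """Answering from the cache is a plain lookup -- no inference needed."""
    with open(path) as f:
        cached = {tuple(t) for t in json.load(f)}
    return {o for (s, p, o) in cached if s == subject and p == predicate}

solutions = materialize(BASE)
snapshot(solutions, "inference_cache.json")
print(sorted(query_cache("inference_cache.json", "Dog", "subClassOf")))
# The derived triple (Dog subClassOf Animal) is served from the cache.
```

The same shape carries over to the real stack: the expensive step runs offline, and the serving layer only needs lookups. Whether those lookups must be SPARQL (rather than, say, keyed reads from Redis) is exactly the open question above.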