virajpu / androjena

Automatically exported from code.google.com/p/androjena

Execution vs validation times #14

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
Hello again ;)

As I already said in a previous post, I'm developing a semantic, rule-based 
system with androJena.

First, I create a generic rule reasoner fed with the set of microOWL rules, 
which is then used to build an inference model from an OWL-based schema + data:

    GenericRuleReasoner reasoner = new GenericRuleReasoner(microOWLrules);
    reasoner.setOWLTranslation(true);
    reasoner.setTransitiveClosureCaching(true);
    InfModel localInfModel = ModelFactory.createInfModel(reasoner, schemaAndData); 

The rule set is dynamic; that is, third parties can add new rules to the 
existing set. This is done by means of

    reasoner.addRules(rules); 

Within this context, I'm facing several questions.

Response times
I'm also trying to measure how long androJena takes to perform several 
operations in different contexts (that is, ranging from 1 to 10 user-defined 
rules, each having from 1 to 10 conditions in the antecedent).

I get sound results for the validation process (see attached file); however, 
the rule execution times are quite similar regardless of the number of 
rules/conditions. I expected that, as in the model validation test, rule 
execution would take more time as the number of rules/conditions increases, 
but that doesn't seem to be the case in my tests.

Here are the results (mean values in nanoseconds over 10 repetitions):

      1    2    3    4    5    6    7    8    9   10
1  61.5 61.4 61.9 61.7 62.4 61.0 61.1 61.8 62.0 62.6
2  61.4 61.7 61.1 62.3 66.6 62.0 62.0 65.5 62.3 62.5
3  62.0 62.2 61.9 62.4 62.9 63.9 64.9 64.2 62.5 62.4
4  62.2 62.5 62.1 62.5 62.4 61.8 62.2 62.7 63.3 62.6
5  62.5 65.7 62.0 62.2 62.3 62.8 65.5 63.7 63.4 62.6
6  63.0 63.0 62.2 63.0 62.9 63.0 64.7 63.6 63.4 62.7
7  62.9 63.3 63.7 62.8 63.3 65.4 64.9 63.5 63.2 63.9
8  65.1 64.3 63.2 63.0 62.0 60.4 60.7 61.4 59.3 60.0
9  62.1 62.2 60.7 60.7 60.6 62.8 63.5 64.0 63.3 63.7
10 64.1 63.2 64.1 63.5 65.8 63.5 80.1 65.1 63.2 63.1

Does this seem OK to you? Am I doing anything wrong?

(I trigger rule execution with the "inferenceModel.rebind()" method. I also 
checked that the rules can be executed using "inferenceModel.reset()", but it 
yields the same kind of results.)
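A side note on measurement: mean wall-clock times at this scale are easily distorted by JIT compilation and GC pauses (note the stray 80.1 in the table above). Below is a minimal, generic Java timing sketch (not androjena API; the `medianNanos` helper and its parameters are my own illustration) that discards warm-up runs and reports the median, which is more robust than the mean; in real use the stand-in workload would be replaced by something like `() -> inferenceModel.rebind()`.

```java
import java.util.Arrays;

// Hedged sketch: a minimal micro-benchmark helper. Warm-up iterations let the
// JIT settle before measurement; the median of the timed runs is reported,
// which is less sensitive to one-off GC pauses than the mean.
public class Bench {
    /** Median wall-clock time of `op` in nanoseconds over `reps` timed runs. */
    static long medianNanos(Runnable op, int warmup, int reps) {
        for (int i = 0; i < warmup; i++) op.run();   // discarded warm-up runs
        long[] samples = new long[reps];
        for (int i = 0; i < reps; i++) {
            long t0 = System.nanoTime();
            op.run();
            samples[i] = System.nanoTime() - t0;     // elapsed time of one run
        }
        Arrays.sort(samples);
        return samples[reps / 2];                    // median sample
    }

    public static void main(String[] args) {
        // Stand-in workload; substitute the operation under test here.
        long t = medianNanos(() -> {
            long s = 0;
            for (int i = 0; i < 1000; i++) s += i;
        }, 5, 11);
        System.out.println("median ns: " + t);
    }
}
```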

Thank you very much for your help.
Josué

Original issue reported on code.google.com by baca...@gmail.com on 28 May 2012 at 2:43

Attachments:

GoogleCodeExporter commented 8 years ago
As I said in issue #13, you will have better luck directing this kind of 
question to the Jena community: you can find thousands of questions and 
replies from the Jena developers on the jena-dev Yahoo group: 
http://tech.dir.groups.yahoo.com/group/jena-dev

That said, in my opinion androjena's performance shouldn't be benchmarked 
against the number of inference rules, because that number isn't necessarily 
proportional to the computational effort of the reasoning engine. You could 
use the number of inferred triples instead, for example.
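To illustrate why rule count is a poor proxy for reasoning effort, here is a toy sketch in plain Java (not androjena/Jena code; the `InferredCount` class and its transitivity rule are my own illustration): a single rule applied to a chain of N facts produces O(N²) inferred facts, so the work tracks the inferred-triple count, not the rule count.

```java
import java.util.*;

// Hedged sketch: a toy forward-chainer with ONE rule (transitivity). The
// number of rules stays constant, yet the number of inferred facts, and hence
// the work done, grows quadratically with the size of the input chain.
public class InferredCount {
    /** Runs the transitivity rule to fixpoint and returns only the newly
     *  inferred (a, b) pairs, i.e. the analogue of the deductions model. */
    static Set<List<Integer>> inferTransitive(Set<List<Integer>> facts) {
        Set<List<Integer>> all = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {                            // iterate to fixpoint
            changed = false;
            Set<List<Integer>> fresh = new HashSet<>();
            for (List<Integer> f : all)
                for (List<Integer> g : all)
                    if (f.get(1).equals(g.get(0)))   // (a,b) & (b,c) => (a,c)
                        fresh.add(List.of(f.get(0), g.get(1)));
            if (all.addAll(fresh)) changed = true;
        }
        all.removeAll(facts);                        // keep inferred facts only
        return all;
    }

    public static void main(String[] args) {
        for (int n : new int[]{5, 10, 20}) {
            Set<List<Integer>> chain = new HashSet<>();
            for (int i = 0; i < n; i++) chain.add(List.of(i, i + 1));
            System.out.println(n + " base facts -> "
                + inferTransitive(chain).size() + " inferred");
        }
    }
}
```

In Jena proper, the count of inferred triples can be read off the inference model's deductions (e.g. via `InfModel.getDeductionsModel()`), which would make a more meaningful x-axis for the benchmark than the number of rules.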

Besides, do you recreate the inference model for each repetition? I ask 
because if you are adding rules dynamically during the benchmark, those rules 
aren't applied even if you call rebind or reset, as you reported in issue #13; 
so you would be reapplying the same set of rules on every repetition, which 
would obviously give similar results each time.

I hope my suggestions help you solve the mystery; if you keep getting the 
same results, let me know and I'll dig deeper.

One last thing: the issue tracker is better suited for technical/functional 
issues. Questions like this one should be posted on the androjena group: 
https://groups.google.com/group/androjena

Bye!
lorenzo

Original comment by loreca...@gmail.com on 31 May 2012 at 5:37