logic-and-learning-lab / Popper

An inductive logic programming system
MIT License

Can Popper generalize? #74

Closed: pauljurczak closed this issue 1 year ago

pauljurczak commented 1 year ago

I would like Popper to find this:

f(X, Y, Z, 2) :- Z is 0.
f(X, Y, Z, 3) :- Y is 0.
f(X, Y, Z, 0) :- Y > 0, Z > 0.

or another simple solution to a problem defined with this bias file:

max_vars(5).

head_pred(f, 4).
body_pred(i0, 1).
body_pred(i1, 1).
body_pred(i2, 1).
body_pred(i3, 1).
body_pred(i4, 1).

type(f, (int, int, int, int)).
type(i0, (int,)).
type(i1, (int,)).
type(i2, (int,)).
type(i3, (int,)).
type(i4, (int,)).

direction(f, (in, in, in, out)).
direction(i0,(out,)).
direction(i1,(out,)).
direction(i2,(out,)).
direction(i3,(out,)).
direction(i4,(out,)).

this bk file:

i0(0).
i1(1).
i2(2).
i3(3).
i4(4).

and this exs file:

pos(f(1, 2, 2, 0)).
pos(f(1, 4, 0, 2)).
pos(f(1, 1, 3, 0)).
pos(f(1, 3, 1, 0)).
pos(f(1, 0, 4, 3)).

pos(f(1, 2, 1, 0)).
pos(f(1, 0, 3, 3)).
pos(f(1, 3, 0, 2)).
pos(f(1, 1, 2, 0)).

I'm getting a far-from-minimal solution:

Precision:1.00 Recall:1.00 TP:9 FN:0 TN:0 FP:0 Size:45
f(A,B,C,D):- i3(B),i0(D),i1(A),i1(C).
f(A,B,C,D):- i3(D),i1(A),i4(C),i0(B).
f(A,B,C,D):- i1(A),i2(B),i1(C),i0(D).
f(A,B,C,D):- i2(C),i1(A),i0(D),i2(B).
f(A,B,C,D):- i0(D),i1(A),i2(C),i1(B).
f(A,B,C,D):- i0(C),i2(D),i4(B),i1(A).
f(A,B,C,D):- i1(A),i0(D),i1(B),i3(C).
f(A,B,C,D):- i3(C),i0(B),i1(A),i3(D).
f(A,B,C,D):- i2(D),i1(A),i0(C),i3(B).

Is there a way to generalize and minimize it?

celinehocquette commented 1 year ago

You should declare the predicate geq/2 in the bias file:

body_pred(geq, 2).
type(geq,(int,int)).
direction(geq,(in,in)).

and in the bk:

geq(A,B) :- A>=B.

You could also add the following flags to the bias file to allow rules with head variables that do not appear in the body, or rules with variables that appear only once (see #14):

non_datalog.
allow_singletons.

With these additions, Popper now learns a simple but overly general hypothesis. More examples, in particular negative ones, are probably needed to learn the hypothesis you showed. Also, the first argument of f/4 seems unnecessary, as it is always 1 in the examples.
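
For example, dropping that first argument would reduce the task to f/3, along these lines (a sketch; only the first few examples are shown, with the redundant argument removed):

head_pred(f, 3).
type(f, (int, int, int)).
direction(f, (in, in, out)).

pos(f(2, 2, 0)).
pos(f(4, 0, 2)).
pos(f(0, 4, 3)).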

pauljurczak commented 1 year ago

The definition of f comes from another module, which may produce redundant arguments. I added the geq/2 predicate and the result is the same. The problem definition is now equivalent to the one given by this SyGuS file:

(set-logic LIA)

(synth-fun f ((color Int) (sortAsc Int) (sortDesc Int)) Int 
  ((I Int) (B Bool))
  ((I Int (color sortAsc sortDesc 0 1 2 3 4
           (ite B I I)))
   (B Bool ((<= I I) (= I I))))
)

(declare-var color Int)
(declare-var sortAsc Int)
(declare-var sortDesc Int)

(constraint (= (f 1 2 2) 0))
(constraint (= (f 1 4 0) 2))
(constraint (= (f 1 1 3) 0))
(constraint (= (f 1 3 1) 0))
(constraint (= (f 1 0 4) 3))

(constraint (= (f 1 2 1) 0))
(constraint (= (f 1 0 3) 3))
(constraint (= (f 1 3 0) 2))
(constraint (= (f 1 1 2) 0))

(check-synth)

This file, when fed to the cvc5 solver, produces a reasonable solution:

(define-fun f ((color Int) (sortAsc Int) (sortDesc Int)) Int (ite (<= 1 sortAsc) (ite (<= 1 sortDesc) 0 2) 3))

which is structurally fairly close to the one I posted in my original question above.
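
For reference, that solution reads roughly as the following clauses (my own hand translation of the nested ite, written out as separate clauses, so treat it only as a comparison aid):

f(_Color, SortAsc, SortDesc, 0) :- SortAsc >= 1, SortDesc >= 1.
f(_Color, SortAsc, SortDesc, 2) :- SortAsc >= 1, SortDesc < 1.
f(_Color, SortAsc, _SortDesc, 3) :- SortAsc < 1.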

pauljurczak commented 1 year ago

Do you think MagicPopper or NumSynth would perform better and produce something closer to the desired solution?

celinehocquette commented 1 year ago

There are many hypotheses consistent with the examples provided, your desired solution among them. Popper is biased towards returning a shortest one (it returns optimal solutions, i.e. solutions with a minimal number of literals). Therefore, unless there are examples to refute the simpler hypotheses, Popper will return a simple hypothesis.

MagicPopper and NumSynth suffer from the same issue. MagicPopper simply returns the following short hypothesis when the allow_singletons and non_datalog flags are enabled:

f(A,B,C,D).

I am unsure what we can do apart from providing negative examples to eliminate these hypotheses. For instance, I added the following negative examples:

neg(f(1, 0, 1, 0)).
neg(f(1, 3, 0, 0)).
neg(f(1, 0, 2, 2)).
neg(f(1, 0, 4, 2)).
neg(f(1, 1, 3, 3)).
neg(f(1, 2, 1, 3)).
neg(f(1, 3, 1, 3)).
neg(f(1, 3, 1, 4)).

And NumSynth learned the following hypothesis (Popper learns a similar one):

f(A,B,0,2).
f(A,0,C,3).
f(A,B,C,0):- geq(C,1),geq(B,1).

pauljurczak commented 1 year ago

Thank you. Adding negative examples is the way to go. I took a lazy route by using the output of a module, which produces positive examples only, but this can be fairly easily rectified.

Is there a sound strategy for generating negative examples? With small integers as arguments, I could possibly cover the whole domain of f, but I suspect that would be counterproductive. Are there any publications on this topic?

pauljurczak commented 1 year ago

Also, after adding the negative examples you suggested, I'm getting:

********** SOLUTION **********
Precision:1.00 Recall:1.00 TP:9 FN:0 TN:9 FP:0 Size:10
f(A,B,C,D):- i0(B),i3(D).
f(A,B,C,D):- i2(D),geq(B,D).
f(A,B,C,D):- geq(C,A),geq(B,A),i0(D).

when running the example script:

from popper.util import Settings, print_prog_score
from popper.loop import learn_solution

settings = Settings(kbpath='task3')
prog, score, stats = learn_solution(settings)

if prog is not None:
    print_prog_score(prog, score)

I'm probably missing some settings needed to reproduce the result you posted.

celinehocquette commented 1 year ago

This is a good question. Most work uses a closed-world assumption and generates as negative examples all examples that are not given as positive. When this is intractable, one can sample a subset of them. Alternatively, negative examples can be generated as near-misses.
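
For a small domain like this one, something along these lines would enumerate the ground atoms and print every one not listed as positive as a negative example (a sketch; the 0..4 input ranges and 0..3 output range are assumptions based on your bk and examples):

% Closed-world negative example generation (sketch).
pos(f(1, 2, 2, 0)).
pos(f(1, 4, 0, 2)).
pos(f(1, 1, 3, 0)).
pos(f(1, 3, 1, 0)).
pos(f(1, 0, 4, 3)).
pos(f(1, 2, 1, 0)).
pos(f(1, 0, 3, 3)).
pos(f(1, 3, 0, 2)).
pos(f(1, 1, 2, 0)).

write_negs :-
    forall(( between(0, 4, A), between(0, 4, B),
             between(0, 4, C), between(0, 3, D),
             \+ pos(f(A, B, C, D)) ),
           format("neg(f(~w, ~w, ~w, ~w)).~n", [A, B, C, D])).

Note that this covers the whole domain (5 x 5 x 5 x 4 = 500 ground atoms, minus the 9 positives), so in practice sampling a subset or hand-picking near-misses is usually preferable, as you suspected.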

I am not aware of publications on this topic. The following work discusses it briefly: Lavrac, Nada, Saso Dzeroski, and Marko Grobelnik. "Experiments in learning nonrecursive definitions of relations with LINUS." Josef Stefan Institute, Yugoslavia, Tech. Rep. (1990). @andrewcropper, do you know of any paper related to the generation of negative examples?

Sorry, the hypothesis I showed above was generated with NumSynth. Popper needs slightly different negative examples, such as the ones below.

neg(f(1, 0, 1, 0)).
neg(f(1, 3, 0, 0)).
neg(f(1, 0, 2, 2)).
neg(f(1, 0, 4, 2)).
neg(f(1, 3, 4, 2)).
neg(f(1, 1, 3, 3)).
neg(f(1, 2, 1, 3)).
neg(f(1, 3, 1, 3)).
neg(f(1, 3, 1, 4)).

In this case it returns the following hypothesis, which seems to have a similar meaning:

f(A,B,C,D):- i2(D),i0(C).
f(A,B,C,D):- i3(D),i0(B).
f(A,B,C,D):- i0(D),geq(B,A),geq(C,A).
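
As a quick sanity check, one could replay the examples against these clauses together with the bk (a sketch; the clauses, facts, and examples are copied from above):

i0(0). i1(1). i2(2). i3(3). i4(4).
geq(A, B) :- A >= B.

f(_A, _B, C, D) :- i2(D), i0(C).
f(_A, B, _C, D) :- i3(D), i0(B).
f(A, B, C, D) :- i0(D), geq(B, A), geq(C, A).

covers_pos :-
    forall(member(E, [f(1,2,2,0), f(1,4,0,2), f(1,1,3,0), f(1,3,1,0), f(1,0,4,3),
                      f(1,2,1,0), f(1,0,3,3), f(1,3,0,2), f(1,1,2,0)]),
           call(E)).

rejects_neg :-
    forall(member(E, [f(1,0,1,0), f(1,3,0,0), f(1,0,2,2), f(1,0,4,2), f(1,3,4,2),
                      f(1,1,3,3), f(1,2,1,3), f(1,3,1,3), f(1,3,1,4)]),
           \+ call(E)).

Both covers_pos and rejects_neg should succeed with the examples above.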

yangdinglou commented 1 year ago

I would like to offer some of my understanding and suggestions.

  1. On the negative-example side: since negative examples are needed to keep the hypothesis from becoming too general, one way to obtain the intended hypothesis with positive-only examples is to continue the search after a first solution is found (of course, more care in designing the bias is then needed).
  2. As for non_datalog: if my understanding is correct, the correctness of Popper's pruning is not guaranteed for non-Datalog programs. One possible workaround is to add placeholder predicates like the following (the corresponding bias declarations are sketched after this list).
    anynumber(_).
    type(anynumber, (int,)).

    Of course, again, you need some encoding to prefer hypotheses with fewer placeholders. But this may be a general way to learn non-Datalog programs, which I used in one of my projects based on Popper.
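
Here is roughly what the placeholder's declarations could look like (a sketch: the body_pred and direction lines are my assumptions, mirroring the declarations earlier in the thread; an (in,) direction is chosen so the placeholder only consumes already-bound head variables):

body_pred(anynumber, 1).
type(anynumber, (int,)).
direction(anynumber, (in,)).

and in the bk:

anynumber(_).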