coli-saar / utool

Utool is the Swiss Army Knife of Underspecification. It is a GUI and library written in Java for performing computations with dominance graphs and other formalisms, which are used to represent semantic ambiguities in natural language processing.
http://www.coli.uni-saarland.de/projects/chorus/utool/

rewriting rules for removing equivalences #8

Closed: arademaker closed this issue 11 months ago

arademaker commented 1 year ago

Hi,

I am trying to use the -f option with

java -jar ~/hpsg/utool/target/utool-3.4.1-SNAPSHOT-jar-with-dependencies.jar server --port 2802 --warmup -f ~/hpsg/utool/src/main/resources/examples/erg-equivalences.xml

I got the error

Exception in thread "main" de.saar.chorus.domgraph.chart.lethal.TokenMgrError: Lexical error at line 1, column 1.  Encountered: "<" (60), after : ""
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParserTokenManager.getNextToken(RewritingSystemParserTokenManager.java:1019)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_scan_token(RewritingSystemParser.java:711)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_3_2(RewritingSystemParser.java:434)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_3_1(RewritingSystemParser.java:441)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_2_1(RewritingSystemParser.java:229)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.Input(RewritingSystemParser.java:37)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.read(RewritingSystemParser.java:23)
    at de.saar.chorus.domgraph.utool.CommandLineParser.parse(CommandLineParser.java:283)
    at de.saar.chorus.domgraph.utool.Utool.main(Utool.java:58)

Any ideas, @alexanderkoller? I am trying to remove the equivalent formulas with a theorem prover, but after reading https://aclanthology.org/P06-1052/ I am excited to delegate this job to Utool! Do you have any pointers?

alexanderkoller commented 1 year ago

Hi @arademaker, yes, delegating this to Utool is exactly why we wrote that paper. :) For equivalences that can easily be phrased as permutations of neighboring quantifiers, the redundancy elimination in Utool will be dramatically more efficient than running a theorem prover.

If I remember correctly (this was 15 years ago), we replaced the XML format for the equivalences by a more human-readable specification language when we added weakest readings in 2010. Apparently we never bothered to update the documentation or the example.

There is an example of the new syntax in one of the unit tests. The parser is in this file. As you'll see, you can still specify equivalence rules; you may have to spell out the old equivalence groups into pairwise equivalences, but you can use the wildcard "*" to indicate an arbitrary symbol. There are also weakening rules (see the paper I linked above) and annotators, which assign polarities to positions in a term. If you only want to use the equivalences, you could pick a polarity x and then say that for every symbol f, if the parent polarity is x, then the children all have polarity x. This will be a bit of manual labor, sorry about that. [Or maybe you don't have to - it could be that if you don't specify a polarity annotator for a certain symbol, the children will simply all be assigned the neutral polarity. Polarities only matter if you do weakening, because the direction of weakening changes in negative polarity.]
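
To give you a rough idea of the shape of such a specification before you dig into the test and the parser: a minimal file that only uses equivalences could look something like the sketch below. The symbols f and g are placeholders, and the exact surface syntax (in particular for annotators, weakening rules and wildcards) is best taken from the linked unit test.

f(X, g(Y,Z)) = g(Y, f(X,Z))

start annotation: +
neutral annotation: 0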

Let me know if this works, perhaps for a small fragment of MRSs.

arademaker commented 1 year ago

Thank you! I will read the material and report here any progress.

arademaker commented 12 months ago

I don't understand the rules yet. Can you guide me with an example?

No people is sitting around a fire at night

I got 12 possible MRSs; the first one is:

[ TOP: h0
  INDEX: e2 [ e SF: prop TENSE: pres MOOD: indicative PROG: + PERF: - ]
  RELS: < [ _no_q<0:2> LBL: h4 ARG0: x3 [ x PERS: 3 NUM: sg IND: + ] RSTR: h5 BODY: h6 ]
          [ _people_n_1<3:9> LBL: h7 ARG0: x3 ]
          [ _sit_v_1<13:20> LBL: h1 ARG0: e2 ARG1: x3 ]
          [ _around_p_state<21:27> LBL: h1 ARG0: e8 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e2 ARG2: x9 [ x PERS: 3 NUM: sg IND: + ] ]
          [ _a_q<28:29> LBL: h10 ARG0: x9 RSTR: h11 BODY: h12 ]
          [ _fire_n_1<30:34> LBL: h13 ARG0: x9 ]
          [ _at_p_temp<35:37> LBL: h1 ARG0: e14 [ e SF: prop TENSE: untensed MOOD: indicative PROG: - PERF: - ] ARG1: e2 ARG2: x15 ]
          [ _night_n_of<38:43> LBL: h16 ARG0: x15 ]
          [ def_implicit_q<38:43> LBL: h17 ARG0: x15 RSTR: h18 BODY: h19 ] >
  HCONS: < h0 qeq h1 h5 qeq h7 h11 qeq h13 h18 qeq h16 > ]

Using my naive theorem-prover-based pruning method, I reduced the 6 solutions to 4:

I have 6 solutions
take_equivs init
take_equivs exhausted

 (0/6) ∃ x9, _fire_n_1 x9 ∧ (∀ x3, _people_n_1 x3 → ¬(∃ x15, _night_n_of x15 ∧ (∃ e2 e8 e14, _sit_v_1 e2 x3 ∧ _around_p_state e8 e2 x9 ∧ _at_p_temp e14 e2 x15)))
 (1/6) ∃ x9, _fire_n_1 x9 ∧ (∃ x15, _night_n_of x15 ∧ (∀ x3, _people_n_1 x3 → ¬(∃ e2 e8 e14, _sit_v_1 e2 x3 ∧ _around_p_state e8 e2 x9 ∧ _at_p_temp e14 e2 x15)))
 (2/6) ∀ x3, _people_n_1 x3 → ¬(∃ x9, _fire_n_1 x9 ∧ (∃ x15, _night_n_of x15 ∧ (∃ e2 e8 e14, _sit_v_1 e2 x3 ∧ _around_p_state e8 e2 x9 ∧ _at_p_temp e14 e2 x15)))
 (3/6) ∃ x15, _night_n_of x15 ∧ (∀ x3, _people_n_1 x3 → ¬(∃ x9, _fire_n_1 x9 ∧ (∃ e2 e8 e14, _sit_v_1 e2 x3 ∧ _around_p_state e8 e2 x9 ∧ _at_p_temp e14 e2 x15)))

I tried to use the rules I found in the Utool repo:

% java -jar ~/hpsg/utool/target/utool-3.4.1-SNAPSHOT-jar-with-dependencies.jar solve --input-codec mrs-prolog --output-codec plugging-oz -f ~/hpsg/utool/testsuites/rewriting-10/rules -o lixo.oz results/0.mrs.pl
Exception in thread "main" de.saar.chorus.domgraph.chart.lethal.TokenMgrError: Lexical error at line 174, column 13.  Encountered: "1" (49), after : "/"
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParserTokenManager.getNextToken(RewritingSystemParserTokenManager.java:1019)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_scan_token(RewritingSystemParser.java:711)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_3_14(RewritingSystemParser.java:408)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.jj_2_14(RewritingSystemParser.java:320)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.term(RewritingSystemParser.java:143)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.WeakeningRule(RewritingSystemParser.java:65)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.Input(RewritingSystemParser.java:45)
    at de.saar.chorus.domgraph.chart.lethal.RewritingSystemParser.read(RewritingSystemParser.java:23)
    at de.saar.chorus.domgraph.utool.CommandLineParser.parse(CommandLineParser.java:283)
    at de.saar.chorus.domgraph.utool.Utool.main(Utool.java:58)

So the /2 or /1 syntax is not accepted. I copied the rules file and removed the lines with that syntax:

% java -jar ~/hpsg/utool/target/utool-3.4.1-SNAPSHOT-jar-with-dependencies.jar solve --input-codec mrs-prolog --output-codec plugging-oz -s -f rules -o lixo.oz results/0.mrs.pl
The input graph is normal.
I will filter redundancy and weakest readings.

Solving graph ... it is solvable.
Splits in chart: 16
Time to build chart: 2 ms
Number of solved forms: 6

--- start reducing ---
Computing fta ... Converted, 6 ms.
Computing ctt ... Computing ctt type 1 rules ... ctt type 1 rules, 3 ms.
Computing ctt type 2 rules ... ctt type 2 rules, 1 ms.
Computing ctt type 3 rules ... Computing specialize ... 0 of 0 applicable, 0 ms.
Computing specialize ... 0 of 0 applicable, 0 ms.
ctt type 3 rules, 2 ms.
CTT computed, 25 rules, 8 ms.
Computing pre-image ... pre-image computed, 37 rules, 2 ms.
Computing diff ... Computed, 1 ms.
--- done reducing ---
Found 6 solved forms.
Time spent on extraction: 3 ms
Total runtime: 5 ms (1200 sfs/sec; 833 microsecs/sf))

The three quantifiers of the sentence are in the rules file:

% rg "_no_q|_a_q|def_implicit_q" rules
6:0: _a_q(0, 0)
7:+: _a_q(+, +)
8:-: _a_q(-, -)
108:0: def_implicit_q(0, 0)
109:+: def_implicit_q(0, +)
110:-: def_implicit_q(0, -)
126:0: _no_q(0, 0)
127:+: _no_q(-, -)
128:-: _no_q(+, +)

The resulting file lixo.oz has six solutions, so no redundancy was removed.

%%  autogenerated by Utool 3.4 (see github.com/coli-saar/utool/ for details)
[[plug(h6 h10) plug(h12 h17) plug(h19 h1) plug(h18 h16) plug(h11 h13) plug(h5 h7)]
[plug(h6 h17) plug(h19 h10) plug(h12 h1) plug(h11 h13) plug(h18 h16) plug(h5 h7)]
[plug(h12 h4) plug(h6 h17) plug(h19 h1) plug(h18 h16) plug(h5 h7) plug(h11 h13)]
[plug(h12 h17) plug(h19 h4) plug(h6 h1) plug(h5 h7) plug(h18 h16) plug(h11 h13)]
[plug(h19 h4) plug(h6 h10) plug(h12 h1) plug(h11 h13) plug(h5 h7) plug(h18 h16)]
[plug(h19 h10) plug(h12 h4) plug(h6 h1) plug(h5 h7) plug(h11 h13) plug(h18 h16)]
]

arademaker commented 12 months ago

Oh... I got it: I reduced it to 4 pluggings after adding the rules:

[+] _a_q(X, def_implicit_q(Y,Z)) -> def_implicit_q(Y, _a_q(X,Z))
[+] def_implicit_q(X, _a_q(Y,Z)) -> _a_q(Y, def_implicit_q(X,Z))

But I am not sure whether the rules above, with the polarities I used, make sense. My understanding is that def_implicit_q can be exchanged with _a_q.

%%  autogenerated by Utool 3.4 (see github.com/coli-saar/utool/ for details)
[[plug(h6 h10) plug(h12 h17) plug(h19 h1) plug(h18 h16) plug(h11 h13) plug(h5 h7)]
[plug(h6 h17) plug(h19 h10) plug(h12 h1) plug(h11 h13) plug(h18 h16) plug(h5 h7)]
[plug(h12 h4) plug(h6 h17) plug(h19 h1) plug(h18 h16) plug(h5 h7) plug(h11 h13)]
[plug(h19 h4) plug(h6 h10) plug(h12 h1) plug(h11 h13) plug(h5 h7) plug(h18 h16)]
]

alexanderkoller commented 12 months ago

I think you want to use = rules and not -> rules.

Could you attach the MRS in the format that you're feeding to Utool? Then I could have a look. I don't think the format you pasted above fits with either of our input codecs.

arademaker commented 12 months ago

This is the Prolog version of the MRS, the content of the file results/0.mrs.pl above:

psoa(h1,e3,[
rel('_no_q',h4,[attrval('ARG0',x6),attrval('RSTR',h7),attrval('BODY',h5)]),rel('_people_n_1',h8,[attrval('ARG0',x6)]),
rel('_sit_v_1',h2,[attrval('ARG0',e3),attrval('ARG1',x6),attrval('ARG2',h9)]),rel('_around_p',h10,[attrval('ARG0',e11),attrval('ARG1',x6)]),
rel('mod',h2,[attrval('ARG0',e13),attrval('ARG1',e3),attrval('ARG2',x12)]),rel('_a_q',h14,[attrval('ARG0',x12),attrval('RSTR',h16),attrval('BODY',h15)]),
rel('_fire_n_1',h17,[attrval('ARG0',x12)]),rel('_at_p_temp',h17,[attrval('ARG0',e19),attrval('ARG1',x12),attrval('ARG2',x18)]),
rel('_night_n_of',h20,[attrval('ARG0',x18)]),rel('def_implicit_q',h21,[attrval('ARG0',x18),attrval('RSTR',h23),attrval('BODY',h22)])],
hcons([qeq(h1,h2),qeq(h7,h8),qeq(h9,h10),qeq(h16,h17),qeq(h23,h20)]))

PS: this is not the same MRS as the one I pasted earlier; that one was produced by the ACE parser, while this one comes from the LKB, but they are isomorphic.

alexanderkoller commented 12 months ago

For this MRS, I only get five solutions:

The input graph is normal.

Solving graph ... it is solvable.
Splits in chart: 15
Time to build chart: 1 ms
Number of solved forms: 5

[def_implicit_q('_night_n_of' '_a_q'('_fire_n_1&_at_p_temp' '_no_q'('_people_n_1' '_sit_v_1&mod'('_around_p')))) 
def_implicit_q('_night_n_of' '_no_q'('_people_n_1' '_a_q'('_fire_n_1&_at_p_temp' '_sit_v_1&mod'('_around_p')))) 
'_a_q'(def_implicit_q('_night_n_of' '_fire_n_1&_at_p_temp') '_no_q'('_people_n_1' '_sit_v_1&mod'('_around_p'))) 
'_no_q'('_people_n_1' def_implicit_q('_night_n_of' '_a_q'('_fire_n_1&_at_p_temp' '_sit_v_1&mod'('_around_p')))) 
'_no_q'('_people_n_1' '_a_q'(def_implicit_q('_night_n_of' '_fire_n_1&_at_p_temp') '_sit_v_1&mod'('_around_p')))]

Could you send me the one with 12?

What you can do is to use = to indicate the equivalences, like so:

_a_q(X, def_implicit_q(Y,Z)) = def_implicit_q(Y, _a_q(X,Z))
def_implicit_q(X, _a_q(Y,Z)) = _a_q(Y, def_implicit_q(X,Z))

start annotation: +
neutral annotation: 0

This will allow you to (a) not specify how the polarity annotations percolate through the symbols (then everything just has neutral polarity - doesn't matter for equivalences), and (b) use wildcards (*) to compress a lot of rules into one.

alexanderkoller commented 12 months ago

Hi,

for the attached MRS (which you sent me by email), I'm getting six solved forms, and the equivalence below reduces that to four. Is this what you were looking for? Do you have any further questions?

_a_q(X, def_implicit_q(Y,Z)) = def_implicit_q(Y, _a_q(X,Z))

start annotation: +
neutral annotation: 0

Note that if you specify the equivalence (=) and not the weakening rule (->), you only have to specify it once. Utool will drop a solved form if it is equivalent to another solved form that it keeps. In the example, the two solved forms in which def_implicit_q is a parent of _a_q are removed.

(base) dirichlet:utool koller$ java -jar target/utool-3.4.1-SNAPSHOT-jar-with-dependencies.jar solve 0.mrs.pl  -I mrs-prolog -O term-oz -f equiv.txt  -s
The input graph is normal.
I will filter redundancy and weakest readings.

Solving graph ... it is solvable.
Splits in chart: 16
Time to build chart: 1 ms
Number of solved forms: 6

--- start reducing ---
Computing fta ... Converted, 3 ms.
Computing ctt ... Computing ctt type 1 rules ... ctt type 1 rules, 1 ms.
Computing ctt type 2 rules ... ctt type 2 rules, 1 ms.
Computing ctt type 3 rules ... Computing specialize ... 0 of 0 applicable, 0 ms.
Computing specialize ... 1 of 1 applicable, 0 ms.
ctt type 3 rules, 2 ms.
CTT computed, 21 rules, 6 ms.
Computing pre-image ... pre-image computed, 41 rules, 1 ms.
Computing diff ... Computed, 3 ms.
--- done reducing ---
['_no_q'('_people_n_1' '_a_q'('_fire_n_1' def_implicit_q('_night_n_of' '_sit_v_1&_around_p_state&_at_p_temp'))) 
'_a_q'('_fire_n_1' '_no_q'('_people_n_1' def_implicit_q('_night_n_of' '_sit_v_1&_around_p_state&_at_p_temp'))) 
'_a_q'('_fire_n_1' def_implicit_q('_night_n_of' '_no_q'('_people_n_1' '_sit_v_1&_around_p_state&_at_p_temp'))) 
def_implicit_q('_night_n_of' '_no_q'('_people_n_1' '_a_q'('_fire_n_1' '_sit_v_1&_around_p_state&_at_p_temp')))]Found 4 solved forms.
Time spent on extraction: 2 ms
Total runtime: 3 ms (1333 sfs/sec; 750 microsecs/sf))

0.mrs.pl.txt

alexanderkoller commented 11 months ago

I'll close this for now, feel free to reopen if you have further questions.

arademaker commented 11 months ago

I have two questions:

The rules only work on direct nestings of two quantifiers, right? So, each rule only references two quantifiers.

What are good strategies for finding rules? Of course, I can reason over the semantics given to the ERG quantifiers and decide which orders/nestings are redundant. I could also take a sample of sentences, run our logic-based equivalence test with a theorem prover, and enumerate some equivalences to inspect the possible redundant nestings. Do you have ideas about how to produce the rewriting rules?

arademaker commented 11 months ago

One more:

How should I read the rule below? Does the use of [ ] have a special meaning, or is it just syntax for the wildcard?

https://github.com/coli-saar/utool/blob/77012052d9f31c2bf889c3c5ece0f24925958bab/src/test/groovy/de/saar/chorus/domgraph/chart/lethal/RewritingSystemParserTest.groovy#L34

alexanderkoller commented 11 months ago

The rules only work on direct nestings of two quantifiers, right? So, each rule only references two quantifiers.

I think that is correct. Also note that they can only permute two neighboring quantifiers, i.e. where one is the immediate child of the other in the tree.
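
For instance (schematic terms, just for illustration): an equivalence like _a_q(X, def_implicit_q(Y,Z)) = def_implicit_q(Y, _a_q(X,Z)) applies to the first term below, where def_implicit_q is the immediate scope child of _a_q, but not to the second, where _no_q intervenes between them.

_a_q(X, def_implicit_q(Y, Z))
_a_q(X, _no_q(W, def_implicit_q(Y, Z)))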

What are good strategies for finding rules? Of course, I can reason over the semantics given to the ERG quantifiers and decide which orders/nestings are redundant. I could also take a sample of sentences, run our logic-based equivalence test with a theorem prover, and enumerate some equivalences to inspect the possible redundant nestings. Do you have ideas about how to produce the rewriting rules?

A lot of the equivalence rules that we experimented with involved definites and proper names - i.e. "quantifiers" whose "scope" really doesn't matter. If you already have a mapping from ERG quantifiers to predicate logic, then you probably also have a notion of what counts as an existential quantifier, and you could freely permute those with each other too.
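
As a rough sketch of what such a rule block could look like, you would state one equivalence per pair of freely permutable quantifiers; proper_q and _some_q below are just illustrative ERG-style names that I haven't checked against your grammar.

proper_q(X, _a_q(Y,Z)) = _a_q(Y, proper_q(X,Z))
proper_q(X, _some_q(Y,Z)) = _some_q(Y, proper_q(X,Z))
_a_q(X, _some_q(Y,Z)) = _some_q(Y, _a_q(X,Z))

start annotation: +
neutral annotation: 0

For a symbol like a proper name quantifier that permutes with anything, the wildcard "*" mentioned above would compress these into fewer rules; again, the exact wildcard syntax is in the linked unit test.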

We played around with Knuth-Bendix completion of an equivalence system at some point, but that was a bigger hassle than it seemed worth, and we didn't end up pursuing it. I personally wouldn't aim for completeness of the equivalence rule system (it won't be "complete" in the sense of never leaving two logically equivalent readings behind, anyway), but just for a reasonable tradeoff between manual labor and ambiguity reduction. If you really care about having the absolute minimum set of readings left over, you could first run Utool's redundancy elimination to get rid of 90% of the redundancy and then only run the theorem prover on the rest.
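
Concretely, that two-stage setup could look something like the call below; this is just a sketch reusing the flags from the earlier example, and reduced.oz (as well as the prover step you run on its contents) is your own.

java -jar target/utool-3.4.1-SNAPSHOT-jar-with-dependencies.jar solve 0.mrs.pl -I mrs-prolog -O term-oz -f equiv.txt -s -o reduced.oz

The theorem-prover-based check would then only have to compare the readings that are left in reduced.oz.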

alexanderkoller commented 11 months ago

One more:

How should I read the rule below? Does the use of [ ] have a special meaning, or is it just syntax for the wildcard?

https://github.com/coli-saar/utool/blob/77012052d9f31c2bf889c3c5ece0f24925958bab/src/test/groovy/de/saar/chorus/domgraph/chart/lethal/RewritingSystemParserTest.groovy#L34

I explained this above: the asterisk is a wildcard that matches any symbol. It is useful if the other symbol is, e.g., a proper name, which permutes with anything.

I don't remember why we used square brackets rather than round brackets, but I'm sure we had a reason. Perhaps it was to clarify that the symbol that the asterisk matches doesn't have to be unary.