aig-upf / tarski

Tarski - An AI Planning Modeling Framework
Apache License 2.0

STRIPS Check and Match Tree Implementation #118

Open haz opened 3 years ago

haz commented 3 years ago

Closes #117

Related to discussion @ https://github.com/aig-upf/tarski/discussions/112

codecov-commenter commented 3 years ago

Codecov Report

Merging #118 (ae475c3) into devel (ebfda1c) will not change coverage. The diff coverage is n/a.


@@           Coverage Diff           @@
##            devel     #118   +/-   ##
=======================================
  Coverage   97.71%   97.71%           
=======================================
  Files          51       51           
  Lines        3108     3108           
  Branches      121      121           
=======================================
  Hits         3037     3037           
  Misses         55       55           
  Partials       16       16           


haz commented 3 years ago

@beckydvn : Can you give this branch a go to see if we have a speedup on generation? Alternatively, you can let me know what the best way to test it would be and I'll have a go.

If the problem is STRIPS, we should automagically get the speedup.

beckydvn commented 3 years ago

@haz Sure, I'll give it a go :) If you want to see for yourself: if you're just testing the speed of trace generation, you can try making a large Vanilla Sampler generator. As it stands, I think anything more than a few hundred traces gets pretty slow, so hopefully that's faster on this branch.

beckydvn commented 3 years ago

try making a large Vanilla Sampler generator

vanilla = VanillaSampling(dom=dom, prob=prob, plan_len=10, num_traces=1000)

That should do the trick.

beckydvn commented 3 years ago

When I try to run that line, though, I'm getting an error here (in the Generator setup): https://github.com/QuMuLab/macq/blob/8a81da216685fa0e06327fa60ff7e371b1b74b1c/macq/generate/pddl/generator.py#L60 More specifically: AttributeError: 'Atom' object has no attribute 'subformulas'.

Here's the stack trace:

  File "/home/rebecca/macq/macq/generate/pddl/generator.py", line 89, in __init__
    self.instance = GroundForwardSearchModel(self.problem, operators)
  File "/home/rebecca/macq/venv/venv/lib/python3.8/site-packages/tarski/search/model.py", line 35, in __init__
    self.fluents = self.compute_fluents()
  File "/home/rebecca/macq/venv/venv/lib/python3.8/site-packages/tarski/search/model.py", line 42, in compute_fluents
    fluents |= set(op.precondition.subformulas)
AttributeError: 'Atom' object has no attribute 'subformulas'
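The traceback points at code that assumes every precondition has a .subformulas attribute, which a lone Atom does not. A minimal sketch of a defensive traversal, using hypothetical stand-in classes (not tarski's actual formula types) just to illustrate the shape of the fix:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal stand-ins for formula objects; the real tarski classes differ,
# these only mimic the attribute the traceback tripped on.
@dataclass
class Atom:
    name: str

@dataclass
class CompoundFormula:
    subformulas: List[object] = field(default_factory=list)

def precondition_atoms(phi):
    # Recurse only when .subformulas exists; a bare Atom is a leaf.
    if hasattr(phi, "subformulas"):
        atoms = []
        for sub in phi.subformulas:
            atoms.extend(precondition_atoms(sub))
        return atoms
    return [phi]

conj = CompoundFormula([Atom("clear(b)"), CompoundFormula([Atom("ontable(b)")])])
assert [a.name for a in precondition_atoms(conj)] == ["clear(b)", "ontable(b)"]
assert [a.name for a in precondition_atoms(Atom("handempty"))] == ["handempty"]
```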

haz commented 3 years ago

What's the problem ID you're using?

beckydvn commented 3 years ago

Just two local files for blocks world :)

haz commented 3 years ago

Working with this problem, and this sampling...

traces = VanillaSampling(dom='src/d.pddl', prob='src/p.pddl', plan_len = 20, num_traces = 100)

...we see about a 3x speedup. I've fixed the bug you found. I'll try converting to numeric IDs to improve the performance further.

haz commented 3 years ago

Ok, that's about as far as I can push it. Want to have a look @gfrances / @miquelramirez ?

If we want to bump the performance on random trace sampling from here, it would require better progression (strips specific, bit vectors, etc).
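To illustrate the "bit vectors" idea mentioned above: if each ground fluent gets a fixed index, a state becomes an integer bitmask and STRIPS applicability and progression reduce to a couple of bit operations. This is a hedged sketch of the concept, not tarski code; the encoding and helper names are invented for illustration:

```python
# Each fluent gets a bit index; states and pre/add/delete lists are bitmasks.
# Fluents here (hypothetical): 0 = clear(b), 1 = ontable(b), 2 = holding(b)

def make_action(pre: int, add: int, delete: int):
    return (pre, add, delete)

def applicable(state: int, action) -> bool:
    # All precondition bits must be set in the state.
    pre, _, _ = action
    return state & pre == pre

def progress(state: int, action) -> int:
    # Clear the delete bits, then set the add bits.
    _, add, delete = action
    return (state & ~delete) | add

pickup = make_action(pre=0b011, add=0b100, delete=0b011)
s0 = 0b011                      # clear(b) and ontable(b) hold
assert applicable(s0, pickup)
s1 = progress(s0, pickup)
assert s1 == 0b100              # only holding(b) holds afterwards
assert not applicable(s1, pickup)
```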

miquelramirez commented 3 years ago

Hi @haz,

@gfrances is on holiday, I presume for the next couple of weeks. I will try to find a moment today to look over the PR.

What I could see very quickly is that one of the tests is failing, probably because the match tree is changing the order of expansions and finding a goal state earlier than the test expects. Do we have tests to check that the match tree isn't pruning actions? I remember we had bugs like that back in lapkt which were quite hard to diagnose.

haz commented 3 years ago

A great point... How do you feel about randomized tests? I.e., generate random traces and confirm the match tree and naive approaches coincide on action applicability.
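The randomized-agreement idea could be sketched as follows. Everything here is a hypothetical stand-in (the state encoding as a set of fluents, the oracle signatures, and the action table are all invented): the point is only that a fast applicability oracle is checked against a naive one over random states, with the seed recorded so failures can be replayed.

```python
import random

def naive_applicable(state, actions):
    # Naive oracle: an action applies if its precondition set is a subset of the state.
    return {name for name, pre in actions.items() if pre <= state}

def check_agreement(fast_applicable, actions, fluents, trials=100, seed=0):
    rng = random.Random(seed)  # fixed seed so any failure is reproducible
    for _ in range(trials):
        state = frozenset(f for f in fluents if rng.random() < 0.5)
        naive = naive_applicable(state, actions)
        fast = fast_applicable(state)
        if naive != fast:
            # Print the offending input so the case can be recreated.
            print(f"mismatch: seed={seed} state={sorted(state)}")
            return False
    return True

fluents = ["p", "q", "r"]
actions = {"a1": {"p"}, "a2": {"q", "r"}, "a3": set()}
# Using the naive oracle as the 'fast' one should trivially agree:
assert check_agreement(lambda s: naive_applicable(s, actions), actions, fluents)
```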

miquelramirez commented 3 years ago

Hi @haz,

That sounds like a good concept to me.

It would be useful, though, to capture the inputs (printing to stdout) that make the test fail, so the situation can be recreated with little effort.

haz commented 3 years ago

Yep, both listing the path taken and the seed to generate things again reliably. Will circle back here once I have something implemented...
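Recording the seed is the cheap half of that plan. A minimal sketch of seeded trace generation (generate_trace and the action names are assumptions for illustration, not the macq/tarski API): replaying with the same seed reproduces the exact path taken.

```python
import random

def generate_trace(rng, actions, plan_len):
    # Stand-in sampler: pick a random action at each step.
    return [rng.choice(actions) for _ in range(plan_len)]

seed = 1234
actions = ["stack", "unstack", "pickup"]

trace_a = generate_trace(random.Random(seed), actions, plan_len=5)
trace_b = generate_trace(random.Random(seed), actions, plan_len=5)

# Same seed, same path: print both so a failing run can be recreated.
assert trace_a == trace_b
print(f"seed={seed} path={trace_a}")
```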