ML-KULeuven / problog

ProbLog is a Probabilistic Logic Programming Language for logic programs with probabilities.
https://dtai.cs.kuleuven.be/problog/

Learning a Markov process - how to avoid infinite loop? #73

Closed. nathanielvirgo closed this issue 2 years ago.

nathanielvirgo commented 2 years ago

This is another question, though possibly there's a bug behind it; I don't know. I'm sorry for asking multiple questions on the same day.

I can implement a simple Markov process like this, and it seems to work:

% initial state

0.5::x(0,1); 0.5::x(0,2).

% update kernel

0.9::x(T,1); 0.1::x(T,2) :- T>0, T1 is T-1, x(T1,1).
0.1::x(T,1); 0.9::x(T,2) :- T>0, T1 is T-1, x(T1,2).

evidence(x(5,1)).
query(x(2,_)).

Now I would like to try learning the update kernel, using the EM algorithm. However, if I enter the following code into the 'learning' web interface, it seems to go into an infinite loop:

% initial state

0.5::x(0,1); 0.5::x(0,2).

% update kernel

t(_)::x(T,1); t(_)::x(T,2) :- T>0, T1 is T-1, x(T1,1).
t(_)::x(T,1); t(_)::x(T,2) :- T>0, T1 is T-1, x(T1,2).

with the following in the 'examples' box:

evidence(x(3,1)).

I tried changing T>0 to between(1,3,T) in the code, in case it's trying to instantiate an infinite number of random variables, but that didn't help.
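Concretely, the bounded variant I tried looked like this (the same clauses, with the guard restricted to the horizon of the evidence):

% update kernel, bounded so only T = 1..3 are instantiated

t(_)::x(T,1); t(_)::x(T,2) :- between(1,3,T), T1 is T-1, x(T1,1).
t(_)::x(T,1); t(_)::x(T,2) :- between(1,3,T), T1 is T-1, x(T1,2).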

Any hints on what to do or on how to diagnose the issue would be appreciated.

wenchiyang commented 2 years ago

Hi,

There are not enough examples, so the parameters converge slowly, leading to the "ProbLog exceeded time or memory limit" error on the web interface. You can prevent this by providing more examples, for instance:

evidence(x(5,1)). evidence(x(4,1)). evidence(x(3,1)). evidence(x(2,1)).

evidence(x(5,2)). evidence(x(4,1)). evidence(x(3,1)). evidence(x(2,1)).

evidence(x(3,1)).

You can also run problog locally with the "-n" and "-d" flags to set a stricter stopping criterion, and use the "--logger" flag to record intermediate results (see https://problog.readthedocs.io/en/latest/cli.html#learning-from-interpretations-lfi).
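A local run could look something like the following; the file names and numeric values are placeholders, and the exact flag arguments should be checked against the docs linked above:

problog lfi model.pl examples.pl -O learned.pl -n 1000 -d 1e-5 -v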


nathanielvirgo commented 2 years ago

Thank you, that's very helpful. I can see now that if I run it locally it eventually does converge (but very slowly, which in retrospect is not surprising), and if I use the -v flag I can watch the convergence happening. I guess I can close this now.