Open guregu opened 2 years ago
Yes it needs a call-in API. Will look into it...
On Tue, 27 Sept 2022, 15:41, guregu wrote:
I'm working on the last major feature for the WASM ports: returning one query result at a time instead of findall'ing the query. Ideally we could just use stdin and wait for some input to continue the query, but unfortunately the majority of WASM runtimes don't implement stdin as a stream, so this approach doesn't work. I was considering doing something clever with history_getch as an exported host function but I think that'll just lead it to block forever in the browser.
Is there a way to control the redo process for queries with the current API?
I am imagining an API kind of like this:
```c
// works like pl_eval, except returns a handle to the query instead of waiting for toplevel input
query *pl_query(prolog *pl, const char *expr);

// redo a given query, returning false if there are no more choice points
bool pl_redo(prolog *pl, query *q);
```
And the WASM host would call it like so:
```js
class Prolog {
  ptr; // *prolog ptr

  // example: javascript generator function that returns one result at a time
  *query(expr) {
    // eval once and grab ptr to query handle
    const query = this.pl_query(this.ptr, expr);
    do {
      const answer = /* read from stdin */;
      yield answer;
    } while (this.pl_redo(this.ptr, query));
  }
}

// usage
const query = pl.query("member(X, [1,2,3]).");
for (const answer of query) {
  console.log(answer); // X=1, then X=2 next iteration, etc
}
```
Ideally these would be silent, not dumping vars or printing prompts so it's easy to parse the output. We can use Prolog code to marshal the answers to an easy-to-parse format, so we don't need a C results API, just a way to run redos.
Pushed roughly (doesn't return a query object) what you asked for to devel. Give it a go! For example
```c
pl_eval(pl, src);  // How tpl.c currently does it

pl_query(pl, src); // How it can be done programmatically
do { } while (pl_redo(pl));
```
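To illustrate the loop shape on the host side, here is a self-contained sketch using a mock engine in place of the real WASM exports. `makeMockEngine` and its behavior are invented for illustration; only the eval-once-then-pump-redos calling pattern mirrors the snippet above:

```javascript
// Mock engine standing in for the real pl_query/pl_redo WASM exports.
// Each "answer" plays the role of output captured from one solution.
function makeMockEngine(answers) {
  let i = 0;
  return {
    pl_query(_expr) { i = 0; return answers.length > 0; }, // run the goal once
    pl_redo() { i += 1; return i < answers.length; },      // backtrack for more
    current() { return answers[i]; },                      // captured output
  };
}

// Host-side generator: eval once, then pump redos until exhausted.
function* query(engine, expr) {
  if (!engine.pl_query(expr)) return;
  do {
    yield engine.current();
  } while (engine.pl_redo());
}

const eng = makeMockEngine(["X = 1", "X = 2", "X = 3"]);
console.log([...query(eng, "member(X, [1,2,3]).")]); // [ 'X = 1', 'X = 2', 'X = 3' ]
```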
Awesome, much appreciated. I did some testing with the JS library and it's looking good; I was able to iterate through results just as imagined.
I did find an odd bug that makes my toplevel act weird sometimes; I'm trying to isolate it at the moment. Upon backtracking, it seems like an argument to a predicate gets replaced with the wrong variable:
```prolog
% soc.pl
person(socrates).
person(plato).
mortal(X) :- person(X).
```
Using the js_toplevel module from here: https://github.com/guregu/trealla/blob/c1bc80678d7dd3d975ab81b32f69e080b7a040c9/library/js_toplevel.pl
```console
$ ./tpl --ns -g 'use_module(library(js_toplevel)), js_ask("mortal(X)")' soc.pl
{"result":"success","answer":{"X":"socrates"}}
% expected plato as well
```
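Incidentally, that `{"result": ...}` line is straightforward for the host to consume. Here is a sketch of the parsing side, assuming the framing suggested by the `write('\x2\')`/`write('\x3\')` calls in the trace below — the query's own stdout sits between STX and ETX, with the JSON result following. Treat the framing as an inference, not the library's documented format:

```javascript
// Split raw toplevel output into the query's own stdout (between \x02/\x03)
// and the trailing JSON result. The framing is inferred from the trace, so
// this is a sketch rather than the official parser.
function parseOutput(raw) {
  const start = raw.indexOf("\x02");
  const end = raw.indexOf("\x03", start + 1);
  if (start === -1 || end === -1) throw new Error("missing STX/ETX framing");
  return {
    stdout: raw.slice(start + 1, end),      // text the query printed itself
    result: JSON.parse(raw.slice(end + 1)), // the marshalled answer
  };
}

const raw = "\x02\x03" + '{"result":"success","answer":{"X":"socrates"}}';
console.log(parseOutput(raw).result.answer.X); // socrates
```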
Maybe an issue with backtracking on the variable_names variables from read_term? I'll dig some more.
[user:4] CALL js_ask("mortal(X)")
[js_toplevel:5] CALL catch(read_term_from_chars(_0,[variable_names(_1)],"mortal(X)"),_3,(write('\x2\\x3\'),write_result(error,_3),flush_output))
[js_toplevel:6] CALL '$catch'(call(read_term_from_chars(_0,[variable_names(_1)],"mortal(X)")),_3,call((write('\x2\\x3\'),write_result(error,_3),flush_output)))
[js_toplevel:7] EXIT '$catch'(call(read_term_from_chars(_0,[variable_names(_1)],"mortal(X)")),_3,call((write('\x2\\x3\'),write_result(error,_3),flush_output)))
[js_toplevel:8] CALL call(read_term_from_chars(_0,[variable_names(_1)],"mortal(X)"))
[js_toplevel:9] EXIT call(read_term_from_chars(_0,[variable_names(_1)],"mortal(X)"))
[js_toplevel:10] CALL read_term_from_chars(_0,[variable_names(_1)],"mortal(X)")
[js_toplevel:11] EXIT read_term_from_chars(mortal(_9),[variable_names(['X'=_9])],"mortal(X)")
[js_toplevel:12] CALL '$drop_barrier'
[js_toplevel:13] EXIT '$drop_barrier'
[js_toplevel:14] CALL '$block_catcher'(0)
[js_toplevel:15] EXIT '$block_catcher'(0)
[js_toplevel:16] EXIT catch(read_term_from_chars(mortal(_9),[variable_names(['X'=_9])],"mortal(X)"),_3,(write('\x2\\x3\'),write_result(error,_3),flush_output))
[js_toplevel:17] CALL query(mortal(_9),['X'=_9],_4,_5)
[js_toplevel:18] CALL write('\x2\')
[js_toplevel:19] EXIT write('\x2\')
[js_toplevel:20] CALL catch(mortal(_9),_11,true)
[js_toplevel:21] CALL '$catch'(call(mortal(_9)),_11,call(true))
[js_toplevel:22] EXIT '$catch'(call(mortal(_9)),_11,call(true))
[js_toplevel:23] CALL call(mortal(_9))
[js_toplevel:24] EXIT call(mortal(_9))
[js_toplevel:25] CALL mortal(_9)
[user:26] CALL person(_9) <------------------------------------------ normal
[user:27] EXIT person(socrates)
[js_toplevel:28] CALL '$drop_barrier'
[js_toplevel:29] EXIT '$drop_barrier'
[js_toplevel:30] CALL '$block_catcher'(1)
[js_toplevel:31] EXIT '$block_catcher'(1)
[js_toplevel:32] EXIT person(socrates)
[js_toplevel:33] CALL '$soft_cut'
[js_toplevel:34] EXIT '$soft_cut'
[js_toplevel:35] CALL _12=true
[js_toplevel:36] EXIT true=true
[js_toplevel:37] CALL write('\x3\')
[js_toplevel:38] EXIT write('\x3\')
[js_toplevel:39] CALL query_status(true,_11,_4)
[js_toplevel:40] CALL nonvar(_11)
[js_toplevel:41] FAIL nonvar(_11)
[js_toplevel:42] REDO query_status(true,_11,_4)
[js_toplevel:43] EXIT query_status(true,_11,success)
[js_toplevel:44] CALL nonvar(_11)
[js_toplevel:45] FAIL nonvar(_11)
[js_toplevel:46] CALL _5=['X'=socrates]
[js_toplevel:47] EXIT ['X'=socrates]=['X'=socrates]
[js_toplevel:48] EXIT query_status(true,mortal(socrates),success)
[js_toplevel:49] CALL write_result(success,['X'=socrates])
[js_toplevel:50] CALL solution_json(['X'=socrates],_20)
[js_toplevel:51] CALL maplist(var_json,['X'=socrates],_23)
[js_toplevel:52] CALL maplist_(['X'=socrates],_23,var_json)
[js_toplevel:53] CALL call(var_json,'X'=socrates,_29)
[js_toplevel:54] EXIT call(var_json,'X'=socrates,_29)
[js_toplevel:55] CALL var_json('X'=socrates,_29)
[js_toplevel:56] CALL atom_chars('X',_33)
[js_toplevel:57] EXIT atom_chars('X',"X")
[js_toplevel:58] CALL term_json(socrates,_35)
[js_toplevel:59] CALL atom(socrates)
[js_toplevel:60] EXIT atom(socrates)
[js_toplevel:61] CALL atom_chars(socrates,_37)
[js_toplevel:62] EXIT atom_chars(socrates,"socrates")
[js_toplevel:63] CALL !
[js_toplevel:64] EXIT !
[js_toplevel:65] EXIT term_json(socrates,string("socrates"))
[js_toplevel:66] CALL '$drop_barrier'
[js_toplevel:67] EXIT '$drop_barrier'
[js_toplevel:68] CALL maplist_([],_31,var_json)
[js_toplevel:69] EXIT maplist_([],[],var_json)
[js_toplevel:70] CALL once(phrase(json_chars(pairs([string("result")-string("success"),string("answer")-pairs([string(...)-string(...)])])),_21))
....... json DCG spam ........
[js_toplevel:1900] REDO call(mortal(_9))
[js_toplevel:1901] FAIL call(mortal(_9))
[js_toplevel:1902] REDO '$catch'(call(mortal(_9)),_11,call(true))
[js_toplevel:1903] FAIL '$catch'(call(mortal(_9)),_11,call(true))
true
; [js_toplevel:1896] REDO '$block_catcher'(1)
[js_toplevel:1897] FAIL '$block_catcher'(1)
[user:1898] REDO person(['X'=_9]) <---------------- got replaced with wrong var?
[user:1899] FAIL person(['X'=_9])
[js_toplevel:1900] REDO call(mortal(_9))
[js_toplevel:1901] FAIL call(mortal(_9))
[js_toplevel:1902] REDO '$catch'(call(mortal(_9)),_11,call(true))
[js_toplevel:1903] FAIL '$catch'(call(mortal(_9)),_11,call(true))
Playing around with it, you can remove the soft cut and still trigger it; it seems that nested call/1 in catch/3 might be the culprit.
Moving the try/catch bit upwards fixes it.
Diff here: https://github.com/guregu/trealla/commit/8d691930947e7f8001ec362cf3a9b4b44dd6a4a2
Tried creating a more minimal repro but was not successful.
Anyway, this should be enough to get the basics working. JS is looking good, will release a new version in a little bit.
It works very nicely. I released a new version of the JS lib that takes advantage of it. It's very close to hitting v1.0 I think!
I think the final thing the WASM stuff needs is having this work kind of like task/N, or fork/0, except instead of using wait/0 we manually prod along each task (subquery) with redo requests from the host. We don't have to worry about synchronization here, we can leave it up to the host to deal with that.
The use case here is to let people asynchronously run queries against the same knowledge base. In the browser world it can be pretty unpredictable the order things run in sometimes.
```js
// contrived example for current impl
const a = pl.query("between(0,9,X).");
const b = pl.query("between(10,19,N).");
await a.next();
await b.next();
await a.next(); // throws (invalid iterator)
```
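A sketch of what explicit handles would buy: with a single implicit query, starting `b` invalidates `a`'s iterator, whereas a handle-keyed design lets both advance independently. `MockProlog` below is invented for illustration; only the handle-based query/redo shape reflects the proposal:

```javascript
// Handle-based design (sketch): each query gets an id, so several can be
// driven concurrently against the same knowledge base.
class MockProlog {
  #queries = new Map();
  #nextId = 1;
  query(values) { // values stand in for a goal's successive solutions
    const id = this.#nextId++;
    this.#queries.set(id, { values, i: 0 });
    return id; // an int handle, WASM-friendly
  }
  current(id) {
    const q = this.#queries.get(id);
    return q.values[q.i];
  }
  redo(id) {
    const q = this.#queries.get(id);
    q.i += 1;
    return q.i < q.values.length;
  }
}

const mock = new MockProlog();
const qa = mock.query([0, 1, 2]);
const qb = mock.query([10, 11]);
console.log(mock.current(qa)); // 0
console.log(mock.current(qb)); // 10
mock.redo(qa); // qa survives qb's creation
console.log(mock.current(qa)); // 1
```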
If you want multiple async queries I would have to change the API to what you originally asked for, returning query handles or IDs. Let me know.
Yes, please. That would be great. Feel free to tweak the API however you'd like, I can adapt the code from the current WASM stuff easily to it. Query handle can be anything as long as WASM can pass it (any pointer, or int32/64).
Took me a while to see the point of this stuff, but now I can imagine an ad-hoc shared, distributed, federated knowledge base: the social-memory complex (SMC). Maybe a tie-in to the semantic web ideas Jos is working on.
It works wonderfully! Thank you so much @infradig. I'll release the updated libs soon. Just a tiny bit of polishing on my end and I think we are ready for trealla-js v1.0! I'll also update the Go stuff.
P.S.: love the name SMC ;) I think this will be a big plus for people looking to do logic stuff on the web. Trealla is a very compact binary so it's perfect for embedding. Exciting times ahead.
Just joking about the SMC. But the JS stuff is looking good. Will be happy to merge it in when you've cut v1.0. Good work.
I've cut beta releases for the JS and Go libs and they are looking good. I'm going to try to nail down some nuances in the toplevel API before the big one dot zero (as in here https://github.com/guregu/trealla-js/issues/4).
Feel free to merge the changes for the redo API into main. API is solid and I don't foresee needing anything more.
Is it OK to use yield/0 with the new query API? I think I can use it to implement some async stuff on the JS side.
I added an API to grab the subquery's yielded flag and made this:
https://php.energy/trealla.html?src=%3A-+use_module%28library%28format%29%29.%0A%0Afetch%28URL%2C+Content%29+%3A-%0A++phrase%28format_%28%22return+fetch%28%7Ew%29.then%28x+%3D%3E+x.json%28%29%29%3B%22%2C+%5BURL%5D%29%2C+JS%29%2C%0A++js_eval_json%28JS%2C+Content%29.&q=fetch%28%22https%3A%2F%2Fhttpbin.org%2Fget%22%2C+X%29.&v=0
Kinda can't believe it works! It was way easier than I expected to get async to work.
It's a testament to the quality of this code base, thanks a bunch!
More here: https://github.com/guregu/trealla-js/issues/17#issuecomment-1264653264
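The control flow that makes this work can be sketched with a plain generator standing in for the yielding subquery. `drive` and `fakeQuery` are invented for illustration; in the real system the "resume" step is a redo after the host stashes the result where the guest can read it:

```javascript
// Host loop for a yielding query: advance the query; when it yields,
// perform the async work, then resume it with the result.
async function drive(subquery, doAsyncWork) {
  let resumeValue;
  for (;;) {
    const { value, done } = subquery.next(resumeValue);
    if (done) return value;                 // query completed
    resumeValue = await doAsyncWork(value); // query yielded a request
  }
}

// A fake query that "yields" one fetch-like request, then finishes.
function* fakeQuery() {
  const body = yield "https://httpbin.org/get"; // illustrative URL from above
  return "got: " + body;
}

drive(fakeQuery(), async (url) => "response for " + url)
  .then((r) => console.log(r)); // got: response for https://httpbin.org/get
```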
I was going to say your guess is as good as mine here, but it seems to work.
I didn't return a query handle as I didn't think it would be needed to have multiple queries outstanding. Maybe an enhancement later. Also, they wouldn't be thread-safe. You can always use multiple prolog objects if you need that.
Ok, updated the API.