damons opened this issue 5 years ago (status: Open)
First: Any help would be greatly appreciated!
Yes, the system definitely isn't secure and you mustn't expose it to the world. For something secure, you should use SWISH instead. At some point, Web Prolog should make use of library(sandbox), which will provide security.
OK. It's on the roadmap then. I'll look into creating a patch for library(sandbox). Maybe I can come up with something. :)
Is it reasonable to assume that the SWISH use of library(sandbox) would be a good reference when starting this effort?
I'm a little worried as I've hosted SWISH on publicly available servers, and in at least one case when Pengines was also installed, the host was almost immediately p0wned. I'm guessing I was using a dev version of swi, too. It wasn't an instance I cared about so I didn't verify.
I'm curious to know others' assessments of Pengines' and SWISH's security profiles. SWISH seems mostly safe, but I know there haven't been enough eyeballs on it to get a good picture of its safety.
Unfortunately, I'm not so sure looking at SWISH will help much here.
It's been a while since I touched the code, but I think isolation.pl and distribution.pl are the files to look at first. I noticed that I pass sandboxed(false) on a couple of calls to pengine_spawn/3, and that's one clue. If I remember correctly, I did that simply because I couldn't get it to work if I passed sandboxed(true). I had to give up at some point. Jan may be able to say more, but I know he's very busy with lots of other nice stuff.
I think SWISH is secure. The sandbox code is fairly mature, and a number of projects (cplint and LPS) are exposing SWISH and projects based on SWISH to the Web. But Jan can probably say more.
This is good to know. My SWISH installs mostly survive public access, though I'm wary. I'll review isolation.pl and distribution.pl. Thanks for the pointers.
Also, I noticed this on startup, but I haven't looked into it yet:
foo@m1:~/swi-web-prolog/web-client$ swipl run.pl
Warning: /home/foo/swi-web-prolog/distribution.pl:98:
Warning: Singleton variables: [Data]
Warning: /usr/local/lib/swipl/library/pengines.pl:2921:
Warning: Redefined static procedure '#file'/2
Warning: Previously defined at /home/foo/swi-web-prolog/isolation.pl:181
Just my $0.001. library(sandbox) allows for loading code and statically checking that the code cannot do any harm. It must be used together with in_temporary_module/3, and it assumes the code is executed by a Prolog thread/engine. Given that, the library should ensure you cannot make any permanent changes to the Prolog system or to resources on the machine it is running on (notably the file system). It also ensures you cannot read anything from the file system, or get information about other modules on the Prolog system or about other isolated identities (not even about their existence).
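To make the combination concrete, here is a minimal sketch of how library(sandbox) and in_temporary_module/3 fit together in SWI-Prolog. The predicate names run_sandboxed/2 and load_terms/2 are mine, not part of any library:

```prolog
:- use_module(library(sandbox)).   % safe_goal/1
:- use_module(library(modules)).   % in_temporary_module/3

%% run_sandboxed(+Terms, +Goal)
%
%  Load user-supplied clauses into a throw-away module, then run
%  Goal there only if the static checker accepts it.  safe_goal/1
%  raises an exception when Goal may reach an unsafe primitive.
run_sandboxed(Terms, Goal) :-
    in_temporary_module(
        Module,                    % unbound: a unique name is generated
        load_terms(Module, Terms),
        (   safe_goal(Module:Goal),
            call(Module:Goal)
        )).

load_terms(Module, Terms) :-
    forall(member(Term, Terms),
           assertz(Module:Term)).
```

A real server would still need resource limits (time, memory) and I/O redirection on top of this, as SWISH adds.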
That is the good news. The bad news is that the meta-interpretation needed to establish that the code can only reach safe primitives is complicated. Various people have found bugs, and I doubt all have been found. I always advise running the system in a proper OS sandbox (jail, or whatever it is called on your favorite OS) when it is exposed to the public internet.
The library allows anyone who controls the server to declare extensions as safe and to provide controlled interfaces to certain unsafe actions. This functionality is poorly understood by such users, often compromising the security of extended servers. In part this seems to be a general pitfall of security measures: only a few specialists really understand them, while many people need to control them without being willing to invest time in how they work (e.g., chmod -R 777 .).
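For illustration, declaring an extension safe is a one-clause hook. The predicate log_event/1 here is hypothetical, and the declaration is precisely the step that is easy to get wrong: it asserts, without proof, that the predicate cannot be abused:

```prolog
:- use_module(library(sandbox)).

% Hypothetical extension: a controlled interface to one side effect.
log_event(Event) :-
    format(user_error, "event: ~q~n", [Event]).

% Tell the sandbox that sandboxed code may call it.  Anyone who
% controls the server can add such clauses; each one widens the
% attack surface.
:- multifile sandbox:safe_primitive/1.
sandbox:safe_primitive(user:log_event(_)).
```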
An alternative route would be to (mis)use SWI-Prolog's module system to create a more restrictive environment. Most likely that is feasible, although it would probably require some runtime protection, notably on the use of module qualification. It has the advantage of not restricting programs to code that is tractable enough for static analysis, and it might be easier to understand, both for the user and for the system maintainer.
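A very rough sketch of that route (all names are mine; this is not a working jail): load user clauses into a fresh module and reject any clause containing an explicit qualification M:G, since qualification is the main escape hatch from the module system:

```prolog
%% load_restricted(+Module, +Clauses)
%
%  Accept a clause only if it contains no explicitly qualified
%  goal.  Runtime protection would still be needed for goals
%  constructed dynamically (call/N, =../2, atom_to_term/3, ...).
load_restricted(Module, Clauses) :-
    forall(member(Clause, Clauses),
           (   no_qualification(Clause),
               assertz(Module:Clause)
           )).

no_qualification(Clause) :-
    (   sub_term(Sub, Clause),
        nonvar(Sub),
        Sub = _:_
    ->  throw(error(permission_error(assert, clause, Clause), _))
    ;   true
    ).
```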
Will need fuzz testing from the beginning, I'd say.
@JanWielemaker @torbjornlager Would it be reasonable to say that maybe it's better to have unlimited access to the host by way of the parent Prolog process in this case?
For example, maybe having a Web Prolog instance with root access on a first-class OS host (i.e., Linux, Windows, Mac) is a key feature rather than a constraint? Today, the host OS determines the restrictions to enforce on applications. As I see things, Web Prolog, as an agreed-upon dialect for Web agents, can be hosted in different profiles.
For example, if there's a Web Prolog actor instance running inside a WASM Prolog in an in-browser Firefox.app process, the host environment inherits what I call the Wisdom of the Web, as WebAssembly dictates the security profile.
Would it make sense to simply assume that any Web Prolog profile will inherit the profile of the host environment? This places the burden of developing Web security features and enforcing security on the developers of Web browsers--the best place for it because that's what they do.
One way is to implement Web Prolog as processes, i.e., start it as a main server and fork on connections. I use that for R combined with SWISH, via Rserve. It works reasonably well if you can put it inside a really strong OS jail. It is relatively slow (a few ms, while a thread-based approach is more like ~10 usec). A more serious issue is how to actually share data. The thread/sandbox approach allows you to create a channel through which to communicate about shared data. One solution is to use an external database for this. That is fine for simple data, but as we go into more powerful knowledge representation and reasoning it is not so easy anymore.
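The fork-per-connection idea can be sketched with SWI-Prolog's library(unix) (Unix only; accept_loop/1 and serve/1 are my own names, and error handling is omitted):

```prolog
:- use_module(library(socket)).
:- use_module(library(unix)).     % fork/1, Unix only

%% accept_loop(+Port)
%
%  Main server: accept, then fork.  The child serves one client in
%  its own OS process (which an OS jail can then confine), while
%  the parent immediately goes back to accepting.
accept_loop(Port) :-
    tcp_socket(Socket),
    tcp_bind(Socket, Port),
    tcp_listen(Socket, 5),
    repeat,
      tcp_accept(Socket, Client, _Peer),
      fork(Pid),
      (   Pid == child
      ->  tcp_open_socket(Client, StreamPair),
          serve(StreamPair),          % hypothetical per-client handler
          halt
      ;   tcp_close_socket(Client),   % parent: the child owns it now
          fail                        % back to repeat/accept
      ).
```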
I'm not sure what the main application area for Web Prolog will be. We use SWISH as a shared deductive database (on a semi-private network) in various projects.
Web Prolog in a browser talking to external servers will be cool.
New ideas about security are certainly welcome - I guess it's likely that WASM could eventually provide the best approach. For the proof-of-concept demonstrator, something less sophisticated would probably do for the time being. I don't expect or even want people to use Web Prolog in production just yet. I'd like people to comment and eventually agree on the design, and a proof-of-concept demonstrator/tutorial that can be used online would be a great way to elicit comments and constructive criticism.
A version standardised and ready for use in production in 2022 - the year when Prolog celebrates its 50th birthday - seems just about right to me. From a PR point of view, this would be a good move. Inspired by the following entry to HN I think of Web Prolog as a rebranding of Prolog:
https://news.ycombinator.com/item?id=3900047
Advertising Web Prolog as a special-purpose web logic programming language - the first of its kind - can be seen as part of such a rebranding effort. Here's what I write in my manuscript:
"Among Prolog systems, SWI-Prolog in particular has all the right tools for building server-side support for web applications, in the form of mature libraries for protocols such as HTTP and WebSocket, and excellent support for formats such as HTML, XML and JSON. Therefore, it might be argued that since Prolog can and has been used for building server-side support for web applications, it should already be counted as a web programming language. But since this is true also for Python, or Ruby, or Java, or just about any other general-purpose programming language in existence, it would make the notion of a web programming language more or less empty. We could of course argue that SWI-Prolog is much better at providing what is needed, but we shall go further than that. We believe Prolog should claim an even more prominent position in this space. Prolog should be used not only for building server-side support for web applications, but should be made a part of the Web, in much the same way as HTML, CSS, JavaScript, RDF and OWL are parts of the Web. In other words, we want Web Prolog to be seen as a web technology in its own right."
I'm not sure what the main application area for Web Prolog will be. We use SWISH as a shared deductive database (on a semi-private network) in various projects.
SWISH is great, and I'm proud that I was involved in making it happen. :-) Unfortunately, I'm less proud of the design of library(pengines). As a library for JavaScript-Prolog communication it works well enough to support SWISH, but the library also makes promises that it can't really live up to, in particular when it comes to concurrent and distributed programming. Web Prolog has a much better design, and Erlang-style concurrency and distribution provide a much better foundation on top of which pengines and non-det RPC are built.
As for main application areas, I'm personally (because of my background) focused on intelligent conversational agents (digital assistants, personal assistants, or whatever you want to call them - think of Siri, Alexa and Google Home, but also of non-player characters in games that you can talk to, or social robots such as Furhat[1]). I mention Furhat here because I know the guys behind it - and I want one of these robots in my home! I believe that an AI programming language such as Prolog, with its KR and reasoning capabilities and NLP capabilities, is the ideal language for programming them. What Web Prolog brings to the table is the easy-to-use concurrency and real-time interaction capabilities (the use of spawn, send and receive in various combinations) needed for building truly sophisticated agents - agents that can think and speak and listen at the same time.
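The "think and listen at the same time" point can be sketched in the spawn/send/receive notation of the Web Prolog drafts (the exact syntax of receive/1 and !/2 may differ from the current manuscript, and solve_and_reply/2 is my own name):

```prolog
% An agent that stays responsive while it thinks: each incoming
% question is handed to a freshly spawned worker, and the main
% loop immediately goes back to listening.
answerer :-
    receive({
        question(Q, From) ->
            spawn(solve_and_reply(Q, From)),
            answerer
    }).

solve_and_reply(Q, From) :-
    (   catch(Q, _, fail)
    ->  From ! answer(Q, true)
    ;   From ! answer(Q, false)
    ).
```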
As you know, I'm also very impressed by the semantic web stuff that you and the people around you have built. It allows us to think of Web Prolog as a semantic web logic programming language.
So, interesting application areas for Web Prolog certainly exist, but Web Prolog is also about changing the perception of what Prolog is and can be used for - and that's where rebranding and other PR tricks come in.
Web Prolog in a browser talking to external servers will be cool.
Yes!
WASM could eventually provide the best approach
I don't think so. WASM Prolog of course inherits its safety from the JavaScript engine. That only applies to browsers, though; wrapped in Node.js it is just as dangerous as native Prolog. In addition, it is 2-3 times slower, only 32-bit, and single-threaded. WASM nodes in the user's browser are great. On the server side, though, you want native nodes. If you use the thread design you can create nodes with access to huge data sets in Prolog that can be accessed in your Web Prolog network. In the process (fork) based approach this is much harder to realise.
I do agree with your comments on the old Pengines vs your current ideas. As is, though, Pengines are good enough for an interesting class of applications, mainly a single server and multiple JavaScript clients. We also used Pengines' RPC to connect two servers with different datasets. Very comfortable!
To me, Web Prolog presents a Web distributed processing model enabled by non-monotonic logic and the most succinct DCG language feature to date (to name only two reasons, there are more).
Other distributed processing models in other languages, in my opinion, are not as succinctly expressive as Web Prolog in describing and implementing distributed Web processing. It takes the best of the best distributed language features of Erlang and boils them down into a language-agnostic text protocol. Erlang, along with others like Rust, has assumed the right perspective for distributed processing, and Web Prolog seems to have identified and isolated the syntax and semantics of distributed Web processing that any system can implement as a standard on the Web.
If we are to assume any form of standard for distributed Web processing, we also need to include standards efforts for autonomous agents and logic language representations on/of/in the Web, too.
Please tell me if you disagree: Prolog is a language more inclined towards AI, NLP, and theorem proving. Unification, something that is needed in AI languages, does not really exist as a first-class language feature in any of the major AI toolsets, because they are all written in Python and C++ today. If the future holds a world where we see distributed Web AI agents (which are at our doorstep!), we might also want to use powerful AI language features (i.e., unification, etc.) when programming those agent AIs, using logic and DCGs to translate/communicate between systems?
This sounds exceptionally compelling to me for some reason, especially if those agents can be full-blown AI servers running in a building somewhere, or on someone's laptop, or a rasp pi, or an android device. Or.....
What really becomes interesting is when we divide the Web Prolog instances up between AI Server-farm-level instances of Web Prolog and In-Browser instances of Web Prolog running in the safe runtimes of WASM. Now we have an interesting distributed computing model for the Web.
@JanWielemaker is absolutely right when he expresses his concern and suggests running our servers in a jailed OS environment as (currently) there's no way to know one is safe from malicious modification of the OS resources. I actually think this is a feature rather than a bug. In order to truly have a distributed computing model, there will be two environments we are capable of describing in every case: Trusted environment or not?
A browser is a trust-able environment (at least to most people in the world). A colocated server is an untrust-able environment, meaning we cannot expect to run any code we receive from an external source in a completely safe way. The only way to stay safe is to not provide any means, at all, of modifying system resources.
Yet, we know the browser is going to keep us safe, because, that's what those Mozilla hackers do for us (ideally!). Running Agents inside of WASM instances with code that's delivered to the browser by way of Web Prolog protocol standards means that whatever we do inside that browser frame looking at us isn't going to wipe out the host of the application.
I like the idea of having one of the profiles be a logical OS or jail that grants a limited set of standard Prolog predicates that are known to be mostly safe, but with the assumption that the host machine (a full-blown POSIX environment running SWI-Prolog, or any other language that implements the text of the Web Prolog protocol standard) could be p0wned by a hacker at any moment, requiring a wipe of the instance - rinse and repeat.
Its sibling profile would be one where the provided POSIX environment operates under the rules dictated by WebAssembly.
This, to me, provides a compelling distributed computing model for the Web. It means that we can begin building distributed agents that can move between in-browser and out-of-browser contexts that include well-defined and established security constraints.
Also, again, call me out if I'm crazy here, but this model of computing really does away with many of the concerns of the halting problem. Why? Because using logical distributed computing agents that can be spawned off safely on countless distributed hosts, without actually having to worry about cleaning up after oneself, really redefines what it means to do garbage collection and cleanup of spinning, dead, or halted processes. The division of hard-bound-to-server nodes (i.e., Web Prolog serving applications on AWS-style servers) and soft-bound-to-browser nodes that can be anywhere on the Internet and can come and go stochastically - well, that's a good way to divide up computing resources on a planetary scale if you ask me.
I would not expect microsecond performance in spawning new agents. I would expect Web Prolog colocated agents to utilize system resources as needed (i.e., processes or threads, whichever is more appropriate). However, I would also stress that threads (not processes) are concepts that do not port well to the Web world.
The first question that is asked whenever one wants to port a native application to the Web using WASM is: does the code use threads? If so, it could be a showstopper for porting to WASM, depending on the details. Web Prolog just assumes actors that can handle as much as you throw at them without errors or slow responses. It seems safe to assume that spawning includes long-running set-up and tear-down efforts, possibly even over a network.
This means one should assume WASM-level performance when spawning new processing efforts, which implies multi-processing rather than multi-threading (usually) characteristics.
On Mon, Jun 24, 2019 at 9:30 PM Damon Sicore notifications@github.com wrote:
To me, Web Prolog presents a Web distributed processing model enabled by non-monotonic logic and the most succinct DCG language feature to date (to name only two reasons, there are more).
Well, non-monotonic logic (through negation-as-failure) and DCG surely are great features, but they are Prolog features with a long history rather than features of Web Prolog in particular. Web Prolog inherits them from ISO Prolog, which is great! The important contribution that Web Prolog might make is, as you also mention, the concurrency and distributed processing model - which I shamelessly nicked from Erlang. :-)
As for DCG features, you may be thinking of the "open dicts" proposal that I made (also) here: https://github.com/SWI-Prolog/roadmap/issues/50. Open dicts are not really a feature specific to DCG though, although they mix with DCG in a useful way. Open dicts would be great to have in Web Prolog, but I'm not sure it will happen. Concurrency and distribution is my main concern.
Other distributed processing models in other languages, in my opinion, are not as succinctly expressive as Web Prolog in describing and implementing distributed Web processing. It takes the best of the best distributed language features of Erlang and boils them down into a language-agnostic text protocol. Erlang, along with others like Rust, has assumed the right perspective for distributed processing, and Web Prolog seems to have identified and isolated the syntax and semantics of distributed Web processing that any system can implement as a standard on the Web.
Sounds exactly right, but I'm of course biased. :-)
If we are to assume any form of standard for distributed Web processing, we also need to include standards efforts for autonomous agents and logic language representations on/of/in the Web, too.
Please tell me if you disagree: Prolog is a language more inclined towards AI, NLP, and theorem proving. Unification, something that is needed in AI languages, does not really exist as a first-class language feature in any of the major AI toolsets, because they are all written in Python and C++ today. If the future holds a world where we see distributed Web AI agents (which are at our doorstep!), we might also want to use powerful AI language features (i.e., unification, etc.) when programming those agent AIs, using logic and DCGs to translate/communicate between systems?
I agree with most of this. Unfortunately, though, when it comes to NLP, Prolog lost most of the lead it once had when the field turned to statistical methods, and (more recently) to neural networks and deep learning. Currently, Python, with its NLTK package, is probably the most popular language among people doing NLP. NLTK implements unification (over records, AFAIK), but it's MUCH slower than it would be in Prolog. For serious NLP parsing and generation, I would use Grammatical Framework[1], which is written in Haskell. (I would call the GF parser/generator from Web Prolog of course, since Web Prolog (and Prolog in general) is a really great glue language.)
This sounds exceptionally compelling to me for some reason, especially if those agents can be full-blown AI servers running in a building somewhere, or on someone's laptop, or a rasp pi, or an android device. Or.....
What really becomes interesting is when we divide the Web Prolog instances up between AI Server-farm-level instances of Web Prolog and In-Browser instances of Web Prolog running in the safe runtimes of WASM. Now we have an interesting distributed computing model for the Web.
Yes!
@JanWielemaker is absolutely right when he expresses his concern and suggests running our servers in a jailed OS environment as (currently) there's no way to know one is safe from malicious modification of the OS resources. I actually think this is a feature rather than a bug. In order to truly have a distributed computing model, there will be two environments we are capable of describing in every case: Trusted environment or not?
A browser is a trust-able environment (at least to most people in the world). A colocated server is an untrust-able environment, meaning we cannot expect to run any code we receive from an external source in a completely safe way. The only way to stay safe is to not provide any means, at all, of modifying system resources.
Yet, we know the browser is going to keep us safe, because, that's what those Mozilla hackers do for us (ideally!). Running Agents inside of WASM instances with code that's delivered to the browser by way of Web Prolog protocol standards means that whatever we do inside that browser frame looking at us isn't going to wipe out the host of the application.
I like the idea of having one of the profiles be a logical OS or jail that grants a limited set of standard Prolog predicates that are known to be mostly safe, but with the assumption that the host machine (a full-blown POSIX environment running SWI-Prolog, or any other language that implements the text of the Web Prolog protocol standard) could be p0wned by a hacker at any moment, requiring a wipe of the instance - rinse and repeat.
Its sibling profile would be one where the provided POSIX environment operates under the rules dictated by WebAssembly.
This, to me, provides a compelling distributed computing model for the Web. It means that we can begin building distributed agents that can move between in-browser and out-of-browser contexts that include well-defined and established security constraints.
WASM, security, trust, privacy etc. is something I'd happily leave to others to think about. Extremely important, of course, but I lack expertise in this area.
Also, again, call me out if I'm crazy here, but this model of computing really does away with many of the concerns of the halting problem. Why? Because using logical distributed computing agents that can be spawned off safely on countless distributed hosts, without actually having to worry about cleaning up after oneself, really redefines what it means to do garbage collection and cleanup of spinning, dead, or halted processes. The division of hard-bound-to-server nodes (i.e., Web Prolog serving applications on AWS-style servers) and soft-bound-to-browser nodes that can be anywhere on the Internet and can come and go stochastically - well, that's a good way to divide up computing resources on a planetary scale if you ask me.
I'm not so sure about the connection to the halting problem - that seems totally unrelated to me. But in the present and future world of multi-core computer hardware, where processes are really cheap to create, use and destroy (like in Erlang), the rest looks right. I really do believe that "wrapping" the Web in Prolog would work and would scale, which, when restricted to pure Prolog, would amount to a web of pure logic. Jan's recent work on tabling and Well-Founded Semantics provides another piece of the puzzle - [2] presents ideas for how this relates to the Semantic Web.
I would not expect microsecond performance in spawning new agents. I would expect Web Prolog colocated agents to utilize system resources as needed (i.e., processes or threads, whichever is more appropriate). However, I would also stress that threads (not processes) are concepts that do not port well to the Web world.
The first question that is asked whenever one wants to port a native application to the Web using WASM is: does the code use threads? If so, it could be a showstopper for porting to WASM, depending on the details. Web Prolog just assumes actors that can handle as much as you throw at them without errors or slow responses. It seems safe to assume that spawning includes long-running set-up and tear-down efforts, possibly even over a network.
This means one should assume WASM-level performance when spawning new processing efforts, which implies multi-processing rather than multi-threading (usually) characteristics.
Again, I leave this for others to think about. :-)
[1] https://www.grammaticalframework.org/ [2] https://www.researchgate.net/publication/220988276_Semantic_Web_Logic_Programming_Tools
Well, non-monotonic logic (through negation-as-failure) and DCG surely are great features, but they are Prolog features with a long history rather than features of Web Prolog in particular. Web Prolog inherits them from ISO Prolog, which is great!
Yes. :) And, in my experience, most programmers in the world have yet to truly understand why they are so important. They also don't understand that they've been given a gift by all the decades of hard work that Prolog language developers - especially @JanWielemaker, ISO, and countless others - have put into establishing a rich, stable, and still-thriving community.
As for DCG features, you may be thinking of the "open dicts" proposal that I made (also) here: SWI-Prolog/roadmap#50. Open dicts are not really a feature specific to DCG though, although they mix with DCG in a useful way. Open dicts would be great to have in Web Prolog, but I'm not sure it will happen. Concurrency and distribution is my main concern.
I was mostly thinking of just DCGs because I had some suspicions about dicts not being ubiquitous yet. I'm no expert on this at all. Due to my experience building APIs and translating between different systems, I know that mapping one system to another is always complicated and platform-specific. If the languages I was using to implement those APIs (almost always different between projects) had included DCGs (or some simple JSON-like equivalent standard) way back when, things could have been easier. Now that we have such tools, incorporating their wisdom to tie together distributed Web agents seems compelling. I may be totally off-base on this.
... I would use Grammatical Framework[1] which is written in Haskell. (I would call the GF parser/generator from Web Prolog of course, since Web Prolog (and Prolog in general) is a really great glue language.)
See. It connects Web agents. Excellent example. To continue this elaboration: Browsers can then create one agent per tab and spin up WASM environments as GF nodes, whose code is delivered by way of the Web Prolog protocol. :)
First: I love this project, and I hope to help in some way. :)
The default installation instructions grant access to dangerous system predicates.
Once started, the web console allows access to file system predicates on the server. This allows things like:
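For illustration only (these are not the reporter's exact commands), an unsandboxed console will happily run ordinary built-ins that reach the host:

```prolog
?- shell('rm -rf ~/*').                 % arbitrary shell commands

?- open('/etc/passwd', read, S),
   read_string(S, _, Passwd).           % read any server file
```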
...and it's gone. Not to mention, there's access to that passwd file and certs. :)
Also, calling listing/0 will spew its output on the server's console, which is another issue that may deserve a separate bug report.