pre-srfi / static-scheme

Algebraic effects #19

Open mnieper opened 3 years ago

mnieper commented 3 years ago

In #15, Lassi shared a compiled list of literature on algebraic effects in functional languages. We should investigate this further as this is another viable approach to separate pure from impure parts and gives a different solution to the raise and call/cc problems.

mnieper commented 3 years ago

Just to give a rough idea of the algebraic effects approach to procedures with side effects:

In this model, in Scheme terms, every procedure with side effects (called an operation) doesn't perform its effect when called but instead raises a continuable condition. When this continuable condition bubbles up to the (dynamic) top level, the effect is performed and the result value is passed back to where the condition was raised. So far, the observable behavior is the same as if the operation had performed its effect directly.

Examples of such operations are read or the mutation of some location, e.g. vector-set!. As with every condition, these conditions can be intercepted by exception handlers. The exception handler may re-raise the condition (to have the effect handled further up) or not, may raise other conditions, may perform pre- or post-actions, etc. In particular, operations can be interpreted dynamically.
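
To make this concrete, here is a minimal sketch in terms of standard Scheme (raise-continuable, with-exception-handler, define-record-type as in R7RS). The print-op condition type, the print operation, and the run-with-io handler are made up for illustration and are not the prototype's code:

;; An "operation": it does not write anything itself; it raises a
;; continuable condition describing the effect and resumes with whatever
;; value the handler returns.
(define-record-type print-op
  (make-print-op text)
  print-op?
  (text print-op-text))

(define (print text)
  (raise-continuable (make-print-op text)))

;; A handler playing the role of the dynamic top level: it actually
;; performs the effect and returns, which resumes the raising computation.
(define (run-with-io thunk)
  (with-exception-handler
   (lambda (con)
     (if (print-op? con)
         (begin (display (print-op-text con)) (newline))
         (raise-continuable con)))   ; unknown operations bubble further up
   thunk))

(run-with-io (lambda () (print "hello") (print "world")))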

The type system can then collect, for each piece of code, the set of operations this piece of code (potentially) calls. A handler (an exception handler in the above model) then modifies this set because it may cause other conditions to be raised while some are completely caught; think of Java's throws annotations, but with inference. A pure procedure is then one in which all operations are closed over by handlers and which raises no conditions.

This is a powerful model with a lot of use cases. For example, a test framework could take an (impure) procedure doing I/O and wrap it in a handler that turns the impure I/O into purely functional testing code; the resulting procedure can then be evaluated in a pure context. Another use case is algorithms that need locally mutable vector data structures. By confining the impure parts through handlers, the API of the algorithm itself can be pure.
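
Continuing the made-up print-op example from above, here is a rough sketch of the testing use case: a handler that closes over the print operation and records the output texts instead of writing them, so the wrapped procedure can be checked without doing any I/O. (The local set! is confined to collect-output, echoing the point about confining impure parts through handlers.)

;; Run THUNK, interpreting the print operation by accumulating the texts
;; in a local list instead of writing them to a port.
(define (collect-output thunk)
  (let ((out '()))
    (with-exception-handler
     (lambda (con)
       (if (print-op? con)
           (set! out (cons (print-op-text con) out))
           (raise-continuable con)))
     thunk)
    (reverse out)))

;; An impure-looking procedure under test ...
(define (greet name)
  (print (string-append "Hello, " name "!")))

;; ... which can now be checked without any actual output happening:
(equal? (collect-output (lambda () (greet "Ada")))
        '("Hello, Ada!"))                ; => #t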

mnieper commented 3 years ago

I am implementing a minimal Scheme-like prototype that implements algebraic effect handlers in the above sense. We can then take this core to experiment with, use it as a basis for adding type inference and inference of algebraic effects, and see what is possible and what's not. When this is done, we can add whatever syntactic sugar we want.

The prototype basically consists of a single macro:

(compute <program>)

This macro interprets <program> as a program in the prototype language, compiles it, and then runs the computation defined by it.

eraserhd commented 3 years ago

Why raise conditions instead of returning objects representing the operations (which may contain continuations in their slots)? This seems just as easy to “interpret” programmatically.

mnieper commented 3 years ago

I used the condition system of Scheme to illustrate the concept in terms that may be better known in the Scheme world. It is important to use raise-continuable and not raise so that the computation can proceed after the operation has been handled.

In my implementation, I don't use the exception system. In fact (I swept this under the rug in my rough explanation above), algebraic effect handlers can also model delimited continuations, which the exception system cannot.
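
To spell out the raise-continuable point with a tiny, self-contained illustration (again in terms of the exception-system model, not the prototype's code): the handler's return value becomes the value of the raise-continuable expression, so the computation resumes where the operation was raised; with plain raise, returning from the handler would not resume it.

(with-exception-handler
 (lambda (con) 42)                     ; "handle" the operation by answering 42
 (lambda ()
   (+ 1 (raise-continuable 'ask))))    ; => 43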

mnieper commented 3 years ago

I've added my (yet unpolished) prototype to the effects/ directory. For a change, it is written in R6RS but trivially adaptable to R7RS when syntax-case is available.

The library (steme compute) exports the macro compute I talked about above. I'll document the implemented language soon. Until then, an example can be found in example.sps.

lassik commented 3 years ago

There's a research language called Koka in the effects bibliography and it looks very promising:

https://koka-lang.github.io/koka/doc/kokaspec.html#sec-effect-types (emphasis mine):

A novel part about Koka is that it automatically infers all the side effects that occur in a function. The absence of any effect is denoted as total (or <>) and corresponds to pure mathematical functions. If a function can raise an exception the effect is exn, and if a function may not terminate the effect is div (for divergence). The combination of exn and div is pure and corresponds directly to Haskell's notion of purity. Non-deterministic functions get the ndet effect. The ‘worst’ effect is io and means that a program can raise exceptions, not terminate, be non-deterministic, read and write to the heap, and do any input/output operations.

mnieper commented 3 years ago

Already mentioned in #20. :grinning:

lassik commented 3 years ago

OK, you got there first :D

Reading some of the code in https://github.com/koka-lang/koka and the manual, it actually seems to deliver on its promise to a surprising degree. The code is very readable.

Looks like type classes are carried over from Haskell, but not laziness.

It would be instructive to see a full list of the situations where type annotations are necessary.

lassik commented 3 years ago

I tried to translate some Koka code to a Scheme-like syntax and it works out well, modulo the few type annotations (most of which are probably optional).

This language is one more data point that Scheme syntax with a good match and match-lambda, plus named constructors and values for tuples, is a decent analog for FP code.
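
As a rough illustration of the kind of translation I mean (the FP-style source here is generic, not taken from the Koka sources, and match/match-lambda are assumed to be available as in SRFI 204 or Alex Shinn's portable match macro):

;; Koka-ish:  fun sum(xs) { match xs { Nil -> 0; Cons(x, xx) -> x + sum(xx) } }
(define sum
  (match-lambda
    (() 0)
    ((x . rest) (+ x (sum rest)))))

;; Named constructors and values standing in for tuples/records:
(define (make-point x y) (list 'point x y))

(define norm1
  (match-lambda
    (('point x y) (+ (abs x) (abs y)))))

(sum '(1 2 3))             ; => 6
(norm1 (make-point 3 -4))  ; => 7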

mnieper commented 3 years ago

Koka is certainly an interesting advancement of SML. But so are Multicore OCaml and Eff, which also go in the same direction. We have to distill the best of every proposal.

Differences lie in the details, and this is where our work begins. For example, are effects polymorphic or just parametric in types, or do we allow both?

Type systems are quite complicated (think of ML's edge cases, for example, like the value restriction or let- vs. lambda-bound polymorphism), so before we implement the final thing, we have to write down our specification and prove that it does what we want and is sound.

Compared to this, questions about syntax are mostly trivial.

lassik commented 3 years ago

Syntax is important as the main test of usability: as with GC, we can do almost anything in the "engine room" of the type system as long as the user can continue to write code that looks quite like Scheme. It's very reassuring to know that these quite different type systems add up to almost the same surface syntax.

mnieper commented 3 years ago

Don't get me wrong. Getting the final surface syntax right is important in the end. (I like neither Haskell nor ML in this regard, for example.) But this is neither the most important nor the most fundamental step in language design.

lassik commented 3 years ago

Agreed - it's more of a constraint than a step. I also find Scheme far superior to any of the FP syntaxes; once you start writing FP-like code in Scheme-like syntax, it immediately clarifies thinking.

johnwcowan commented 3 years ago

For me, things are quite otherwise: I think in words and translate them to notation on the way out. This is probably why I am not a mathematician. So when someone shows me code, I always ask: "What does it do and what is it for?" thus forcing an answer in words.

mnieper commented 3 years ago

I would say that we basically all think in words (at least I do as well). But I also do think very visually when contemplating abstract problems. And for this, (good) notation really helps, because it is something that can take shape in my mind alongside the words.

(However, I may have misunderstood your point.)

lassik commented 3 years ago

Like Marc, I find that a good notation (such as Lisp/Scheme) makes some seemingly tricky problems almost solve themselves. I also think both visually/spatially and verbally. The majority of my "thinking" is intuitive so it's really more of a feeling than a concrete representation.

lassik commented 3 years ago

Another new language: Scopes

Scopes is a retargetable programming language & infrastructure.

Compile-time resource management using view propagation, an annotation-free variation of borrow checking.