unisonweb / unison

A friendly programming language from the future
https://unison-lang.org

Effect encapsulation + parametricity #852

Open pchiusano opened 5 years ago

pchiusano commented 5 years ago

Some old notes from talking with Jonathan Brachthäuser at ICFP 2018.

He pointed out that if you have a higher-order function that takes an effectful function f as an argument, where f is parametric in its effects, and you have a handle block that handles some concrete effect (say Abort) and also calls f, then if f happens to use that concrete effect, it gets handled by the nearest lexically scoped handler with a matching head. I would say that is unsound.
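To make the problem concrete, here is a rough analogy in Python (not Unison's actual semantics): raising an exception plays the role of performing an Abort effect, and try/except plays the role of a handle block. The names `hof`, `caller`, and `f` are hypothetical.

```python
class Abort(Exception):
    pass

def hof(f):
    # hof installs a handler for Abort around a call to its
    # "effect-polymorphic" argument f.
    try:
        return f()
    except Abort:
        return "handled inside hof"

def caller():
    def f():
        # The caller intends this Abort for its *own* handler below...
        raise Abort()
    try:
        return hof(f)
    except Abort:
        return "handled by caller"

# ...but the nearest enclosing handler wins, so hof intercepts it:
print(caller())  # prints "handled inside hof"
```

The caller's abstraction is broken: hof's internal handler observes and consumes an effect that the caller meant for itself, which is exactly the encapsulation failure described above.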

Jonathan was doing work on a solution where abilities are represented as regular function parameters, and a handle block introduces those parameters. As the abilities are now ordinary values, they can be used to uniquely identify the handler when unwinding the stack. My initial reaction was that this seems ugly for the programmer, but I wonder if it could just be a runtime technique.
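A minimal sketch of that capability-passing idea, again as a Python analogy: each handle block creates a fresh capability value, and performing an effect names the capability, so unwinding finds the *matching* handler rather than the nearest one. All names here (`Cap`, `handle`, `hof`) are illustrative.

```python
class Unwind(Exception):
    def __init__(self, cap, value):
        self.cap = cap
        self.value = value

class Cap:
    # A capability: performing it unwinds to the handler that created it.
    def perform(self, value):
        raise Unwind(self, value)

def handle(body):
    cap = Cap()                 # a unique value identifying this handler
    try:
        return body(cap)
    except Unwind as u:
        if u.cap is cap:        # our own capability: handle it
            return u.value
        raise                   # someone else's: keep unwinding

def hof(f):
    # hof's own handler cannot capture effects performed through a
    # capability it did not create.
    return handle(lambda inner: f())

result = handle(lambda outer: hof(lambda: outer.perform("to outer")))
print(result)  # prints "to outer": f's effect skips hof's handler
```

Compared with the previous analogy, the inner handler no longer intercepts the caller's effect, because the capability value identifies its handler uniquely.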

Some references that he was kind enough to send:

My other half-baked thought was that you could avoid the problem via the following restriction: when typechecking a handle body, remove from the environment all ambient abilities that don't have a concrete head. This is probably too restrictive, but it avoids the unsoundness.

pchiusano commented 5 years ago

Was reminded of this issue thanks to #822 though I think they are separate issues.

pchiusano commented 5 years ago

Also, hi @b-studios (found Jonathan's GH handle)

b-studios commented 5 years ago

Great that you're following up on this! In the meantime this paper has been written. It explains the problem very well and also presents a solution, which operationally is the same as Scala Effekt's.

The current solution in Koka (and Frank, as I understand it) is to infer the Abort effect for the function f as well. This way it is sound again to handle it: "it shows up in the type, so it's okay". In a second step, the languages offer a way to remove the effect again and hide it from the type. An important part of the story is that removing the effect has operational content: it leads to skipping the next handler.
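The "removing the effect has operational content" point can be modeled with a skip count, in the spirit of Koka's inject/mask (this is an illustrative Python model, not Koka's implementation; `Eff`, `perform`, `handle`, and `inject` are made-up names):

```python
class Eff(Exception):
    # An effect operation in flight, carrying a skip count.
    def __init__(self, value, skip=0):
        self.value = value
        self.skip = skip

def perform(value):
    raise Eff(value)

def handle(body, on_eff):
    try:
        return body()
    except Eff as e:
        if e.skip == 0:
            return on_eff(e.value)  # this handler takes it
        e.skip -= 1
        raise                       # skipped: pass to the next handler

def inject(thunk):
    # Hide the effect from the nearest enclosing handler by bumping
    # the skip count of anything performed inside `thunk`.
    try:
        return thunk()
    except Eff as e:
        e.skip += 1
        raise

result = handle(
    lambda: handle(
        lambda: inject(lambda: perform("x")),
        lambda v: ("inner", v)),
    lambda v: ("outer", v))
print(result)  # prints ('outer', 'x'): the inner handler was skipped
```

Removing the effect from the type is not just erasure: it changes which handler runs, which is why it has to be an explicit operation.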

There are multiple possible restrictions to the language that solve the problem, with not having effect polymorphism being the most restrictive one. Using capability-passing style as an intermediate solution only solves one half of the problem. We started describing the problem in Section 5.4 of this paper draft.

pchiusano commented 5 years ago

That’s great, thanks very much for those references @b-studios. Effekt looks very cool btw. Will do some reading and post updates here.

Also, feel free to come by the #contrib channel on our Slack, where contributors including the core team talk about Unison implementation and design work; the invite link is tiny.cc/unisonslack

dolio commented 4 years ago

With parameterized abilities, this actually makes the type system unsound:

```unison
ability C a where
  c : a

h : a -> Request {C a} r -> r
h x = cases
  {r} -> r
  {C.c -> k} -> handle k x with h x

co : a -> '{e} r ->{e} r
co x v = handle !v with h x

v : '{C b} b
v = 'C.c

coerce : a -> b
coerce x = handle co x v with cases
  {r} -> r
  {C.c -> _} -> bug "coerce"

> (coerce "hello") + 0
```

ArturGajowy commented 4 years ago

Hi! I'm a big fan of you folks, and your work! That's why I thought I'd ask if you're aware of this year's ICFP paper that seems relevant:

"Signature restriction for polymorphic algebraic effects"

The authors present a well-known (?) interaction between algebraic effects and polymorphism that makes the mixture unsafe. They also discuss previously known solutions and propose a novel and quite simple one.

https://icfp20.sigplan.org/details/icfp-2020-papers/33/Signature-restriction-for-polymorphic-algebraic-effects

https://youtube.com/watch?t=7100s&v=aNLOi-1ixwM

Let me know if this was helpful! :) Can't wait to see Unison in production! ❤️

dolio commented 3 years ago

I'm going to add some notes regarding these papers for future reference.

The paper "Abstraction-Safe Effect Handlers via Tunneling" notes that this problem is essentially the problem of dynamic scoping. Their solution is essentially type classes/implicits. Oleg long ago remarked that GHC's implicit parameters were not really dynamic scoping, but in this case that is a good thing. :)

So, the solution is to statically resolve which handlers are used for which effects, but do it in a type-directed way so that there is less boilerplate. A handler cannot handle effects from a polymorphic effect, because it does not statically know to pass in the implicit information that would cause that to happen.
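A small Python sketch of why this is nice for implementation: once handler resolution is static, the "implicit" handler just becomes an ordinary extra argument in the intermediate representation, with no dynamic lookup. The `emit` parameter below stands in for the (hypothetical) dictionary of a Stream-like ability.

```python
def count_words(text, emit):
    # "Performing" the effect is a plain call to the passed-in handler;
    # nothing here consults any dynamically scoped handler stack.
    for word in text.split():
        emit(len(word))

collected = []
count_words("effect handlers as arguments", collected.append)
print(collected)  # prints [6, 8, 2, 9]
```

A polymorphic callee cannot intercept effects it was never handed an argument for, which is precisely the static-scoping guarantee described above.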

I think this is actually very nice from an implementation point of view. Right now the runtime keeps track of dynamically scoped references containing closures for the handler code. The references are associated with a canonical numbering of each ability. This alternate methodology doesn't really require any of that, because implicit arguments just become normal arguments in the intermediate representation. Since they are statically resolved, it might even be possible to optimize to the particular handler implementation. The only dynamic component necessary is for the continuations. For instance, if you do:

```unison
f e = handle !e with foo
f '(f '( ...))
```

You technically have two dynamic instances of the same static handler in play (I think). The way this works is that each entrance into a handle creates a new prompt to associate with the statically known handler code. So if you can specialize to the static code, you still pass around these dynamic prompts to handle this situation. But the prompts are just integer tags or similar for marking the stack.
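The fresh-prompt-per-entry idea can be sketched in Python (an analogy, not the actual runtime): the handler code is static, but every entrance into `handle` allocates a new integer prompt, so two nested instances of the same static handler stay distinguishable when unwinding. All names (`Unwind`, `handle`, `foo`, `f`) are illustrative.

```python
import itertools

_fresh = itertools.count()      # global supply of prompt tags

class Unwind(Exception):
    def __init__(self, prompt, value):
        self.prompt = prompt
        self.value = value

def handle(body, handler):
    prompt = next(_fresh)       # new prompt for this dynamic instance
    try:
        return body(prompt)
    except Unwind as u:
        if u.prompt == prompt:
            return handler(u.value)
        raise                   # not ours: keep unwinding

def foo(v):
    return ("foo", v)

def f(body):                    # same static handler `foo` each time
    return handle(body, foo)

def outer_body(p_out):
    def inner_body(p_in):
        # Abort to the *outer* dynamic instance of the same handler.
        raise Unwind(p_out, "x")
    return f(inner_body)

print(f(outer_body))  # prints ('foo', 'x') via the outer prompt
```

The handler body `foo` could be fully specialized at compile time; only the integer prompt is dynamic, matching the "prompts are just integer tags for marking the stack" description.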

It's not clear to me that anything of real value is lost in this sort of approach. Of course it rules out certain 'genuine' dynamic scoping uses, but I'm pretty on board with the critique that dynamic scoping is anti-modular, etc.

b-studios commented 3 years ago

Yes, lexical effect handling very nicely avoids this problem. You are probably also interested in this paper by Biernacki et al. and my own work on the Effekt language, where we make heavy use of this.

dolio commented 3 years ago

My understanding of the Koka paper is: it works like Frank but with explicit polymorphism.

The way this issue is solved there is that rows must always agree, up to scoped reordering. So you cannot write (still using Unison syntax):

```unison
f : (a ->{e} b) -> ...
f g = handle g ... with cases {E.e1 x y z -> k} -> ...
```

because the effect row where g is called has an E in it, which does not match the plain e. The rows are the scoped label type, and allow you to have multiple occurrences of an effect that do not permute past one another. They use this to have a manual subtype promotion, where you could write:

```unison
f : (a ->{e} b) -> ...
f g = handle inject (g ...) with cases {E.e1 x y z -> k} -> ...
```

where the inject adds a vacuous E field to the row. This is analogous to the new features in Frank that fix the dual of this soundness problem: their functions were sound, but you basically had to explicitly mention every effect used in any handler used in your implementation. Frank is a little more flexible with its ideas about duplicating handlers, while Koka just allows skipping; and Frank only mentions row extensions, while Koka talks about rows with variables. But they are otherwise comparable.

I think this is effectively still static scoping of effect handlers. You cannot handle effects in something unless the ability explicitly occurs in a row, which means you would know to pass it a 'dictionary' in the type-class-like approach. There are a couple of disadvantages I can see, though.

  1. Koka's approach doesn't work nicely with curried functions. We've been trying to get rid of some unsatisfactory inferred types in #1185, but in Koka I think they are the only valid type. It makes inference very easy, but the inferred types aren't necessarily what you want (unless you encourage tupled functions).
  2. The manual fiddling with rows seems clunky. There's a long section in the Koka paper about how you can do multiple state threads using the scoped labels and inject. But that is basically de Bruijn indexing (i.e. not good for humans). Then they show how you can use named arguments instead with "resources," but it somehow becomes ill scoped instead. Given that this actually is statically resolved scoping (I think), it seems like the 'type class' like approach is nicer, since it is a known way of automatically deducing the inject uses. And for situations where multiple copies of an effect are in play, it seems like 'named dictionaries' would be a better choice than de Bruijn indexing.

The Koka paper has a lot of stuff on efficient implementation of all this, which is worth studying. But I think a different presentation of the functionality would be preferable.

aryairani commented 4 weeks ago

My failing test from today; uses base:

```unison
flatCount : (a -> [b]) -> [a] -> Nat
flatCount f as =
  go = cases
    [] -> emit 0
    a +: as ->
      emit (size (f a))
      go as
  at1 <| handle go as with streamSum 0

streamSum sum = cases
  { Stream.emit count -> continue } ->
    handle continue ()
    with streamSum (sum Nat.+ count)
  { x } -> (sum, x)

accumulateStrings strings = cases
  { Stream.emit s -> continue } ->
    handle continue ()
    with accumulateStrings (s List.+: strings)
  { x } -> (strings, x)

test> streamSum.test = verify do
  ensureEqual
    handle
      emit 1
      emit 2
      "Hello"
    with streamSum 0
    (3, "Hello")

test> flatCount.test = verify do
  ensureEqual
    (flatCount (x -> of x x) [1,2,3,4,5])
    15

---
test> flatCount.test2 = verify do
  ensureEqual
    handle
      flatCount
        (x -> let
          emit (Nat.toText x)
          of x x)
        [1,2,3,4,5]
    with accumulateStrings []
    (["1", "2", "3", "4", "5"], 15)
```

The first two tests pass; the last one is interrupted with:

```
dumpDataNoTag: bad closure: Foreign (Wrap ReferenceBuiltin "Text" "1")
expected type: ReferenceBuiltin "Nat"
```