Alternatives:
1. Named effects <-> row effects conversion
named effect eq<a>
  fun eq(v1: a, v2: a): bool

fun elem(s: list<a>, e: a): eq<a> bool
  match s
    Nil -> False
    // Require explicit passing to recursive calls?
    Cons(x, xs) -> if eq(x, e) then True else elem(xs, e)

fun findMatches(xs: set<a>, ys: set<b>, e1: a, e2: b): <eq<a>@aeq, eq<b>@beq> bool
  xs.any(fn(ae) aeq.eq(ae, e1)) && ys.any(fn(be) beq.eq(be, e2))

val eqInt = handler ...
val eqList = handler ...

fun main()
  findMatches<aeq:eqInt, beq:eqList>([1,2,3], [[1],[2],[3]], 1, [1])
This has the advantage of propagating along the effect type rather than being explicitly passed down as a value. Additionally, some top-level handlers could be designated as implicit or default, and would be supplied once inference determines they are necessary at certain points: when unification fails due to a named effect, the compiler could start this sort of resolution process. In contrast to Haskell's typeclasses, this would allow overriding a typeclass at a particular call site. Also in contrast to Haskell's typeclasses, these handlers don't follow the type, so any use of overriding could end up with discrepancies. It is unclear whether allowing discrepancies could ever be useful, but I assume not. The existential default methods approach has a similar issue, since it allows explicit instantiations that don't follow types. An analysis or linter could verify that discrepancies are handled the same everywhere the value could flow.
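As a point of comparison, here is a minimal Scala 3 sketch of the call-site-overriding behavior described above (all names, such as `Eq` and `ci`, are hypothetical illustrations, not part of the proposal): the compiler supplies a default instance, but any particular call can substitute its own.

```scala
trait Eq[A]:
  def eq(x: A, y: A): Boolean

// Default instance, analogous to a designated top-level default handler.
given Eq[String] with
  def eq(x: String, y: String) = x == y

def elem[A](xs: List[A], x: A)(using e: Eq[A]): Boolean =
  xs.exists(e.eq(_, x))

@main def demo1(): Unit =
  println(elem(List("a", "b"), "A"))            // false: default instance
  val ci: Eq[String] = (x, y) => x.equalsIgnoreCase(y)
  println(elem(List("a", "b"), "A")(using ci))  // true: overridden at this call site
```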
2. It is unclear whether it is preferable for objects such as sets to include a top-level field in their data structure that just contains the needed methods -- more similar to OCaml's explicit style. A more subtle approach could be as follows:
struct set<a>
  l: list<a>
  eq: infer (a, a) -> bool
Here the set could be explicitly instantiated with a separate equals method, but by default construction searches for a method called eq defined for this type (as in existential default methods).
Additionally, when destructuring the set, it would be interesting to add something tied to the type variable, such that any method destructuring the set's inner list implicitly has access to eq as well.
This feels somewhat like codata? Really it would be ideal for a data structure to include both data and codata: here the structure of the list is the data of the set, but the eq method is codata for working with the data structure. I don't know of much research about mixing data and codata. Essentially the idea is that data is what you construct/destruct, while the codata must be implicitly available when constructing a new instance, and is added back to the implicit context when destructuring the data.
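A rough Scala 3 analogue of this struct idea, assuming that storing the instance as an ordinary field captures the intent (all names, such as `MySet` and `eqA`, are hypothetical): the stored instance is filled in from the implicit scope at construction, and destructuring re-exposes it implicitly.

```scala
trait Eq[A]:
  def eq(x: A, y: A): Boolean

given Eq[Int] with
  def eq(x: Int, y: Int) = x == y

// The list is the data; the stored Eq instance is the codata,
// captured from the implicit scope when the set is constructed.
final class MySet[A](val l: List[A])(using val eqA: Eq[A])

def contains[A](xs: List[A], x: A)(using e: Eq[A]): Boolean =
  xs.exists(e.eq(_, x))

// Destructuring the set re-exposes the stored instance implicitly.
def member[A](s: MySet[A], x: A): Boolean =
  given Eq[A] = s.eqA
  contains(s.l, x)

@main def demo2(): Unit =
  val s = MySet(List(1, 2, 3))   // eqA filled in from the given Eq[Int]
  println(member(s, 2))          // true
```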
Note that I just found out that Scala does something very similar in its approach to overloading/typeclasses:
Type classes as objects and implicits: https://dl.acm.org/doi/abs/10.1145/1932682.1869489
Here is some prior related work:
Implicit Parameters: Dynamic Scoping with Static Types: https://dl.acm.org/doi/abs/10.1145/325694.325708
The major difference between Implicit Parameters and Scala's approach is that Implicit Parameters assumes the name internal to the function for the implicit parameter is the same as the name it propagates outside the function. Scala requires naming the implicit parameter, and finds it at the call site based on its type instead of its name.
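To make Scala's side of the contrast concrete, a tiny Scala 3 sketch (hypothetical names): the `using` parameter's name is purely internal to the function, and resolution at the call site goes by type alone.

```scala
trait Show[A]:
  def show(a: A): String

given Show[Int] with
  def show(a: Int) = a.toString

// The name `s` is purely local to `render`; at each call site the
// compiler searches for a value of type Show[A], not for a name.
def render[A](a: A)(using s: Show[A]): String = s.show(a)

@main def demo3(): Unit =
  println(render(42))   // resolved by type: the given Show[Int]
```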
Mine is somewhere between the two: the function signature names both the identifier used internally and the name used to look up a conforming variable of that type in the calling context. This avoids the issue in section 5.3 of the Implicit Parameters paper by allowing you to use two different names for the overloaded function, relying on Koka's existing overloading machinery to choose the right (type-correct) implementation at the call site. Additionally, I would discharge/apply/resolve the implicit rather than propagating the implicit constraint upwards. We already have effect handlers that propagate upwards, and if you do want it resolved higher up using implicits, you can always add an explicit implicit parameter to your function and pass it along.
The main difference from Scala is that in the above explanation I do not require objects or specified trait interfaces. Also, the paper mentions a lot about OO, though the core idea has nothing to do with OO; the Implicit Parameters paper is similar in that respect. In contrast with Scala, they infer implicits for top-level signatures. (I'm not sure what Scala does -- do all top-level functions require types? It probably attempts to discharge/resolve the implicit at that point rather than propagating it up.) Similar to Implicit Parameters, my approach relies on the naming of functions (as opposed to Scala's naming of a trait object type). This means you can have two implicit parameters with the same type but different names, which makes the approach more general than functions associated with a type.

In contrast with the Implicit Parameters approach, I would apply implicits as soon as I can discharge them, instead of propagating them upwards. So a usage of an implicit that does `(==)` on two lists would discharge that implicit using the list equality (defined in core, hopefully), which in turn relies on an implicit `(==)` that works on the element type of the list. If that type is a bound type variable (generic), then we propagate the implicit; if the type is concrete, we attempt to discharge the implicit, and report an error if no implicit is found.

In general I think Scala's approach does well, and probably just needs a bit of tweaking to work with Koka. A few more comments: section 5.3 of the Implicit Parameters paper mentions that their approach doesn't work at multiple types. This is not a problem in my approach, because I don't plan on inferring implicits in function signatures: implicits are explicit in function signatures and resolved at call sites based on the call site's scope of variables. That said, inferring implicits could still be within the realm of possibility (I have some ideas).
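The discharge-or-propagate behavior just described is close to how conditional instances chain in Scala 3; a minimal sketch with hypothetical instance names: the list instance propagates its element constraint while the element type is still generic, and the whole chain is discharged once the type is concrete.

```scala
trait Eq[A]:
  def eq(x: A, y: A): Boolean

given intEq: Eq[Int] with
  def eq(x: Int, y: Int) = x == y

// Element type still generic: the Eq[A] requirement is propagated.
given listEq[A](using e: Eq[A]): Eq[List[A]] with
  def eq(xs: List[A], ys: List[A]) = xs.corresponds(ys)(e.eq)

@main def demo4(): Unit =
  // Element type concrete: the constraint is discharged here,
  // materialized as the term listEq(using intEq).
  println(summon[Eq[List[Int]]].eq(List(1, 2), List(1, 2)))   // true
```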
Some background:
I've thought a lot about how Koka could introduce ad-hoc polymorphism. One approach is to relax some constraints around algebraic effects so that labels include their type parameters in the comparison for the swap rule. That would necessitate allowing a merge rule and an expand rule, which would act on rows like this:

<eff,eff> -> <eff>
<eff> -> <eff,eff>

However, this undermines the ability to have duplicate labels actually mean something, which causes issues with masking etc. (Maybe this could be overcome with explicit copy/merge operators, but in general that might undermine some reasoning.)
The two approaches to ad-hoc polymorphism in mainstream functional languages that I know of are Haskell's typeclasses and OCaml's modules; there are others, I am sure.
Both of those work by defining an interface up front. Haskell then allows you to restrict methods to existential types that have instances of that interface, while OCaml lets you instantiate the interface with the required existential methods and type.
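Both styles can be imitated in Scala 3, which makes the contrast visible in one place; a sketch with hypothetical names: the typeclass style constrains a type variable and leaves the instance implicit, while the module style passes the dictionary of operations explicitly, much as OCaml does with first-class modules.

```scala
trait Ord[A]:
  def lte(x: A, y: A): Boolean

given Ord[Int] with
  def lte(x: Int, y: Int) = x <= y

// Typeclass style: constrain the type variable; the instance is implicit.
def maxTc[A: Ord](x: A, y: A): A =
  if summon[Ord[A]].lte(x, y) then y else x

// Module style: the dictionary is an ordinary value, passed explicitly.
def maxMod[A](ord: Ord[A])(x: A, y: A): A =
  if ord.lte(x, y) then y else x

@main def demo5(): Unit =
  println(maxTc(1, 2))                      // 2, instance found implicitly
  println(maxMod(summon[Ord[Int]])(1, 2))   // 2, instance passed by hand
```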
This Idea
The main idea I am exploring here is not requiring an interface definition up front. This allows much more extensible and modular ad-hoc polymorphism than typeclasses or modules.
Default parameters with an existential type parameter do not have to resolve to a method until the point of unification. At that point of unification/instantiation, the default method is resolved in the current scope and explicitly passed, with any default methods it requires resolved as well (which requires an eta-expansion).
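That eta-expansion step has a close analogue in how Scala 3 materializes a conditional instance; a sketch with hypothetical names: resolution is deferred until unification fixes the type at the call site, and the resolved instance is then explicitly applied to its own resolved requirements.

```scala
trait Eq[A]:
  def eq(x: A, y: A): Boolean

given intEq: Eq[Int] with
  def eq(x: Int, y: Int) = x == y

given listEq[A](using e: Eq[A]): Eq[List[A]] with
  def eq(xs: List[A], ys: List[A]) = xs.corresponds(ys)(e.eq)

// `elem` leaves A open, so its Eq argument cannot be resolved here;
// resolution is deferred to each call site.
def elem[A](xs: List[A], x: A)(using e: Eq[A]): Boolean =
  xs.exists(e.eq(_, x))

@main def demo6(): Unit =
  // Unification fixes A = List[Int]; the compiler resolves and applies
  // the instance explicitly, building the term listEq(using intEq).
  println(elem(List(List(1), List(2)), List(2)))   // true
```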