Closed: nicopap closed this issue 6 years ago
Hmm... I don't think that's a bug, though it could perhaps be documented more fully.
The presentation of type inference I'm most familiar with, from Pierce (2002), would also give `some_type.apply(&ctx) == "bool -> t0"` for the example above: the version of context application discussed there doesn't recurse on the substituted values.
This issue is instead managed by composing contexts as they are formed. I don't have ready access to it right now, but I'd recommend taking a look at Pierce (2002) for more details on how that works.
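To make "composing contexts" concrete, here is a minimal standalone sketch of substitution composition in the style of Pierce (2002). This is a hypothetical illustration only, not polytype's API; the toy `Ty` enum and the `apply`/`compose` helpers are assumptions of the sketch:

```rust
use std::collections::HashMap;

// Toy type representation, just for this sketch.
#[derive(Clone, Debug)]
enum Ty {
    Var(usize),
    Con(&'static str, Vec<Ty>),
}

// Apply a substitution to a type. As in the TAPL presentation, this does
// not recurse into the substituted values themselves.
fn apply(s: &HashMap<usize, Ty>, t: &Ty) -> Ty {
    match t {
        Ty::Var(v) => s.get(v).cloned().unwrap_or(Ty::Var(*v)),
        Ty::Con(name, args) => Ty::Con(*name, args.iter().map(|a| apply(s, a)).collect()),
    }
}

// Compose substitutions: (s2 . s1) means "apply s1 first, then s2".
fn compose(s2: &HashMap<usize, Ty>, s1: &HashMap<usize, Ty>) -> HashMap<usize, Ty> {
    // Push s2 through the range of s1...
    let mut out: HashMap<usize, Ty> =
        s1.iter().map(|(v, t)| (*v, apply(s2, t))).collect();
    // ...and keep any bindings of s2 that s1 does not already bind.
    for (v, t) in s2 {
        out.entry(*v).or_insert_with(|| t.clone());
    }
    out
}

fn main() {
    // Using the substitutions from the issue: s1 = { 1: bool -> t0 }, s2 = { 0: int }.
    let arrow = Ty::Con("->", vec![Ty::Con("bool", vec![]), Ty::Var(0)]);
    let s1 = HashMap::from([(1, arrow)]);
    let s2 = HashMap::from([(0, Ty::Con("int", vec![]))]);
    // Binds 1 to bool -> int, so later lookups need no repeated application.
    println!("{:?}", compose(&s2, &s1));
}
```

Because each new substitution is pushed through the range of the existing one as it is recorded, a single application suffices later.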
I see two paths forward:

1. Implement a `pub fn compose(&mut self, other: &Context)` for `Context`s.
2. Add documentation discussing this approach, perhaps with an example type inference algorithm.

@lucasem - what do you think?
I agree with Josh that this isn't actually a bug; we should find a way to better explain usage in the docs. Assignments in the context don't propagate when using `apply`, and they shouldn't need to.
This issue is instead managed by composing contexts as they are formed.
Essentially, you should keep types up to date with the context. So before you extend the context with a type, apply the current context to that type:
```rust
// approach 0.5 — don't actually use this!
let mut ctx = Context::default();
ctx.extend(0, tp!(int));
let t = tp!(@arrow[tp!(bool), tp!(0)]);
ctx.extend(1, t.apply(&ctx));
tp!(1).apply(&ctx); // bool → int
```
However, it should be noted (and should be better documented) that you should only use `ctx.extend` on type variables that have already been introduced to the context, via `ctx.new_variable` or instantiation of a `TypeSchema`. This becomes more apparent as you try to do more complicated things with a context (like when you work with an existing one rather than creating a new one).
```rust
// approach 1
let mut ctx = Context::default();
let v0 = ctx.new_variable();
ctx.unify(&v0, &tp!(int)).unwrap();
let t = tp!(@arrow[tp!(bool), v0]);
let v1 = ctx.new_variable();
ctx.unify(&v1, &t.apply(&ctx)).unwrap();
let t = v1.apply(&ctx); // bool → int
```
Depending on where you're trying to use this, it might make more sense to take a different approach using polytypes (`TypeSchema`s). The main distinction between these two approaches is just your intention; the behavior is the same:
```rust
// approach 2
let ts = ptp!(0; @arrow[tp!(bool), tp!(0)]); // ∀t0. bool → t0
let mut ctx = Context::default();
let t = ts.instantiate(&mut ctx); // bool → t0
ctx.unify(t.returns().unwrap(), &tp!(int)).unwrap(); // same behavior as ctx.extend(0, tp!(int))
let t = t.apply(&ctx); // bool → int (make t up to date with our new context)
```
I hope this clears things up @nicopap! It sounds like we need to figure out how to better document this intended usage (which corresponds to the material in Pierce (2002)).
Thank you for the quick feedback and thorough explanation!
I noticed the bug while testing my implementation of Algorithm J, as presented in the Hindley-Milner type system Wikipedia article. I had the intuition that I was doing something wrong, because it generates a lot of substitutions in the context. In hindsight, it seems I rely on type substitutions where I should rely on the context's internal mutation.
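For reference, here is roughly the mutation-based style I understand is intended, as a minimal sketch. The toy `Expr` type and the `infer` function are illustrative assumptions, not anything from polytype; only `Context`, `unify`, `apply`, `UnificationError`, and the `tp!` macro are from the library:

```rust
use polytype::{tp, Context, Type, UnificationError};

// Illustrative toy expression type; not from polytype or my original code.
enum Expr {
    Bool(bool),
    If(Box<Expr>, Box<Expr>, Box<Expr>),
}

// Algorithm J style: let the context mutate under unification instead of
// composing explicit substitutions by hand.
fn infer(e: &Expr, ctx: &mut Context) -> Result<Type, UnificationError> {
    match e {
        Expr::Bool(_) => Ok(tp!(bool)),
        Expr::If(c, t, f) => {
            let tc = infer(c, ctx)?;
            ctx.unify(&tc, &tp!(bool))?; // condition must be bool
            let tt = infer(t, ctx)?;
            let tf = infer(f, ctx)?;
            ctx.unify(&tt, &tf)?; // branches must agree
            Ok(tt.apply(ctx)) // read the result back through the context
        }
    }
}
```

Each `unify` records its substitutions inside the context, so a final `apply` yields the resolved type without any manual bookkeeping.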
I used polytype to implement type inference in a toy project. While testing my code, I quickly ran into unexpected behavior.
In short, when a `ctx: Context` holds multiple substitutions, the result of `some_type.apply(&ctx)` might contain free variables that can still be substituted. For example, if `ctx.substitution()` is `{ 0: int ; 1: bool -> t0 }` and `some_type` is `t1`, I get `some_type.apply(&ctx) == "bool -> t0"` while I expected `some_type.apply(&ctx) == "bool -> int"`.

A runnable code version:
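(The snippet itself wasn't preserved in this thread; the following is a best-guess reconstruction of the behavior described above, assuming `Context::extend`, the `tp!` macro, and `Type`'s `Display` impl.)

```rust
// Best-guess reconstruction; the original snippet was not preserved.
use polytype::{tp, Context};

fn main() {
    let mut ctx = Context::default();
    ctx.extend(0, tp!(int));                       // substitution: { 0: int }
    ctx.extend(1, tp!(@arrow[tp!(bool), tp!(0)])); // substitution: { ..., 1: bool -> t0 }
    let some_type = tp!(1);
    // Prints "bool → t0", not the expected "bool → int".
    println!("{}", some_type.apply(&ctx));
}
```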
Now, I'm not sure whether I'm misusing the library (either the function I'm describing is not `apply`, or there are good reasons for `apply` to behave this way). But in case this is a bug, I wanted to bring it up here.