atennapel closed this issue 4 years ago.
A common solution is to change the type of spines:
data Spine = SNil | SApp Spine ~Val Icit | SBoolRec Spine ~Val ~Val
When trying to solve a meta, we cannot proceed if the spine contains SBoolRec.
In general, a spine is a sequence of eliminations and projections. If we only have functions, then applications are the only possible (projection) components. With dependent pairs we also get field projections, with inductive types we get eliminators. The main goal of structuring neutral values this way is to be able to immediately tell whether computation is stuck because of a meta or a bound variable.
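As a concrete illustration of this structuring, here is a minimal self-contained sketch (hypothetical names, not the repo's actual types): a neutral value is a blocking head plus a spine of eliminations, so the head alone tells us why computation is stuck.

```haskell
-- Hypothetical sketch: a neutral value pairs a blocking head with a spine
-- of eliminations (applications, projections, eliminators).
data Head  = HVar Int | HMeta Int deriving (Eq, Show)  -- bound var or unsolved meta
data Spine = SNil                     -- empty spine
           | SApp Spine Val           -- function application
           | SFst Spine | SSnd Spine  -- Sigma projections
           | SBoolRec Spine Val Val   -- boolean eliminator
           deriving (Eq, Show)
data Val   = VNe Head Spine | VTrue | VFalse deriving (Eq, Show)

-- The payoff: one pattern match on the head tells us whether the value
-- is stuck on an unsolved metavariable or on a bound variable.
blockedOnMeta :: Val -> Bool
blockedOnMeta (VNe (HMeta _) _) = True
blockedOnMeta _                 = False
```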
Yes, that makes total sense and keeps the head simple.
I wanted to add induction over church-encoded data as a primitive. For example:
indUnit {P} punit x ~> x {P x} punit
For induction over the church-encoded unit type: x : {t : *} -> t -> t.
What can happen here is that we cannot reduce the induction (just like if) because x is a variable. But what can also happen is that you eta-expand the variable of Unit type: \{t : *}. x {t}. Now you also should not be allowed to reduce, but then the head is a lambda, which should not be allowed.
Sorry for the rambling but it seems like you cannot add induction over church encoded data as a primitive construct. Do you have any thoughts on that?
I don't know any nice solution for induction over church-coded data. You'd need to implement something like "self-types" or Cedille but I'm not very fond of those, because as far as I am aware, their type checking is formally undecidable, and their metatheory is not very developed.
Ok thanks. Yes I tried implementing self types but both those and Cedille work in a Curry-style type system, which I found hard to implement elaboration for (in the style of this repo). I think I should just give up on it and implement something like W-types, which means adding unit, boolean, Sigmas, W-types and induction constructs for all of those. I was hoping lambda-encoded data would mean a simple way to have datatypes, but the lack of induction is a deal-breaker. Dependent types without induction seem pretty useless.
Yeah, ergonomic implementation of inductive types is not so simple. W-type is very expressive, but IMO it only works as a reasonable generic representation if we have function extensionality. E.g., if we have Nat defined using W, then there are infinitely many definitionally different representations of "zero", and we can only prove them to be propositionally equal if we have funext.
IMO it's reasonable to bite the bullet, and introduce inductive type declarations in a distinguished top scope the same way as Coq and Agda does.
Yes, that is what I was afraid of. It looks like inductive type declarations will be easier to figure out than trying to continue with lambda encodings or W-types. Thanks for the advice!
I'm wondering how to extend data Spine with a binary operation on booleans, say SOr. The problem is that SOr requires both operands to be boolean values to reduce further, or at least one of the operands must be a neutral to form a neutral expression. How do we encode this invariant as an algebraic data type?
The naive solution is just to encode three variants of SOr: a neutral expression and a value, a value and a neutral expression, and two neutral expressions, but that doesn't seem like an elegant approach to me. Moreover, it doesn't scale well to operations of higher arity.
How about:
data Spine = SNil | SOr Spine Val -- assuming a typed language, we know this Val is either true, false or neutral
vor : Val -> Val -> Val
vor VTrue _ = VTrue
vor VFalse v = v
vor (VNe hd sp) v = VNe hd (SOr sp v)
You can also add the cases:
vor _ VTrue = VTrue
vor v VFalse = v
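Putting the fragments above together, a self-contained, testable sketch might look like this (hypothetical minimal Val type, with an Int as a stand-in for the head):

```haskell
-- Hypothetical minimal types: Int stands in for the neutral head.
data Spine = SNil | SOr Spine Val deriving (Eq, Show)
data Val   = VTrue | VFalse | VNe Int Spine deriving (Eq, Show)

-- Reduce 'or' when the left operand is canonical; push onto the spine
-- when the left operand is neutral. (The symmetric right-operand cases
-- from above could be inserted before the neutral clause.)
vor :: Val -> Val -> Val
vor VTrue  _      = VTrue
vor VFalse v      = v
vor (VNe hd sp) v = VNe hd (SOr sp v)
```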
Ok, with SOr, it's possible to avoid all the combinations of neutrals and values in the spine constructor due to the semantics of logical OR. I was just thinking about the general case where each value in a sequence of values can be neutral, but not all at the same time. As an example, consider SAdd, the addition operation on natural numbers.
I feel I might not be understanding you. Natural number addition can be done similarly to boolean or:
data Spine = SNil | SAdd Spine Val
vadd : Val -> Val -> Val
vadd VZ v = v
vadd (VS n) v = VS (vadd n v)
vadd (VNe hd sp) v = VNe hd (SAdd sp v)
And you can add:
vadd v VZ = v
vadd v (VS n) = VS (vadd v n)
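For completeness, the clauses above assembled into a self-contained sketch (hypothetical minimal Val type, Int as a stand-in for the head; clause order matters, as the neutral clause must come last):

```haskell
data Spine = SNil | SAdd Spine Val deriving (Eq, Show)
data Val   = VZ | VS Val | VNe Int Spine deriving (Eq, Show)

vadd :: Val -> Val -> Val
vadd VZ v          = v
vadd (VS n) v      = VS (vadd n v)
vadd v VZ          = v                -- right-operand cases: first arg is
vadd v (VS n)      = VS (vadd v n)    -- necessarily neutral by this point
vadd (VNe hd sp) v = VNe hd (SAdd sp v)
```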
What do you mean with "each value of a sequence of values can be neutral but not all at the same time"? The arguments to add can all be neutral or values or combinations thereof.
Consider how to implement SAdd not in terms of Peano arithmetic but using the metalanguage's numbers. You would have three spine constructors: SAdd Spine Int, SAdd Int Spine, and SAdd Spine Spine, since Haskell's + requires both operands to be Haskell numbers.
UPD: I meant "at least one operand must be a neutral to block the computation and form a Spine, but if all operands are literals at the same time, the computation may proceed". Sorry for the vagueness.
If we only have primitive eliminators then all computation can block on at most a single neutral value.
With operations which can block on multiple operands at the same time, indeed we don't get linear spines. One option is to linearize this by choosing the order of scrutinizing arguments. Here we'd have SAddL Spine Val and SAddR Int Spine:
add :: Val -> Val -> Val
add x y = case x of
  VInt x -> case y of
    VInt y    -> VInt (x + y)
    VNe hd sp -> VNe hd (SAddR x sp)
  VNe hd sp -> VNe hd (SAddL sp y)
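The same linearization as a self-contained, testable sketch (hypothetical Val type, Int as a stand-in for heads, and distinct argument names to avoid the shadowing above):

```haskell
data Spine = SNil
           | SAddL Spine Val  -- blocked on the left operand; right not yet inspected
           | SAddR Int Spine  -- left operand already a literal; blocked on the right
           deriving (Eq, Show)
data Val = VInt Int | VNe Int Spine deriving (Eq, Show)

-- Scrutinize left-to-right: only look at the right operand once the left
-- is known to be a literal.
add :: Val -> Val -> Val
add (VInt x) (VInt y)   = VInt (x + y)
add (VInt x) (VNe h sp) = VNe h (SAddR x sp)
add (VNe h sp) y        = VNe h (SAddL sp y)
```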
Alternatively, we can make the representation a bit looser by putting Add in the heads, e.g. by adding an Add Val Val to flexible heads with the invariant that at least one of the values must be flexible, and another Add Val Val to rigid heads with the invariant that at least one of the values must be rigid. This is a bit uglier because we have to keep the invariants in mind. But it's fairly convenient and flexible, and we don't necessarily have to linearize. This is what I generally use in my setoid type theory implementation.
Indeed, stopping scrutinizing on the first spine works and scales better; e.g., if there are three operands, the spine data constructors might look like this:
int int spine
int spine val
spine val val
In the case of four operands:
int int int spine
int int spine val
int spine val val
spine val val val
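The three-operand case above, as a hypothetical sketch under the same assumptions as before (metalanguage Ints for literals, Int as a stand-in for heads):

```haskell
data Spine = SNil
           | SAdd3a Spine Val Val  -- blocked on the first operand
           | SAdd3b Int Spine Val  -- first is a literal, blocked on the second
           | SAdd3c Int Int Spine  -- first two are literals, blocked on the third
           deriving (Eq, Show)
data Val = VInt Int | VNe Int Spine deriving (Eq, Show)

-- Scrutinize operands in order, stopping at the first neutral one.
add3 :: Val -> Val -> Val -> Val
add3 (VInt x) (VInt y) (VInt z)   = VInt (x + y + z)
add3 (VInt x) (VInt y) (VNe h sp) = VNe h (SAdd3c x y sp)
add3 (VInt x) (VNe h sp) z        = VNe h (SAdd3b x sp z)
add3 (VNe h sp) y z               = VNe h (SAdd3a sp y z)
```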
I'm not aware of "flexible" and "rigid" heads, so can't comment, but thanks for pointing this out.
The flexible and rigid heads can be seen here: https://github.com/AndrasKovacs/elaboration-zoo/blob/4ecbe7337c55b8882e101f37028fed4b43583aa8/03-holes/Main.hs#L287 (in this code the head of a VFlex is always a MetaVar and the head of a VRigid is always a Lvl).
How would you add booleans in the semantic domain? Let's say we'd like to add:
Now the if construct cannot always be reduced, so it could be the head of an application in the domain. That means that the head can no longer be easily checked for equality, but has to be unified. For example: \b. (if b then id else id) False. Evaluating this expression will have the (if b then id else id) as the head of an application in the domain. Would this be the way to do this, or is there an easier way to implement eliminators such as these?