jonsterling opened this issue 8 years ago
By the way, to characterize the patterns on ABTs, we should just be able to copy a simplified version of the Δ;Ψ ||- p::A judgment from the Noam/Dan/Bob focusing stuff, where the types A are just either valences or lists of valences. The above restriction on types should suffice to stop everything at the second order, at which point a substitution θ :: Δ should be admissible given derivations of Δ; · ||- p::A and M::A, such that p θ == M.
@jozefg Any thoughts?
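To make the admissibility property concrete, here is a toy first-order matcher (a Python sketch, not the library's SML; the tagged-tuple term encoding and all names are hypothetical): matching a pattern p against a term M yields a substitution θ such that applying θ to p gives back M.

```python
# Illustrative sketch only. Terms/patterns are tagged tuples:
# ("op", name, [subterms]) for operators, ("pvar", name) for pattern
# variables. Encoding and helper names are hypothetical.

def match(p, m, theta=None):
    """Walk pattern and term together, accumulating a substitution."""
    theta = {} if theta is None else theta
    if p[0] == "pvar":
        theta[p[1]] = m
        return theta
    _, op, args = p
    if m[0] != "op" or m[1] != op or len(args) != len(m[2]):
        raise ValueError("match failure")
    for pa, ma in zip(args, m[2]):
        match(pa, ma, theta)
    return theta

def subst(theta, p):
    """Apply a substitution to a pattern, yielding a term."""
    if p[0] == "pvar":
        return theta[p[1]]
    return ("op", p[1], [subst(theta, a) for a in p[2]])

# Example: matching pair(x; y) against pair(unit; unit), then checking
# the round trip p θ == M.
pat = ("op", "pair", [("pvar", "x"), ("pvar", "y")])
tm = ("op", "pair", [("op", "unit", []), ("op", "unit", [])])
theta = match(pat, tm)
assert subst(theta, pat) == tm
```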
I've started making some notes/plans here: https://github.com/jonsterling/sml-abt-patterns/blob/master/doc/doc.pdf
"admits the postsupposition \Psi' ctx" ought to have two primes, no? (In the para beginning "The context concatenation judgment")
@jonsterling FWIW, there may be some overlap here with some of the stuff I've been working on lately with explicit substitutions and abstract machines and that multicategory tech in general.
I started implementing a sequent calculus style abstract machine here to generalize the Krivine machine thing I had, with a better story for delimited control, patterns, and more well-behaved explicit substitutions (there are some issues with termination in certain cases that require fairly ad hoc workarounds in the original machine). I was hoping to build it up into something fairly robust, so maybe parts of it could somehow be used here or could be ported/adapted to a JonPRL backend.
In general, some technical issues would still need to be worked out, but I think if you characterize your calculus in terms of containers or polynomials and well-founded relations, there should be a very nice way to explain patterns and substitutions at the level of an efficient sequent calculus style abstract machine like this. For a list multicategory style calculus, the relation could be the reflexive transitive closure (Data.Star) version of `<` over natural numbers. Also, if you generalize the pattern and substitution story enough in this direction, I think some of the opetopic type theory ideas start to become relevant…
This approach may be more complicated than you want for the moment, but perhaps it's something to consider further down the line.
@jonsterling OK, this looks like a much cleaner way to handle things! So we could handle the implementation in three passes. `computeChart` doesn't compute this chart thing anymore; instead we replace it with something that walks the pattern and the supplied term, checks that we actually have a valid pattern, and simultaneously constructs the substitution. We could do this in two passes, where one pass checks linearity and the next constructs the substitution. I don't have strong opinions on which is better. Then we apply CDEEP, since that will ensure that the argument is normalized before we substitute it. Does that sound about right?
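For what it's worth, the linearity-checking pass could look something like the following minimal sketch (Python rather than the library's SML; the tagged-tuple pattern encoding is hypothetical):

```python
# Illustrative sketch of a linearity check: patterns are either
# ("pvar", name) or ("op", name, [subpatterns]). Hypothetical encoding.

def check_linear(p, seen=None):
    """Reject any pattern variable that occurs more than once."""
    seen = set() if seen is None else seen
    if p[0] == "pvar":
        if p[1] in seen:
            raise ValueError("non-linear pattern: " + p[1])
        seen.add(p[1])
    else:
        for a in p[2]:
            check_linear(a, seen)
    return seen

# pair(x; y) is linear; pair(x; x) is not.
assert check_linear(("op", "pair", [("pvar", "x"), ("pvar", "y")])) == {"x", "y"}
try:
    check_linear(("op", "pair", [("pvar", "x"), ("pvar", "x")]))
    linear = True
except ValueError:
    linear = False
assert not linear
```

Doing this as a separate pass keeps the substitution-building pass free of bookkeeping, at the cost of walking the pattern twice.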
Small addendum, we actually need to merge 2/3 into one step, otherwise we get the arity failures we were avoiding before. That's as it was before though. I'm just learning how linear second order unification works now :)
Edit: I'm reading that linear second order unification requires that lambda abstractions (as well as free variables) occur linearly, which is a little different than your doc; am I reading everything right? However, pattern matching (as opposed to full unification) is apparently decidable?
@thsutton Yes, you are quite right! Thanks for catching that!
@jozefg Yes, that is probably the case. I think that, at least in the pattern fragment, this is not really a big problem, since at least in our use-case, none of our patterns will actually have abstractions (they'll only have pattern variables). But I need to read up on linear 2nd order unification; I had found some Pientka & Pfenning paper on it earlier... What are you reading?
@freebroccolo Very neat... I'm very interested in this line of development.
@jonsterling I was reading Levy's (original?) paper on it, but I'm going to look at the Pfenning and Pientka paper. Twelf and Beluga must do something clever about this after all...
This paper seems like just what we want (modulo weird stuff about lambda sigma), the core constraints are close to what we want I think.
Fwiw, I've got proper support for metavariables (what Nuprl calls "second-order variables") in the new ABT library as of tonight.
I have a feeling that this may let us simplify the problem here. In a rewrite rule, all the "variables" should be metavariables (not just the "second order" ones). Then, we might have the user assert a judgment like `Θ :> ϑ(m1; ... mn) ---> ϑ'(E1; ... ; En)`, where Θ is a metavariable context, the mi are the names of metavariables, and the Ei are bound terms (like `{u...}[x...].M`).
```
Operator exists : ({}[].exp, {}[exp].exp)exp
A : {}[].exp, B : {}[exp].exp :> exists(A; B) ---> (x : A{}[]) * B{}[x]
```
Now, by putting the left-hand side in this abstract notation, where each argument to the operator is a metavariable name, we seem to pretty much bypass the need for any fancy unification machinery at all. It seems like it should be pretty much trivial to take a user-provided rule like the above one and turn it into a computational rewrite rule (just find the `exists` operator, and begin constructing the rhs, applying `metasubst` to each metavariable with the corresponding subterm of `exists`).
@jozefg Do you think I'm missing anything, or will the above suffice?
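The unification-free scheme just described can be sketched roughly as follows (in Python rather than SML; `rewrite`, the rule encoding, and the elision of binding structure in the Ei are all hypothetical simplifications):

```python
# Illustrative sketch: a rewrite rule names each argument slot of the
# lhs operator with a metavariable, so applying the rule is just
# metasubstitution, no unification. Terms are (op_name, [subterms]);
# metavariables in a rhs template are plain strings. Binding structure
# on the Ei is elided here.

def metasubst(env, t):
    """Replace metavariable names in a rhs template by bound subterms."""
    if isinstance(t, str):              # a metavariable name
        return env[t]
    op, args = t
    return (op, [metasubst(env, a) for a in args])

def rewrite(rule, term):
    """rule = (lhs operator, [metavariable names], rhs template)."""
    lhs_op, metas, rhs = rule
    op, args = term
    if op != lhs_op or len(args) != len(metas):
        return None                     # rule does not apply
    env = dict(zip(metas, args))        # trivially "unify": just bind
    return metasubst(env, rhs)

# A toy version of exists(A; B) ---> sigma(A; B):
rule = ("exists", ["A", "B"], ("sigma", ["A", "B"]))
tm = ("exists", [("nat", []), ("bool", [])])
assert rewrite(rule, tm) == ("sigma", [("nat", []), ("bool", [])])
```

The point is visible in `rewrite`: because the lhs arguments are bare metavariable names, "matching" degenerates into zipping names with subterms.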
@jozefg BTW, the other night I wrote up a little prototype lib for this, which I'll push this evening. I haven't tried running it yet, but I'm optimistic that it should work, given that the code is 100% non-gnarly.
The library is general enough, btw, that it will let us even express non-uniform rewrite rules, like the following:
𝖒 : 0, 𝖓 : 0, 𝖊 : 2 ▶ spread(pair(𝖒;𝖓); 𝖊) ⤇ 𝖊[𝖒[], 𝖓[]]
Here's the library: https://github.com/jonsterling/sml-abt-patterns
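A non-uniform rule like the spread/pair one needs slightly more than one metavariable per argument slot: the lhs pattern is nested, and the rhs applies the binder metavariable 𝖊 to the components of the pair. A rough sketch of both pieces (Python, with a hypothetical encoding: metavariables are strings, operators are (name, [args]), binders are ("abs", [bound names], body)):

```python
# Illustrative sketch only; the encoding is hypothetical, not the
# library's. Metavariables: strings. Operators: (name, [args]).
# Binders: ("abs", [bound names], body).

def match_pat(p, t, env):
    """Match a nested lhs pattern against a term, binding metavariables."""
    if isinstance(p, str):              # a metavariable: bind it
        env[p] = t
        return env
    if p[0] != t[0] or len(p[1]) != len(t[1]):
        raise ValueError("rule does not apply")
    for pa, ta in zip(p[1], t[1]):
        match_pat(pa, ta, env)
    return env

def apply_abs(abs_tm, args):
    """Instantiate a binder ("abs", names, body) at the given terms."""
    _, names, body = abs_tm
    sub = dict(zip(names, args))
    def go(t):
        if isinstance(t, str):
            return sub.get(t, t)
        return (t[0], [go(a) for a in t[1]])
    return go(body)

# spread(pair(m; n); e) ⤇ e[m, n], with e = [x, y].pair(y; x)
lhs = ("spread", [("pair", ["m", "n"]), "e"])
tm = ("spread", [("pair", [("one", []), ("two", [])]),
                 ("abs", ["x", "y"], ("pair", ["y", "x"]))])
env = match_pat(lhs, tm, {})
assert apply_abs(env["e"], [env["m"], env["n"]]) == \
    ("pair", [("two", []), ("one", [])])
```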
Nice! It would be good to have a simple and compelling story when it comes to patterns. Do you think it's worth trying to fold this into current jonprl or hold off until "The Big Overhaul" when we fix all of these things?
I really need to try to go through and document all of this typed-abt stuff; I'm still not quite sure I get how all the code fits together.
@jozefg I wish we could fold it in immediately, but we'd first need to introduce support for the new ABT lib (because of metavariables).
It might be worthwhile to actually implement Red JonPRL in 3-4 passes rather than all at once (I worry about following in the footsteps of Epigram, where they seem to have bit off far too much at once). The first pass could be to get the new ABT framework in place, without any deeper changes (so, it would still be unityped, but would just use the new library). This itself is a pretty big project.
My perspective is that we should not feel that any aspect of JonPRL is too precious to delete for the sake of expediency. That is, there's experimental things like the Containers modules that will probably behave like dead weight when we're trying to make big changes like this, and we should feel free to just delete them.
That's fair, especially since once the actual ABT framework is in place we can in theory make some of these changes concurrently, but actually making the swap will have to be mostly atomic. I know you're pretty busy, but do you have an idea for when you'd like to do this?
As far as dead code goes, that's definitely fair. One of the things that might be pretty tricky to port as well is the `match` tactic. On one hand, it's pretty slick, but on the other it's a lot of hackery with the ABT library to make it work as expected. Especially since we do some weird stuff with variable binding, it may turn out to be a problem. We may have to put that in limbo during the initial swap.
The `match` tactic may be a bit tricky. I'd like to recover the entire tactic system from within the ABT framework, but I haven't thought too carefully about how that will work. In the meanwhile, I'm hoping that we can just do some stop-gap approach, analogous to what we have right now; in fact, the new library is not so fundamentally different from the old one that we could not (I think) just take what we already have almost verbatim. But the unification library will have to be ported.
A pattern like `x.y.F[x][y]` is not supported, but this is of course needed for operators whose subterms have valence > 1. This code is super gnarly and was basically written "cowboy" style by me; we should probably replace it with a proper implementation of linear second order unification (which is decidable, and all we would need anyway in this case).