ta0kira / zeolite

Zeolite is a statically-typed, general-purpose programming language.
Apache License 2.0

Allow contravariant params to have `defines` and `requires` filters? #45

Closed: ta0kira closed this issue 3 years ago

ta0kira commented 4 years ago

Such param filters were originally allowed, but I later decided that there should be a "contains" relationship between the filters before and after a variance conversion. For example, if #x requires Y (#x→Y), then converting #x from A to B implies that A requires Y (A→Y) should contain B requires Y (B→Y), i.e., A→B→Y, which implies A→B. The latter means that requires implies either covariance or invariance of #x.

The intuitive reasoning is that defines and requires imply reading something from the parameter, e.g., #x$equals(x,y) reads the equals function from #x. This results in using a contravariant type for reading when only writing should be allowed.
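
To illustrate the "reading from a contravariant position" problem, here is a hypothetical Python sketch (none of these classes correspond to anything in Zeolite): a consumer of Animal is a perfectly valid consumer of Cat as long as it only writes, but if it also exposes a read operation, reading through the Cat-typed view can produce a non-Cat.

```python
# Hypothetical illustration (not Zeolite code): a consumer is safely
# contravariant for writing, but adding a read operation breaks that.

class Animal: ...
class Cat(Animal): ...
class Dog(Animal): ...

class LeakyConsumer:
    """Consumes Animals, but also exposes a read operation."""
    def __init__(self):
        self._last = Dog()  # internally holds some Animal
    def put(self, x: Animal) -> None:
        self._last = x
    def get(self) -> Animal:
        return self._last

# Contravariance says a LeakyConsumer of Animal may be used where a
# consumer of Cat is expected; `put` is fine under that view...
cat_consumer = LeakyConsumer()
cat_consumer.put(Cat())        # OK: a Cat is an Animal

# ...but `get`, viewed through the Cat-typed interface as returning a
# Cat, can produce a non-Cat:
fresh = LeakyConsumer()
print(isinstance(fresh.get(), Cat))  # False: a Dog leaked through the read
```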

On the other hand, the parameter substitution is an immutable part of the object or function call, in the sense that any code using parameter #x uses #x as originally assigned, not as whatever it might be converted to. For example, any call to #x$equals(x,y) will use the "real" #x, not whatever #x might become via variance conversions.

So, I think the entire argument for allowing these filters hinges on those filters never being externally accessible after parameter substitution. In other words, if they are allowed, then at no future point can there be a language feature that allows retrieving a parameter from a value or type. (For example, something like using pattern-matching in C++ to extract a template parameter.)

ta0kira commented 4 years ago

Being a bit more concise:

Anyway, the arguments for allowing this are becoming more ambiguous while the arguments against it are very clear, so I'll just close this.

ta0kira commented 3 years ago

Here is an interesting use-case that is currently not allowed:

@value interface Downcast {
  // Define as: downcast () { return reduce<#self,#y>(self) }
  downcast<#y>
    #y requires #self
  () -> (#y)
}

This is disallowed because #self occurs here in a contravariant position. Note that this isn't quite the same as #self allows #y, because #self is fixed in this context. But the reasoning for disallowing it is the same: allowing it would mean that an up-cast would permit things that weren't permitted previously.

For example, suppose we have B→A, C→A, B↛C, and C↛B. This means that if we start with #y requires B and cast to #y requires A, then we can use C as #y when that wasn't possible before.
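
The example above can be checked mechanically. Here is a minimal Python sketch (hypothetical names, not Zeolite's implementation) that models the subtype relation as explicit pairs and shows that casting the filter from `requires B` to `requires A` suddenly admits C:

```python
# Hypothetical model: the subtype relation as pairs (x, y) meaning x -> y,
# with reflexive and transitive closure added.

def make_rel(pairs, types):
    rel = set(pairs) | {(t, t) for t in types}
    changed = True
    while changed:  # transitive closure (overkill here, but kept general)
        changed = False
        for a, b in list(rel):
            for c, d in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

def satisfies(rel, y, bound):
    # Does `y requires bound` hold, i.e. y -> bound?
    return (y, bound) in rel

types = {"A", "B", "C"}
rel = make_rel({("B", "A"), ("C", "A")}, types)  # B -> A, C -> A; B, C unrelated

# Under `#y requires B`, C is rejected; after casting the filter to
# `#y requires A`, C is accepted.
print(satisfies(rel, "C", "B"))  # False
print(satisfies(rel, "C", "A"))  # True
```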


I guess the real question for this issue is this: Is there any possible way for a procedure definition to "learn" the apparent value of a category param?

concrete Type<|#x> {
  // Is there a way for call to know what the apparent #x is when called?
  @value call () -> ()
}

It seems like this would only be possible if call had its own param that depended on #x in a way that became more permissive if Type<#x> was up-cast.

ta0kira commented 3 years ago

(I thought in circles a bit about the "consistent variance" rule, and decided that it doesn't apply to filter conversions. The summary is that contravariant types would always violate the rule if that was the standard for filter conversions.)


Maybe we should start with a specific use-case:

Should #x defines LessThan<#x> be allowed for contravariant #x?

It's worth noting that this filter is already allowed for covariant #x. Is that useful?


Maybe the most important point is that contravariant params would have no use in concrete categories without requires or defines filters: Without those, you could only store values, without returning them or even reasoning about them.

Another important point is that allows filters are really only useful if you want to return a certain type in place of the specified param. (Even that's a bit of a stretch, but whatever.)
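
The "store only" point can be made concrete with a hypothetical Python sketch (not Zeolite code): with no operations known on the parameter, an input-only slot can do nothing but accumulate values, while a `defines LessThan`-style filter is exactly what licenses reasoning about them.

```python
# Hypothetical sketch: no filter means the sink can only store values.

class Sink:
    """No filter: values go in, and nothing can be done with them."""
    def __init__(self):
        self._items = []
    def put(self, x):
        self._items.append(x)

class OrderedSink(Sink):
    """Models `#x defines LessThan<#x>`: items are assumed comparable,
    so the sink can reason about them (here, track the largest)."""
    def __init__(self):
        super().__init__()
        self.largest = None
    def put(self, x):
        super().put(x)
        if self.largest is None or self.largest < x:
            self.largest = x

s = OrderedSink()
for n in (3, 7, 5):
    s.put(n)
print(s.largest)  # 7
```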

Lastly, we still need to enforce variance rules when category params occur in filters for function params.


So, I think I've changed my position on this issue: We don't need to consider variance when validating filters for category params. This is safe because category-param filters cannot reveal information outside of the category after initial construction. In other words, param types (including their meta-types implied by the filters) are locked in when the type instance or type value is constructed.


Another way to look at it: the params are input variables to the type-instance constructor, and the filters are the meta-types of those arguments. If the output is the type instance, the original meta-types required by the function that constructed it aren't recoverable; they just artificially limit the inputs.

I think it's analogous to this:

@value interface Base {}

concrete Foo {
  refines Base
}

concrete Bar {
  refines Base
}

// ...

@type build ([Foo|Bar]) -> (Base)
build (x) {
  // You can't recover the fact that only Foo and Bar were allowed as inputs.
  return x
}
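
A Python rendering of the same analogy (hypothetical classes; Python's annotations are unenforced, but the point carries over because the union constraint on build's input leaves no trace on the returned value):

```python
# Hypothetical analog: the input restriction to Foo | Bar only limits
# callers; the Base-typed result carries no record of it.

from typing import Union

class Base: ...
class Foo(Base): ...
class Bar(Base): ...

def build(x: Union[Foo, Bar]) -> Base:
    # You may later discover that a particular result is a Foo, but you
    # cannot recover the fact that only Foo and Bar were allowed here.
    return x

b = build(Foo())
print(isinstance(b, Base))  # True
```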

ta0kira commented 3 years ago

One possible remaining inconsistency is this:

Suppose #x is contravariant, and #x defines Foo<#x>, where Foo has an invariant param. If the real #x is A, is it still safe to pass B as an input if B→A? Yes, since the underlying object has A locked in, i.e., any B passed in will be converted to A automatically.

So, B defines Foo<B> is largely irrelevant except when calling a @type function using B in the substitution, i.e., locking B in as #x. This essentially means that filter checks for category params might not even be relevant outside of @type-function calls.
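
As a hypothetical Python sketch of "A is locked in" (names invented for illustration, not Zeolite's semantics): the object fixes its real #x at construction, so a B argument is checked against A and then used purely at type A; no filter on B itself is ever consulted.

```python
# Hypothetical model: a type instance whose real #x is fixed at construction.

class A:
    pass

class B(A):  # B -> A
    pass

class Holder:
    def __init__(self, locked):
        self._locked = locked  # the "real" #x, fixed forever
        self._items = []
    def put(self, x):
        # Inputs are checked against the locked-in type and treated as that
        # type from here on; B's own meta-type never matters.
        if not isinstance(x, self._locked):
            raise TypeError(f"expected {self._locked.__name__}")
        self._items.append(x)
    def locked_name(self):
        return self._locked.__name__

h = Holder(A)  # real #x is A
h.put(B())     # B -> A, so this is fine: B is simply upcast to A
print(h.locked_name())  # A
```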


concrete Type<#x|> {
  #x defines Invariant<#x>

  call<#y>
    #y requires #x
  (#y) -> ()
}

@type interface Invariant<#x> {}

In this example, if we convert a Type<A> to a Type<B>, then there is an implication that #y defines Invariant<B>. On the other hand, this is never explicitly checked; we just have #y→B (→Invariant<B>). Again, B defines Invariant<B> isn't relevant here unless the underlying value has #x=B.


I guess the distinction here is this:


This means that, in theory, you could safely have a type-instance that violates the filters for the category's params, as long as those params don't ever make it into the procedures for the category.

So, I think we really only need to check category-level param filters in these situations:

  1. @type function calls.
  2. Value initialization, e.g., Type<#x>{ foo }.
  3. Substituting category params into function params for function calls.

In other words, in only the situations where type-instances are used as type variables.