Closed: ta0kira closed this issue 3 years ago.
Being a bit more concise: `defines` and `requires` with contravariant params imply that the filter becomes more permissive after a variance conversion. As far as I know, the only such access currently is via filters on function params that refer to params of the category. (Not mentioned above.)
For example:

```
concrete Type<#x|> {
  #x requires Base<#x>

  @value func<#y>
    #y requires #x
  () -> ()
}
```
This implies `#y requires Base<#x>`, but this is the apparent `#x`; not the "real" `#x`. Even though `#y requires #x` is separately disallowed (maybe?), that check is not meant to protect the `#x requires Base<#x>` filter.
Anyway, the arguments for allowing this are becoming more ambiguous while the arguments against it are very clear, so I'll just close this.
Here is an interesting use-case that currently is not allowed:

```
@value interface Downcast {
  // Define as: downcast () { return reduce<#self,#y>(self) }
  downcast<#y>
    #y requires #self
  () -> (#y)
}
```
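The intent of `downcast` can be sketched in Python, by way of analogy only. Here `reduce_to` is a hypothetical stand-in for `reduce<#self,#y>(self)`: it returns the value as the target type if the cast actually holds, and a failure value otherwise.

```python
from typing import Optional, TypeVar

T = TypeVar("T")

class Base: ...
class Derived(Base): ...

def reduce_to(cls: type[T], value: object) -> Optional[T]:
    # Hypothetical stand-in for reduce<#self,#y>(self): succeed with the
    # value itself if it really is a cls, and fail with None otherwise.
    return value if isinstance(value, cls) else None

b: Base = Derived()
assert reduce_to(Derived, b) is b          # the downcast succeeds
assert reduce_to(Derived, Base()) is None  # the downcast fails safely
```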
This is disallowed because `#self` occurs here in a contravariant position. Note that this isn't quite the same as `#self allows #y`, because `#self` is fixed in this context. But the reasoning for disallowing it is the same: allowing it would mean that an up-cast would allow things that weren't allowed previously.
For example, suppose we have B→A, C→A, B↛C, and C↛B. This means that if we start with `#y requires B` and cast to `#y requires A`, then we can now use C as `#y` when that wasn't possible before.
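The hierarchy and the resulting unsoundness can be sketched in Python (class names match the example):

```python
# The hierarchy from the example: B -> A and C -> A, but B and C are
# unrelated to each other.
class A: ...
class B(A): ...
class C(A): ...

# Under the original filter `#y requires B`, C does not qualify:
assert not issubclass(C, B)

# After up-casting the filter to `#y requires A`, C suddenly qualifies,
# even though nothing about C itself changed -- which is exactly why this
# conversion direction is unsound:
assert issubclass(C, A)
```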
I guess the real question for this issue is this: Is there any possible way for a procedure definition to "learn" the apparent value of a category param?
```
concrete Type<|#x> {
  // Is there a way for call to know what the apparent #x is when called?
  @value call () -> ()
}
```
It seems like this would only be possible if `call` had its own param that depended on `#x` in a way that became more permissive if `Type<#x>` was up-cast.
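By way of analogy only (not Zeolite itself): languages that erase type arguments at runtime illustrate why a procedure is unlikely to be able to "learn" the apparent param, because a value carries no record of it. A minimal Python sketch:

```python
# Python's parameterized containers erase their arguments at runtime, so a
# function receiving one cannot observe the "apparent" element type.
xs: "list[int]" = [1, 2, 3]
ys: "list[str]" = ["a", "b"]

def call(value: list) -> type:
    # Nothing on `value` records list[int] vs. list[str]; both are just list.
    return type(value)

assert call(xs) is call(ys) is list
```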
(I thought in circles a bit about the "consistent variance" rule, and decided that it doesn't apply to filter conversions. The summary is that contravariant types would always violate the rule if that was the standard for filter conversions.)
Maybe we should start with a specific use-case: should `#x defines LessThan<#x>` be allowed for contravariant `#x`?

- `#x` is only used as a key in a binary search tree and is never returned to the caller.
- `#x.lessThan(x,y)` outputs values of type `#x` to `lessThan`.

It's worth noting that this filter is already allowed for covariant `#x`. Is that useful?

- You can return values of type `#x` to the caller, provided you already have those values.
- Where would you get values of `#x` from in the first place? Either initial construction, or from an implementation where `#x` isn't a covariant param. (You can't get the values from another `defines` filter.)

Maybe the most important point is that contravariant params would have no use in `concrete` categories without `requires` or `defines` filters: without those, you could only store values, without returning them or even reasoning about them.
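For comparison, the binary-search-tree use-case is essentially the familiar `Comparator<? super K>` pattern from Java-style generics: a comparison defined against a supertype can safely order subtype keys, because values of the key type only flow *into* it and never back out. A Python sketch of the same idea (names are illustrative):

```python
from functools import cmp_to_key

class Animal:
    def __init__(self, weight: int):
        self.weight = weight

class Cat(Animal):
    pass

# A "LessThan" written against the supertype Animal only *reads from* its
# arguments, so it can safely order Cat keys:
def compare_animals(x: Animal, y: Animal) -> int:
    return (x.weight > y.weight) - (x.weight < y.weight)

cats = [Cat(3), Cat(1), Cat(2)]
ordered = sorted(cats, key=cmp_to_key(compare_animals))
assert [c.weight for c in ordered] == [1, 2, 3]
```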
Another important point is that `allows` filters are really only useful if you want to return a certain type in place of the specified param. (Even that's a bit of a stretch, but whatever.)
Lastly, we still need to enforce variance rules when category params occur in filters for function params.
So, I think I've changed my position on this issue: We don't need to consider variance when validating filters for category params. This is safe because category-param filters cannot reveal information outside of the category after initial construction. In other words, param types (including their meta-types implied by the filters) are locked in when the type instance or type value is constructed.
Another way to look at it is as the params being input variables to the type-instance constructor, and the filters as the argument meta-types. If the output is the type instance, the original meta-types required by the function that constructed it aren't recoverable; they just artificially limit the inputs.
I think it's analogous to this:
```
@value interface Base {}

concrete Foo {
  refines Base
}

concrete Bar {
  refines Base
}

// ...

@type build ([Foo|Bar]) -> (Base)
build (x) {
  // You can't recover the fact that only Foo and Bar were allowed as inputs.
  return x
}
```
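A rough Python analog of the same point (names mirror the example; the union annotation is purely a static restriction on inputs and leaves no trace on the result):

```python
class Base: ...
class Foo(Base): ...
class Bar(Base): ...

def build(x: "Foo | Bar") -> Base:
    # The union only limits what callers may pass in; the returned value
    # carries no record of that static restriction.
    return x

result = build(Foo())
assert isinstance(result, Base)
```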
One possible remaining inconsistency is this: suppose `#x` is contravariant, and `#x defines Foo<#x>`, where `Foo` has an invariant param. If the real `#x` is A, is it still safe to pass B as an input if B→A? Yes, since the underlying object has A locked in, i.e., any B passed in will be converted to A automatically.
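A small Python sketch of the "locked in" behavior. The `Container` class here is a hypothetical stand-in for a value of `Type<A>`:

```python
class A: ...
class B(A): ...

class Container:
    # Stand-in for a value whose real parameter was locked in as A
    # at construction.
    def __init__(self):
        self._items: "list[A]" = []

    def put(self, x: A) -> None:
        # Any B passed in is simply used as an A from here on.
        self._items.append(x)

c = Container()
c.put(B())  # safe: B -> A, and the container only ever treats it as an A
assert isinstance(c._items[0], A)
```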
So, `B defines Foo<B>` is largely irrelevant except when calling a `@type` function using B in the substitution, i.e., locking B in as `#x`. This essentially means that filter checks for category params might not even be relevant outside of `@type`-function calls.
```
concrete Type<#x|> {
  #x defines Invariant<#x>

  call<#y>
    #y requires #x
  (#y) -> ()
}

@type interface Invariant<#x> {}
```
In this example, if we convert a `Type<A>` to a `Type<B>`, then there is an implication that `#y defines Invariant<B>`. On the other hand, this is never explicitly checked; we just have `#y`→B (→`Invariant<B>`). Again, `B defines Invariant<B>` isn't relevant here unless the underlying value has `#x`=B.
I guess the distinction here is this:

- `@type` functions. Filters are relevant here, because they determine what should be allowed in procedures. The filters are no longer relevant once the function returns. (Note that this includes using the param in a nested call to another `@type` function.)

This means that, in theory, you could safely have a type-instance that violates the filters for the category's params, as long as those params don't ever make it into the procedures for the category.
So, I think we really only need to check category-level param filters in these situations:

- `@type` function calls.
- `Type<#x>{ foo }`.

In other words, in only the situations where type-instances are used as type variables.
Such param filters were originally allowed, but later I decided that there should be a "contains" relationship between filters before/after variance conversions. For example, if `#x requires Y` (`#x`→Y), then converting `#x` from A to B implies that `A requires Y` (A→Y) should contain `B requires Y` (B→Y), i.e., A→B→Y, implying A→B. The latter means that `requires` implies either covariance or invariance of `#x`.

The intuitive reasoning is that `defines` and `requires` imply reading something from the parameter, e.g., `#x$equals(x,y)` reads the `equals` function from `#x`. This results in using a contravariant type for reading when only writing should be allowed.

On the other hand, the parameter substitution is an immutable part of the object or function call, in the sense that any code using parameter `#x` uses `#x` as originally assigned; not as whatever it might be converted to. For example, any call to `#x$equals(x,y)` will use the "real" `#x`; not whatever `#x` might become via variance conversions.

So, I think the entire argument for allowing these filters hinges on those filters never being externally accessible after parameter substitution. In other words, if they are allowed, then at no future point can there be a language feature that allows retrieving a parameter from a value or type. (For example, something like using pattern-matching in C++ to extract a `template` parameter.)
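For comparison, Python sits on an interesting boundary here: `typing.get_args` can extract a parameter from a *type expression*, but not from a runtime *value*, which is exactly the property the argument above depends on.

```python
from typing import get_args

# Extraction from a type expression works:
assert get_args(list[int]) == (int,)

# ...but a runtime value has already erased its parameter, so there is
# nothing to extract from its type:
assert get_args(type([1, 2, 3])) == ()
```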