Closed apblack closed 6 years ago
I'd go for altogether semantics, definitely, but keep the free order. The key interpretation for me is that with the all together semantics, any request can be affected by one and only one alias or exclude clause. If people want e.g. to write the clauses in alphabetical order, why shouldn't that be fine?
I'm also sympathetic to the altogether semantics (parallel reassignment). Are there any expressivity barriers that might result? I.e. is there something that one might want to write (i.e., not just puzzlers that make everyone's head hurt) that can be easily expressed in the one-at-a-time semantics but not in the other?
is there something that one might want to write... that can be easily expressed in the one-at-a-time semantics but not in the other?
I think it's the other way around: altogether semantics is altogether simpler. I think of it this way --- alias and exclude clauses are saying what should happen for requests coming up to that parent. Any given request name can only be handled in one way: passed through (no clause), passed through with a different name (aliased), or not passed through (exclude).
so basically the "lhs" of alias clauses + "lhs" of excludes (excludes don't have an RHS) is the set of request names to be processed - and each request name can only be processed one way.
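A rough Grace-flavoured sketch of this reading (the trait and method names here are illustrative, not from the spec):

```grace
trait shapeOps {
    method draw { print "drawing" }
    method area { print "area" }
    method debugDump { print "dump" }
}

class square {
    use shapeOps
        alias render = draw   // draw is passed through, also available as render
        exclude debugDump     // debugDump is not passed through
    // area has no clause, so it is simply passed through unchanged
}
```

Under the all-together reading, each request name coming up from the used trait is affected by at most one clause, so every request is handled in exactly one way.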
I'd go for altogether semantics, definitely, but keep the free order.
We could do that — which is not something that I had thought of.
On reflection, I don't think that it's a good idea, though. Excluding something and then aliasing to it afterwards just seems wrong. I agree that we can give it meaning, but it will still look confusing. It seems cleaner to me to require that the excludes come at the end.
To answer @KimBruce's question: I initially thought that first excluding `x` and then aliasing something else to `x` would require an additional level of trait declaration. But @kjx convinced me that this was not the case, because we can allow the programmer to just omit the exclude, with the same semantics. That is, we can say that creating an alias attribute `y` will override any `y` that would otherwise be reused.
Excluding something and then aliasing to it afterwards just seems wrong
Along with aliasing the same name twice, presumably excluding the same thing twice, and having a local declaration and alias for the same thing.
The other way to think about that case is that aliases (like local definitions) will override without explicit syntax (like Java, but unlike C#). Why is excluding something then aliasing afterwards any worse than excluding something and giving a local definition afterwards?
If we really believed in "excludes", we'd delete the "is overriding" annotation and require all parental definitions to be excluding before that name could either be bound by an alias, or a local definition.
Why is excluding something then aliasing afterwards any worse than excluding something and giving a local definition afterwards?
It's not, if you think about the alias and exclude clauses as creating a new trait by defining deltas on an old trait. However, if instead you think of the alias and exclude clauses as making incremental modifications to the reused trait, then after excluding `x`, `x` won't be there any more, so it doesn't make sense to say `alias y = x`.
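A tiny Grace-flavoured sketch of the case being discussed (illustrative names):

```grace
class c {
    use t
        exclude x      // under the incremental reading, x is removed here...
        alias y = x    // ...so this alias refers to a method that no longer exists
}
```

Under the all-together reading the same pair of clauses is unproblematic, because both clauses are interpreted against the original trait `t`.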
If we really believed in "excludes", we'd delete the "is overriding" annotation and require all parental definitions to be excluding before that name could either be bound by an alias, or a local definition.
Not at all. `exclude` isn't there to allow overriding. It's there to indicate which of two conflicting definitions should be kept. This has nothing to do with overriding.
Perhaps what you mean is: if we really believed in allowing two used traits to have equal priority, with neither overriding the other, then we would apply the same rule of equal priority between the inherited attributes and those from traits, and between the reused attributes and those with local definitions. I have some sympathy for the first idea (no priority between inherited and reused attributes), and none at all for the second (giving local attributes priority over reused attributes seems like the obviously right thing).
if we really believed in allowing two used traits to have equal priority, with neither overriding the other, then we would apply the same rule of equal priority between the inherited attributes and those from traits,
yes, I think this is a simplification we should make.
giving local attributes priority over reused attributes seems like the obviously right thing
this is what we're used to, which is why it may feel "obviously" correct for us. But C# doesn't do this: you have to mark overriding methods as overriding. Java imported that as `@Override`, which Josh Bloch now recommends you use pervasively. Swift, Kotlin & Ceylon all require annotating all overriding methods.
The other argument in favour of mandatory "overrides" declarations is that they catch the cases where you didn't mean to override but did accidentally, and the cases where you did mean to override but didn't. But I think relying only on "excludes" can catch both cases (just differently): if you accidentally override, you'll get a "structure clash" (trait composition error); if you meant to override but don't, hmm, I think you'll either exclude a non-existent method, or fail to define an excluded method.
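A sketch of how the two failure modes might surface under an exclude-only discipline (hypothetical semantics, not current Grace; names are illustrative):

```grace
trait a {
    method m { print "a's m" }
}

class c1 {
    use a              // no "exclude m" here, so...
    method m { print "c1's m" }   // ...this accidental override shows up as a structure clash
}

class c2 {
    use a
        exclude m      // we meant to override m...
    // ...but forgot the local definition, so c2 fails to define an excluded method
}
```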
so, I'm afraid you're just convincing me that (almost-fully) symmetric reuse is the way to go. Almost, because local definitions don't get alias or exclude clauses, but after that, everything is symmetric.
The other case this would kill is using exclude to delete a method, which I guess could be justifiable as "reverse inheritance" because our subclasses and subtypes are separated. At this point, though, I think such designs are evil and we shouldn't care about them! Just divide up your code into traits properly, and all should be well...
@kjx: you might be able to persuade me to make `is override` annotations required. Your argument against this has always been that this is something that a dialect could do, and therefore should be left to a dialect. We have also talked vaguely about annotations not changing the core semantics, although `public` annotations break this rule.
Pragmatically, this would be a bit hard to introduce into minigrace, because it would break the compiler, but it would be hard to find all of the places where we need the annotations until it is in the compiler.
I've said this a dozen times, and it's in the original trait papers too: `exclude` is not there for theoretical reasons, but for pragmatic ones. It was Nathanael Schärli's idea, based on real-world experience. When some trait that you don't have control over acquires a new method, your program might break because of a trait conflict. You then have two choices:

1. Write a letter to your vendor (think: Oracle, or some group of twinkie-eaters on github), and wait for them to partition the trait so that you can reuse just the bits that you want, or
2. `exclude` the bits you don't need yourself.

In my world, the choice is clear.
Will it? It depends on the trait model.
You may be able to derive from the trait, but remove those methods or mark them as abstract / required (it's not clear what required in a subtrait means if there's a definition in an immediate super trait - does it mean that you get the superclass definition?).
or you may be able to resolve any conflict with just alias & potentially forwarding local definitions.
override and exclude do seem to be some kind of inverse of each other; being orthogonal enough means we could do without override, because you'd have to exclude a parental definition in exactly the same cases as where (in C#) you'd have an overriding local definition. It's just that if more than one parent defines something, it will have to be excluded more than once - whereas C#-style "override" clobbers every inherited definition.
like I said above: "(almost-fully) symmetric reuse is the way to go. Almost, because local definitions don't get alias or exclude clauses, but after that, everything is symmetric."
The grammar in the spec allows alias and exclude clauses to be mixed together in arbitrary order:
What do such clauses mean? Minigrace, for example, treats the reuseModifiers as if all of the alias clauses are executed first, one at a time, in order, followed by all of the exclude clauses. Thus, for example:

prints

because the exclusion of `y` doesn't happen until after `b` is aliased to `y`. However, if `exclude y` is replaced by `exclude b`, there is no object composition error, because the method for `b` is created and then excluded. Instead we get:

This seems weird, but the spec gives no guidance on the true semantics of combinations of alias and exclude clauses.
I think that there are two reasonable interpretations of a series of alias and exclude clauses.
As an example of the difference, consider

Under the one-at-a-time interpretation, the first alias produces a trait with two methods, `y` and `x`, both of which refer to the method that prints "x". The second alias clause achieves nothing: it makes `x` refer to the same method as `y`, but `y` already refers to the method `x`. Hence the program prints "x" twice.

Under the all-together interpretation, both of the aliases are applied together; the effect is to interchange `x` and `y`. Hence the program prints "y", and then "x".

After some discussion, I think that the all-together interpretation is the more reasonable, useful, and easy to understand. But it's incompatible with the current syntax, which allows mixing of the clauses in arbitrary order. I believe that we should modify the syntax so that all of the alias clauses come first, followed by all of the exclude clauses.
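The elided example can be reconstructed from the description above; a Grace-flavoured sketch (not the original code):

```grace
trait t {
    method x { print "x" }
    method y { print "y" }
}

def o = object {
    use t
        alias y = x
        alias x = y
    method run { x; y }
}
o.run
// one-at-a-time: the second alias is a no-op, so both requests print "x"
// all-together:  x and y are interchanged, so this prints "y", then "x"
```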
If we prefer to keep the current syntax, then we should adopt the one-at-a-time semantics, and explain it as a series of operators applied left to right.
I claim that the current situation is indefensible — which is a way of inviting someone to defend it ;-) I want to resolve this now, because I'm implementing this in SmallGrace, and I would like to fix minigrace to be more reasonable.