eclipse-archived / ceylon

The Ceylon compiler, language module, and command line tools
http://ceylon-lang.org
Apache License 2.0

Type inference bug #4408

Closed CeylonMigrationBot closed 8 years ago

CeylonMigrationBot commented 9 years ago

[@renatoathaydes] In the following code, the wrong type is inferred in the last call, and this does not compile:

class X<out Y>(shared Y y) {}
X<To> transform<From, To>(X<From> x, To(From) convert)
        => X(convert(x.y));

X<String> x1 = X("10");
X<Integer?> x2 = transform(x1, parseInteger); // Error

The error is on parseInteger:

Argument must be assignable to parameter convert of transform: Integer?(String, Integer=) is not assignable to Integer?(String|Integer)

But there is no way that From could be an Integer in the above code.

If the type parameters are explicitly given, it compiles:

X<Integer?> x2 = transform<String, Integer?>(x1, parseInteger); // OK

[Migrated from ceylon/ceylon-spec#1302] [Closed at 2015-05-24 23:26:22]

CeylonMigrationBot commented 9 years ago

[@renatoathaydes] Moved from #5561

CeylonMigrationBot commented 9 years ago

[@renatoathaydes] If the parameter Y is made non-covariant, then the error gets worse with both the argument x1 and the function parseInteger giving errors in the last line.

CeylonMigrationBot commented 9 years ago

[@gavinking] OK, so this one is a little unfortunate. It's not a bug, precisely, in the sense that the typechecker is behaving exactly as expected according to the rules in §3.6.5.

The issue is that the parameter type here is To(*Tuple<From,From,[]>) and the argument type is Integer?(*Tuple<String|Integer,String,[]|[Integer]>), so you can see there that String|Integer actually is being assigned to From.
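
For reference, a sketch spelling out the spread notation above as Callable types (the alias names are just illustrative, not from the original comment). Note that the first type argument of Tuple is the union of all the element types, which is how String|Integer ends up matched against From:

    // the parameter type To(*Tuple<From,From,[]>)
    alias ConvertParam<From,To>
            => Callable<To, Tuple<From,From,[]>>;

    // the argument type, i.e. the type of parseInteger,
    // whose second parameter (the radix) is defaulted
    alias ParseIntegerType
            => Callable<Integer?, Tuple<String|Integer,String,[]|[Integer]>>;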

The only fixes for this—while easy enough to implement—would be pretty nasty and ad hoc (always ignore the first type parameter in a Tuple?), and so I'm not sure that fixing this is worthwhile. You can always write any one of these three options:

    X<Integer?> x2 = transform(x1, 
        parseInteger of Integer?(String));

    X<Integer?> x3 = transform(x1, 
        (String s) => parseInteger(s));

    X<Integer?> x4 = transform<String,Integer?>
            (x1, parseInteger);
CeylonMigrationBot commented 9 years ago

[@sadmac7000] It seems we should somehow be able to turn

Tuple<String|Integer,String,[]|[Integer]>

into

Tuple<String,String,[]>|Tuple<String|Integer,String,[Integer]>
CeylonMigrationBot commented 9 years ago

[@gavinking] @sadmac7000 that would not solve the problem.

CeylonMigrationBot commented 9 years ago

[@sadmac7000] Hmm.

Function argument tuples are contravariant, correct? So a supertype should be assignable to them.

CeylonMigrationBot commented 9 years ago

[@gavinking] The issue is about type argument inference. Not subtyping.

CeylonMigrationBot commented 9 years ago

[@sadmac7000] Ah. I think I see. Right, it's assuming From = String|Integer

CeylonMigrationBot commented 9 years ago

[@sadmac7000] Well, one patch would be to deconstruct the tuple as I mentioned, on the way into type inference, run inference with each type singly, then set the result to the intersection of each individual inference.

CeylonMigrationBot commented 9 years ago

[@jvasileff] It seems like an out hint on From would help, since it would push convert's From to Nothing, with x's From giving us String. And it does work. But isn't out the default?

Although, an in hint also works??? Is the in algorithm doing a better job of taking into account all of the constraints, including Tuple.First for the Callable's Arguments?

Shorter reproductions:

void f<T>(Anything(T) g) {}
f(parseInteger);

class InBox2<in T, in U>() {}
void func<V>(V t, InBox2<V, V> b) {}
func("", nothing of InBox2<String|Integer, String>);

My other idea is a bit crazier: standardize on Anything for Tuple.Element for Callable.Arguments, at all levels of nesting. In fact, this is something I wanted once before. By using the canonical form for argument tuples, Callables are asking for more than they actually need.

An even crazier idea would be to have Tuple satisfy a NonIterableTuple<First, Rest> interface, and use that for Callables.

CeylonMigrationBot commented 9 years ago

[@gavinking]

Although, an in hint also works???

Ah yes, of course. That's the right solution, as we can see if we write the thing in curried form:

class X<out Y>(shared Y y) {}

X<To> transform<in From, out To>
        (To(From) convert)(X<From> x)
        => X(convert(x.y));

X<String> x1 = X("10");
X<Integer?> x3 = transform(parseInteger)(x1); //No error

That's quite clearly a correct signature.

Is the in algorithm doing a better job of taking into account all of the constraints, including Tuple.First for the Callable's Arguments?

For in parameters, the constraints are upper bounds, combined using intersection, not lower bounds combined using union. So we get <String|Integer> & String which is String.
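
For concreteness, the uncurried signature with the in hint, which the discussion above says is accepted (a sketch assembled from the original example, not code posted in the thread):

    class X<out Y>(shared Y y) {}

    X<To> transform<in From, To>(X<From> x, To(From) convert)
            => X(convert(x.y));

    X<String> x1 = X("10");
    X<Integer?> x2 = transform(x1, parseInteger); // accepted with the in hint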

It seems like an out hint on From would help, since it would push convert's From to Nothing, with x's From giving us String. And it does work. But isn't out the default?

Um. Wait. Let me check the spec. Perhaps there is some difference. I must admit it's a surprise to me that this would change the behavior.

CeylonMigrationBot commented 9 years ago

[@gavinking]

Let me check the spec.

I don't see anything obvious in the spec for this stuff, but I do have an explanation for the difference in behavior: ExpressionVisitor:3985-3987. It does something for covariant type parameters that it doesn't do for invariant type parameters. It's unclear to me if this is:

  1. a bug,
  2. an unspecified feature, or
  3. something that is actually consistent with the spec but non-obvious from the text of the spec.

Investigating.

CeylonMigrationBot commented 9 years ago

[@gavinking] This suspicious code was part of the implementation of #3869.

And note that there is a difference between @RossTate's original statement of #3869—which is what the typechecker implements—and what I wrote down in the spec.

So the question remains: is it a feature or a bug? This issue suggests it is a bug, and that the spec is right.

CeylonMigrationBot commented 9 years ago

[@gavinking] OK, so after seeing how many tests would be broken by changing that code to give the exact same behavior for the invariant and covariant cases, I now conclude that it is a feature, and the spec needs to be adjusted to document this difference.

Anyway, thanks to @jvasileff, we already have our solution to @renatoathaydes's problem.

CeylonMigrationBot commented 9 years ago

[@RossTate] I'm pretty sure this is a bug.

  1. For X<String> to be a subtype of X<From>, From must be a supertype of String.
  2. For Integer?(*Tuple<String|Integer,String,[]|[Integer]>) to be a subtype of To(*Tuple<From,From,[]>), To must be a supertype of Integer? and From must be a subtype of String|Integer and of String.
  3. This gives us that String <: From <: String&<String|Integer> must hold, and String&<String|Integer> is equivalent to String, so From should be inferred to be String no matter what strategy you use.
  4. The return type is covariant with respect to To, and we have that Integer? <: To must hold, so To should be inferred to be Integer?.

Did I screw up my reasoning?

CeylonMigrationBot commented 9 years ago

[@gavinking] @RossTate the issue is that we treat invariant type parameters very nearly (but not quite) as if they were covariant type parameters. In this case we have something that is by rights contravariant but it's not explicitly marked as such. Marking it contravariant fixes the problem.

To must be a supertype of Integer? and From must be a subtype of String|Integer and of String.

You're assuming facts not in evidence ;-) That is what happens if we know that From is contravariant. In this case it's being treated as if it were covariant, so the reasoning is:

To must be a supertype of Integer? and From must be a supertype of String|Integer or of String.
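
A minimal sketch of why that inference then fails (an illustration, not code from the original comments; the variable names are just for exposition): with From taken to be String|Integer, convert gets the type Integer?(String|Integer), and since Callable is contravariant in its argument types, parseInteger is not assignable to it:

    Integer?(String) narrow = parseInteger;       // fine: parseInteger accepts any String
    Integer?(String|Integer) wide = parseInteger; // error: parseInteger cannot accept an Integer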

CeylonMigrationBot commented 9 years ago

[@RossTate] The reasoning I laid out for From is completely independent of its variance. Do you mean to say that you don't recognize that Callable is contravariant with respect to its input type?

CeylonMigrationBot commented 9 years ago

[@gavinking] It is the variance of the type parameter being inferred that determines whether we look for a lower bound or an upper bound. The variance of the parameter of Callable affects that, but it is not the only thing that does.

Again, the algorithm can fail for invariant type parameters, since the problem is overconstrained, so we just do the best we can according to very simple rules.

CeylonMigrationBot commented 9 years ago

[@RossTate] But both the upper bound and the lower bound on From are String, so that doesn't explain anything.

If I had to take a guess as to what's wrong, it's that you are deducing the wrong bounds, the process for which has nothing do with the variance of the type variable at hand.

CeylonMigrationBot commented 9 years ago

[@gavinking] PS I get what you're saying here: that in the invariant case we get a set of lower bounds and upper bounds depending on the location in the parameter list that the type parameter occurs. (Unlike in the case of covariance where we get only lower bounds and contravariance where we get only upper bounds.)

I suppose we could consider taking both bounds into account if they're not conflicting. I guess the rules don't need to be -that- complicated.

CeylonMigrationBot commented 9 years ago

[@gavinking] Right now we never compute two bounds, on the theory that we might not know what to do with them.

CeylonMigrationBot commented 9 years ago

[@RossTate] We're speaking past each other, so how about this: explain how you come to the error step by step, in the same way I explained how I get the correct result regardless of whether you use only the upper bounds or only the lower bounds. Then I can look through each step and either understand why you are right in getting the error or understand how exactly you are wrongly getting the error.

CeylonMigrationBot commented 9 years ago

[@gavinking] The issue is that we are treating something that you would say is an upper bound as a lower bound. This almost always works in practice, but not in this case.

CeylonMigrationBot commented 9 years ago

[@RossTate] Is there any good reason why? That can only get you worse results.

In other words, do you have an example where this helps?

CeylonMigrationBot commented 9 years ago

[@gavinking] Well, because it's simpler, and almost always works.

Look I'm on my phone right now but I will take a closer look later at whether I can come up with something reasonably predictable that computes two bounds and then throws one or the other away. It's probably doable but I need to see the code and have the spec in front of me.

CeylonMigrationBot commented 9 years ago

[@renatoathaydes] @gavinking wrote:

In this case we have something that is by rights contravariant but it's not explicitly marked as such. Marking it contravariant fixes the problem.

I think you mean covariant, not contravariant, given that what fixed the function signature was marking From with the out tag.

Anyway, shouldn't the same principle, then, be used to make Iterable.map's type parameter (and those of many other similar functions in the Ceylon language module) covariant? I see no function type parameters in the Ceylon language module marked as covariant or contravariant.

Can you show me an example not involving type inference where it would be useful to mark function type parameters as in or out (I just want to learn; I hope the question does not sound confrontational)?

CeylonMigrationBot commented 9 years ago

[@gavinking]

I think you mean covariant, not contravariant, given that what fixed the function signature was marking From with the out tag.

No. As I said above, the correct fix is to mark it in. It is conceptually contravariant.

Anyway, shouldn't the same principle, then, be used to make Iterable.map's type parameter (and many other similar functions in the Ceylon language module) covariant?

Well no. The type parameter of map() is conceptually covariant, so the default strategy works just fine. This passes the typechecker:

{Integer?+} result = {"hello", "world"}.map(parseInteger);

Can you show me an example not involving type inference where it would be useful to mark function type parameters as in or out (I just want to learn, hope the question does not sound confronting)?

No. There is no such case. It is purely a hint to type inference. And it is only in that we typically ever need (as in this case). It's basically never necessary to explicitly mark a function type parameter out. At least, that has been the case up until now.

What I'm not sure about is: if we fix this "bug", which only affects a tiny, tiny number of use cases, and which is easily fixed by adding the explicit in annotation, will that mean that the default behavior is worse for the much more common case of conceptually covariant type parameters, forcing us to use an explicit out annotation much more frequently?

I'm not sure.

CeylonMigrationBot commented 9 years ago

[@jvasileff] I don't follow that. It seems that from a conceptual standpoint:

With out, the type parameter From wants to be Nothing. That works great for the callable. But the x argument pulls it up to String (which is still fine for the callable).

With in, the type parameter From wants to be Anything. That works great for x since a String is an Anything. But it's no good for the callable, which can't accept Anythings. So with the two constraints imposed by the Callable, it gets pulled back down to String (which is still fine for x).

Both work here, which makes sense to me. What doesn't quite make sense to me is why a hint is needed at all.
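
To make the out-hinted signature concrete as well (a sketch based on the observation above, not code from the original comment):

    class X<out Y>(shared Y y) {}

    // with out, From starts at Nothing and is pulled up to String by the
    // x argument, which is still fine for convert
    X<To> transform<out From, To>(X<From> x, To(From) convert)
            => X(convert(x.y));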

FWIW, it appears to make no difference if the two contravariant uses of the type parameter are within the same argument or two separate arguments:

    class InBox1<in T>() {}
    void func1<V>(InBox1<V> b1, InBox1<V> b2) {}
    func1(nothing of InBox1<String|Integer>, nothing of InBox1<String>);
    // Argument must be assignable to parameter b2 of func1: InBox1<String> is not assignable to InBox1<String|Integer>

    class InBox2<in T, in U>() {}
    void func<V>(InBox2<V, V> b2) {}
    func(nothing of InBox2<String|Integer, String>);
    // Argument must be assignable to parameter b2 of func: InBox2<String|Integer,String> is not assignable to InBox2<String|Integer,String|Integer>
CeylonMigrationBot commented 9 years ago

[@RossTate] @jvasileff hits upon the same thing in what you're saying that I'm confused by. You keep talking about in versus out annotations serving as hints to inference. Those hints are relevant elsewhere, but not here. So I'm wondering if you understand the source of the problem. But you're on your phone, so just wait until you've had a chance to sit at a proper computer, heheh.

CeylonMigrationBot commented 9 years ago

[@gavinking]

Those hints are relevant elsewhere, but not here.

That's just not right.

Consider:

interface Something<T> { ... }

Something<T> something<T>(Something<T> s1, Something<T> s2) => .... ; 

Something<Foo> sfoo = ... ;
Something<Bar> sbar = ... ;

value result = something(sfoo, sbar);

Now, without knowing anything more about the semantics of Something and something(), how are you going to tell me if I should infer Something<Foo&Bar> or Something<Foo|Bar> here?

You can't. You need some extra information. There are cases where & is appropriate, and other cases where | is appropriate. (The | case is more common, since that is what you want for e.g. MutableLists.)

CeylonMigrationBot commented 9 years ago

[@RossTate] Uh, you should get an error (assuming you meant Something to be invariant).

But that doesn't matter here. I said the hints are relevant elsewhere; just not here.

CeylonMigrationBot commented 9 years ago

[@gavinking]

Uh, you should get an error (assuming you meant Something to be invariant).

Why?

CeylonMigrationBot commented 9 years ago

[@gavinking] So I have implemented what I believe Ross is getting at in @2b04d8f, and it does appear to correct this corner case without affecting much else—except one test I had which was very similar to this case.

But what I don't like is this rule for picking between the upper and lower bounds. I can't find any particularly good justification for it, nor do I even really know if it's correct or not.

CeylonMigrationBot commented 9 years ago

[@RossTate]

Uh, you should get an error (assuming you meant Something to be invariant).

Why?

Because Something<Foo> is not a subtype of either Something<Foo&Bar> or Something<Foo|Bar>. Unless Foo and Bar are equivalent types, there is no valid type argument for that call.

But what I don't like is this rule for picking between the upper and lower bounds. I can't find any particularly good justification for it, nor do I even really know if it's correct or not.

Ah, I agree that there's no justification for your implementation of pickBound. Here's what you should do. If the return type is covariant with respect to that type variable, then you should use the lower bound, and if it's contravariant then you should use the upper bound. The justification is that this results in the most precise return type possible. In other words, any other type argument you could use for that variable would result in a strictly less useful return type. This doesn't suggest what to do for invariant type variables. And for bivariant/independent type variables (such as From in this example), it doesn't matter which you do (reification aside); so long as the lower bound is a subtype of the upper bound, any type argument in that range would be valid and would result in the same return type.
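
Purely as an illustration of that rule (hypothetical names, not the typechecker's actual pickBound; String stands in here for an inferred type):

    abstract class Occurrence()
            of covariantly | contravariantly | invariantly | doesNotOccur {}
    object covariantly extends Occurrence() {}
    object contravariantly extends Occurrence() {}
    object invariantly extends Occurrence() {}
    object doesNotOccur extends Occurrence() {}

    String pickBound(Occurrence inReturnType, String lowerBound, String upperBound) {
        switch (inReturnType)
        case (covariantly) { return lowerBound; }     // most precise return type
        case (contravariantly) { return upperBound; } // most precise return type
        case (doesNotOccur) { return lowerBound; }    // either bound gives the same return type
        case (invariantly) { return lowerBound; }     // underdetermined; needs a tie-break
    }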

Right now, if you instead wanted to be consistent with what you had been doing, then you would have pickBound pick the lower bound unless the type variable had been annotated with in. Because that doesn't matter for bivariant/independent type variables like From in this example, that would still work for this example and would keep everything else like it had been before.

Since you're recompiling everything anyway at the moment, here's an experiment worth running. I'd be interested to learn what happens if you made pickBound fail if the type variable occurs invariantly and the lower bound and upper bound are not equivalent. I've seen a number of errors reported due to incorrectly guessing in this situation, but I don't know how much type-checked code relies on guessing here.

CeylonMigrationBot commented 9 years ago

[@gavinking]

Because Something<Foo> is not a subtype of either Something<Foo&Bar> or Something<Foo|Bar>. Unless Foo and Bar are equivalent types, there is no valid type argument for that call.

I don't see how that can possibly be right. We have plenty of examples like Array and ArrayList and HashMap which prove that, from a practical point of view, it does make sense to choose a union of argument types even for an invariant return type.

I even have other cases where it makes sense to choose an intersection.

Ah, I agree that there's no justification for your implementation of pickBound.

Right, but nor, AFAICT, is there any other especially better implementation of pickBound() that would give me better results in practice. This implementation does seem to give extremely good results on all the examples I have.

Here's what you should do. If the return type is covariant with respect to that type variable, then you should use the lower bound, and if it's contravariant then you should use the upper bound.

Well OK so you're saying essentially the exact same thing that I've been saying the whole time, it's just that you think we should determine the variance of a type parameter implicitly by inspecting the schema of the function, whereas I think that's totally inconsistent with what we do for classes and other type schemas where the variance of a type parameter is always explicit.

CeylonMigrationBot commented 9 years ago

[@gavinking]

I've seen a number of errors reported due to incorrectly guessing in this situation, but I don't know how much type checked code relies on guessing here.

The corner cases where inference fails are seriously quite tiny in number compared to the cases where it does the right thing. Almost any other implementation of pickBound() here causes a whole bunch of tests to fail.

CeylonMigrationBot commented 9 years ago

[@RossTate]

Because Something<Foo> is not a subtype of either Something<Foo&Bar> or Something<Foo|Bar>. Unless Foo and Bar are equivalent types, there is no valid type argument for that call.

I don't see how that can possibly be right. We have plenty of examples like Array and ArrayList and HashMap which prove that, from a practical point of view, it does make sense to choose a union of argument types even for an invariant return type.

I can see other examples where that is true, but not the one you gave here. something's inputs are invariant wrt T, so it would be unsound to accept that code. If they were covariant wrt T, that would be a different story (even keeping the output invariant wrt T).

CeylonMigrationBot commented 9 years ago

[@gavinking]

MutableList<T> join<T>(MutableList<T> list1, MutableList<T> list2) {
    value list = ArrayList<T>();
    list.addAll(list1);
    list.addAll(list2);
    return list;
}

MutableList<Foo> fooList = ... ; 
MutableList<Bar> barList = ... ;
MutableList<Foo|Bar> joined = join(fooList, barList);

Nothing unsound about that.

CeylonMigrationBot commented 9 years ago

[@gavinking] Oh, sorry, sure, yeah, I got it. I should have tried to compile that code first :-) So my example was nonsense.

CeylonMigrationBot commented 9 years ago

[@gavinking] OK, so here's a much better example, that actually does pass the typechecker:

MutableList<U|V> join<U,V>
        (MutableList<U> list1, MutableList<V> list2) {
    value list = ArrayList<U|V>();
    list.addAll(list1);
    list.addAll(list2);
    return list;
}

MutableList<Foo> fooList = ArrayList<Foo>(); 
MutableList<Bar> barList = ArrayList<Bar>();
MutableList<Foo|Bar> result = join(fooList, barList);
CeylonMigrationBot commented 9 years ago

[@gavinking] Well, that doesn't really demonstrate my point either. It's 4AM here. Brain shutting down.

CeylonMigrationBot commented 9 years ago

[@gavinking] OK, I have slept and can probably express coherent thoughts again. So to recap: we still have a situation where the algorithm for covariant and contravariant type parameters is exactly right, but for the invariant case we do something ad hoc. Both versions of the algorithm had something a bit nasty buried in them, but I would say the nasty thing buried in the new version (pickBound()) is actually the nastier of the two, though it seems to result in better behavior for a very small number of corner cases.

Again: what we have works perfectly well if you are able to explicitly mark a type parameter as in or out, sidestepping the nastiness.

So what @RossTate's argument boils down to is this: we should—only in the case of a generic function, but not for classes—actually infer the variance of a type parameter by looking at how it occurs in the return type. This would relieve the developer of having to add an explicit in or out annotation in certain cases.

I don't think that's a bad idea, and it might further improve things, but it still leaves me with lots of cases where I still don't know what to do: what about invariant classes like ArrayList()? What about when the type parameter never even occurs in the return type, which is the case here? I would still need a tie-breaker in these cases, and I guess the tie-breaker would be to use the algorithm for covariant type parameters. I'm not sure how well it will work in practice, but I'm willing to give it a try.

CeylonMigrationBot commented 9 years ago

[@gavinking] I have implemented this, and it does seem to be an improvement, on balance. Still need to write a spec!

CeylonMigrationBot commented 9 years ago

[@renatoathaydes] Wow, glad my small example led to such a lengthy discussion on type theory :)

But I think it's important this gets a good solution as this kind of function is very common, I would suggest, in Ceylon functional code.

And sorry Gavin about my previous comment, I see that you had it right with the contravariant argument... I was also writing late at night, you know.....

CeylonMigrationBot commented 9 years ago

[@jvasileff] Based on the idea that (pre-patch) inference in the presence of in or out hints worked perfectly, I believe the notes below make sense. The hints should be easily inferred, when they even matter (which they don't in Renato's case; I think that's just a bug). I'm of course taking for granted that the rest of the impl/bounds checking/whatever "just works", as it seems to for non-invariant cases.

Inferring variance hints:

Given that constructors don't have a return type in the same sense that functions do, the above will be performed as if they return void. Of course, explicit annotations always rule.

Note: so long as inference is performed in the case of invariant/overdefined situations, discontinuities are unavoidable. The question is only as to the direction of the discontinuity—whether a discontinuity happens due to the addition of a contravariant argument or a covariant argument.

The above rules for curry result in the correct:

Return(*Rest) curry<out Return, in Argument, in First, in Rest>

because Argument, First, and Rest are all contravariant in the argument list.

For TreeSet, the result is the reasonable and consistent:

TreeSet<Object> x = TreeSet((Object x, Object y) => ...) // "in" T
TreeSet<Integer> x = TreeSet((Integer x, Integer y) => ...) // "in" T
TreeSet<Integer|String> x = TreeSet((Object x, Object y) => ..., "hello", 1) // "out" T

The only thing that is really subjective here is preferring the covariant approach for invariant parameters. But this seems to work very well, and is much more convenient than not performing inference at all in the invariant case.

CeylonMigrationBot commented 9 years ago

[@jvasileff] The notes above cheated with TreeSet by pretending constructors return void, which of course won't work. But, everything works out with the very small adjustment of being less sure of invariant type parameters, as follows:

  1. If the type parameter is explicitly in or out, go with that. Otherwise,
  2. If the return type is covariant or contravariant in the type parameter, go with that. Otherwise,
  3. If the arguments are covariant or contravariant in the type parameter, go with that. Otherwise,
  4. Default to out.

Defaulted parameters that are not used should be disregarded (e.g. TreeSet with only compare.)

This is very simple and intuitive (go for the most useful type for someone, otherwise fall back to out), and it works well.

I'd be very interested to hear any reasons why this won't work.

CeylonMigrationBot commented 9 years ago

[@RossTate] Well, that's still underspecified. Consider the following:

class Reference<in Input, out Output>(Input init) given Input satisfies Output {...}

According to your specification Reference("Hello") would result in either Reference<String, String> or Reference<Anything, Anything> depending on whether you inferred Input first (resulting in the former) or Output first (resulting in the latter).

CeylonMigrationBot commented 9 years ago

[@jvasileff] The notes above don't apply in that example - you are explicitly specifying in and out. In other words, this strategy doesn't address what happens after you've determined the preferred variance.

CeylonMigrationBot commented 9 years ago

[@RossTate] Hence "underspecified". In other words, those rules aren't enough for a programmer to predict what the result will be.

CeylonMigrationBot commented 9 years ago

[@jvasileff] Ok, right, makes sense. But the strategy does at least seem sufficient to replace the previous approach towards invariant parameters that broke in Renato's example. And it does so in a way that preserves the benefits of the previous approach (e.g. handling of TreeSet).

Handling the Reference case too would be a nice addition.