lachbaer opened 7 years ago
The occurrence of the `??` operator causes an error, though its equivalent replacement with `a != null` compiles.
That's weird. I don't understand why `a != null` compiles, as `T` could be a struct and thus not have a `!=` operator defined that supports comparing its value with `null`.
When constraining with 'where T : class' the error does not occur. When constraining with 'where T : new()' it arises again, though value types currently cannot be 'T : new()'...
That's incorrect. Structs can be used in generics constrained by `where T : new()`, since all structs are guaranteed to have a default, parameterless constructor (you may remember a brief discussion on that subject yesterday).
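To illustrate, a minimal sketch (the helper name `Create` is made up for this example) of a value type satisfying `new()`:

```csharp
// Hypothetical helper: value types satisfy the new() constraint
// through their implicit parameterless constructor.
static T Create<T>() where T : new() => new T();

// Create<int>() returns 0; Create<List<string>>() returns an empty list.
```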
The problem probably arises from the spec, which defines all the possible conversions for `??` and currently doesn't allow this operator on non-nullable value types.
Checking a non-nullable value for null is a meaningless concept. However, whatever compiler trickery is being used to allow `a != null` for a potential struct type to compile could be applied to `??` too, I guess. That would only make sense when using generics, though.
That's weird. I don't understand why a != null compiles,
Because in a generic context 'a' is implicitly converted to 'a?'. I guess that is because T is not constrained and can be either a class or a struct.
structs can be used in generics constrained by where T : new()
You're correct. The combination of 'struct, new()' is not possible. Didn't I read somewhere a proposal that 'new()' should be allowed on structs, or was it different in previous versions? Nevertheless, thanks for the clarification. I've updated the initial post.
That would only make sense when using generics though.
My persuasion 😄
That's weird. I don't understand why a != null compiles, as T could be struct and thus not have a != operator defined that supports comparing its value with null.
All structs are comparable to null. You can try that today:
```csharp
public void Foo(int i)
{
    if (i != null)
    {
    }
}
```
The pattern of comparing `(object)default(T) == null` is also a mechanism used in lots of generic code to provide specializations for the class/struct codepaths. The JIT will understand that this is a constant true/false for class/value types, and will elide the other codepath.
Even when changing the declaration to `T? d` it still produces that error, though 'a' and 'b' could be implicitly converted to the result type.
You can only declare 'c' or 'd' as `T?` if T has the `struct` constraint. Saying `T?` means `Nullable<T> where T : struct`, so a normal T wouldn't apply.
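A minimal compilable sketch of that rule (the method name `FirstOrNull` is hypothetical):

```csharp
// 'T?' in a signature requires the struct constraint; without it,
// the compiler cannot form Nullable<T>.
static T? FirstOrNull<T>(T[] items) where T : struct
    => items.Length > 0 ? items[0] : (T?)null;

// FirstOrNull(new[] { 1, 2 }) returns 1; FirstOrNull(new int[0]) returns null.
```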
Shall the specs be extended to support U = T ?? W with 'T, U, W' all being value types
I'm not sure how that would make sense. If T was a value type here, it would never be null, so this would just be equivalent to "U = T".
```csharp
d = a ?? b; // CS0019: Operator '??' cannot be applied to operands of type 'T' and 'T'
```
The reason for this is that `??` is intended to unwrap the type on the left if the type on the left is `Nullable<T>`. In other words, today I can write:
```csharp
int? x = ...;
int y = ...;
int z = x ?? y;
```
So, it's not the case that `x ?? y` translates to `x != null ? x : y`. It can translate to `x != null ? x.Value : y`.
Since we don't know if T is a value or reference type, we cannot effectively do the transformation. However, when T is constrained to "class" or "struct" (and in the latter case, is a T?), then we can do the transformation.
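As a hedged illustration of what does compile today, here is a sketch of an unconstrained generic coalesce written with the comparison form (the helper name `Coalesce` is made up for this example):

```csharp
// Sketch: an unconstrained generic "coalesce" using the comparison
// that the compiler already accepts for unconstrained T.
static T Coalesce<T>(T a, T b) => a != null ? a : b;

// Coalesce("x", "y") returns "x"; Coalesce<string>(null, "y") returns "y".
// For a value type T, 'a != null' is always true, so 'a' is always returned.
```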
@CyrusNajmabadi Or actually I guess what I'm asking is: what's so bad about outputting `x != null ? x : y` for `x ?? y` if `T` is unconstrained? If `T` is unconstrained, it doesn't matter whether you actually unwrap or not, because if you did somehow unwrap, you'd be wrapping right back up as soon as you use the expression, because it's typed `T`.
For example, say `T` is `int?`:
```csharp
T Foo<T>(T x, T y)
{
    T z = x ?? y;
    return z;
}
```
Becomes:
```csharp
int? Foo<int?>(int? x, int? y)
{
    int? z = x ?? y;
    return z;
}
```
`x ?? y` is replaced with the unwrapping expression:
```csharp
int? Foo<int?>(int? x, int? y)
{
    int? z = (int?)(x != null ? x.Value : y);
    return z;
}
```
Which is the same as:
```csharp
int? Foo<int?>(int? x, int? y)
{
    int? z = x != null ? (int?)x.Value : (int?)y;
    return z;
}
```
Which is the same as not worrying about wrapping in the first place, because the static type is `T` (`int?`), not `Nullable.GetUnderlyingType(T)` (`int`):
```csharp
int? Foo<int?>(int? x, int? y)
{
    int? z = x != null ? x : y;
    return z;
}
```
@CyrusNajmabadi You nicely explain why it currently is so and what could be done to circumvent this issue. FYI, I already knew this ;-) This issue is rather about whether it would make sense to extend the specs.
Shall the specs be extended to support U = T ?? W with 'T, U, W' all being value types
I'm not sure how that would make sense. If T was a value type here, it would never be null, so this would just be equivalent to "U = T".
That would be the outcome, yes. But I'm asking in favor of generics primarily. As @jnm2 will perhaps concur, the null-coalescing operator is primarily perceived as being a synonym for `x != null ? x : y`. (If you believe that is only my view on things, then I'd be happy to start a poll 😉)
@jnm2 That would be something we could consider (I haven't convinced myself it works properly given all the permutations of T/T?/object/constraints that you could have on the left/right side).
I think the concern is that it's quite possible that while we could define some semantics here, it's more likely that the code in question is smelly, and the person writing it should rethink what they're trying to do. For example, it's quite possible that really they should be doing "where T : class" if they really need this.
--
In other words, it's not necessarily a matter of "could we do this" but "should we do this". Is this a case where someone hit the issue and just wants it silenced, versus updating their code appropriately?
Another curiosity:
```csharp
public int TheFunction<T>(T a, T b) where T : struct
{
    T c;
    c = (a != null) ? a : b; // CS0019: Operator '!=' cannot be applied to operands of type 'T' and '<null>'
    int d = 0;
    d = (d != null) ? 1 : 2; // compiles correctly
    return d;
}
```
As @CyrusNajmabadi explained, the `int != null` comparison works as described. Shouldn't the first comparison, for the 'c' assignment, work also? To me this looks like a discrepancy.
Also the following is okay in a non-constrained generic...
```csharp
T f = a;
var hash = f?.GetHashCode();
```
...but when used with a value type it doesn't work
```csharp
int g = 0;
var hash = g?.GetHashCode(); // does not compile
```
It's okay that way. But I want to emphasize that a value type in a null-conditional expression in a generic works. I would expect `??` to behave the same way, just without casting the result to 'T?' in case that is not necessary.
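The observation above can be restated as a compilable sketch (the method name `HashOrNull` is hypothetical): the null-conditional operator is already accepted on an unconstrained type parameter.

```csharp
// '?.' compiles for unconstrained T; at runtime a value-type T
// simply never takes the null branch.
static int? HashOrNull<T>(T value) => value?.GetHashCode();

// HashOrNull("abc") returns a hash code; HashOrNull<string>(null) returns null.
```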
@CyrusNajmabadi
it's not necessarily a matter of "could we do this" but "should we do this"
That's why I started my question with "Shall the specs be extended" and not "Could the specs be extended" 😁
That's fair. IMO, no. I prefer people being explicit about how value types and reference types are treated in their generic code if they are going to allow for both and allow for null to be passed in. But I could be convinced otherwise.
I, as the user of the language, experience a discrepancy here.
I fear code duplication is required in those cases just to respect the two cases: one function with `z = x` and the other with `z = x ?? y`, where all the rest of the generic function is absolutely the same.
PS: I don't currently have an actual issue with it, just encountered the phenomenon while coding.
I'll bring this up with the LDM.
I think the concern is that it's quite possible that while we could define some semantics here, it's more likely that the code in question is smelly, and the person writing it should rethink what they're trying to do. For example, it's quite possible that really they should be doing "where T : class" if they really need this.
Why would that be any more smelly than comparing unconstrained T
with null? They have the same use cases.
@jnm2 Concerning your https://github.com/dotnet/csharplang/issues/505#issuecomment-297787245 the result should be of type 'int' if the type of both operands is 'int', in addition to the 'T?' case.
They have the same use cases.
The use case currently for this is to actually provide specialized logic for reference types and value types, i.e. so you can have:
```csharp
// This is not actually testing if a specific variable is null,
// just if T is something that can be null.
if ((object)default(T) == null)
{
    // The JIT will only emit this code for reference types.
}
else
{
    // The JIT will only emit this code for value types.
}
```
This is slightly different from what's being asked here. The ask is for the user to be able to operate generically over variables which could be struct or reference types, and which could be null or not if they are a reference. Or could be potentially null if they were a "Nullable" type.
I can see the desire for that (especially as people have turned out to love ?? and ?. as programming aids). However, it's not precisely the same thing :)
@CyrusNajmabadi What is allowed today is `x != null ? x : y`. That is comparing unconstrained `T` with null. That is also operating generically over variables which could be struct or reference types, and which could be null or not if they are a reference. So my question is, how is `x ?? y` doing anything fundamentally different?
```csharp
Test<int>(3);               // returns false
Test<int?>(3);              // returns false
Test<int?>(null);           // returns true
Test<object>(new object()); // returns false
Test<object>(null);         // returns true

bool Test<T>(T a) => a == null; // Unconstrained T, compared with null.
```
So my question is, how is x ?? y doing anything fundamentally different?
As I've stated, I think it's reasonable to feel that it should operate that way :) I was just giving the historical reasoning: "x ?? y" translates to two different forms for class/struct types, and since it isn't known what the type parameter type is until runtime, it didn't fit into either of those forms.
I see, and personally like, the argument that using the "class form" when expanding this out yields a result that is intuitive and desirable. As I've said, I've taken this up with the LDM to get further insight here. Not sure what else I can say at this point :)
@CyrusNajmabadi
Not sure what else i can say at this point :)
I'm just trying to get to the bottom of why you called it smelly. If you've changed your mind, that answers my curiosity. =)
@CyrusNajmabadi,
All structs are comparable to null.
That is not correct. Try replacing that code with `struct S {}`:
```csharp
public void Foo(S i)
{
    if (i != null) // error: operator '!=' cannot be applied to 'S' and '<null>'
    {
    }
}
```
The same applies if you constrain a generic parameter to `struct`; the ability to compare `T` to `null` is then no longer available.
The pattern of comparing
(object)default(T) == null
is also a mechanism used in lots of generic code to provide specializations for the class/struct codepaths.
That definitely is smelly! 😱
@DavidArno I tried it. @CyrusNajmabadi is correct on this.
@lachbaer,
Really? My code compiles for you?
Err, I was imprecise. David is right. It's that most structs in practice are comparable to null. Usually because most structs implement `==`, since they are intended to be thought of as values. And those structs are comparable to null (though it will always be true).
@CyrusNajmabadi,
That's a relief; thought I was going mad for a moment.
But you aren't thinking evil enough. I'm going to start implementing `!=` in my structs and have it return true or false randomly if compared to null, to stop folk comparing structs to null in the first place 😈
That definitely is smelly!
Why? It reads quite well to me: "if this is a T whose default value is null, then go down this codepath", and "if this is a T whose default value is not null, go down this other codepath".
It allows for all sorts of optimizations for generic code.
@DavidArno I didn't use your code, I used one of my existing structs. That one had an implicit double conversion and therefore it worked.
Usually because most structs implement == [...]
@CyrusNajmabadi could you clarify that, please?
```csharp
struct S
{
    int value;
    public static bool operator ==(S lhs, S rhs) => lhs.value == rhs.value;
    public static bool operator !=(S lhs, S rhs) => lhs.value != rhs.value;
}

S s = default(S);
if (s != null) { }
```
Indeed that compiles. But just by the look of the signature of `==`, without any `S?`, I wouldn't expect that to happen. Then why does the comparison work for value types with an equality operator and not for the ones without?
It's for back compat. This fell out of when we did nullable in the first place. `Nullable<T>` 'lifts' user-defined operators.** So you can compare one `S?` to another `S?` sensibly.
Unfortunately, it then fell out that you could then compare a struct to null. We didn't add a warning at the time, and now it's a breaking change to add such a warning. We'd like to add a warning for this in a future 'warning wave'. And, indeed, we check for this in roslyn itself with a 'feature flag experiment'.
--
TLDR: it's a historical artifact, not a good thing. But, we can't change historical artifacts as it does break people.
--
** Note: if we didn't 'lift' user-defined operators, you'd likely hate nullable. After all, you wouldn't even be able to do simple things like:
```csharp
// Example implementation; the original sketch elided the operator body.
struct Fraction
{
    public int Num, Den;
    public static Fraction operator +(Fraction a, Fraction b)
        => new Fraction { Num = a.Num * b.Den + b.Num * a.Den, Den = a.Den * b.Den };
}

Fraction? f = new Fraction { Num = 1, Den = 2 };
Fraction? g = f + f; // the lifted '+' handles the nullable operands
```
Lifting is core to making the experience around nullable actually good.
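The same lifted behavior can be observed with the built-in numeric types; a small sketch:

```csharp
// Lifted operators on Nullable<T>: null propagates through arithmetic.
int? a = 2;
int? b = null;
int? sum = a + b;   // null, because one operand is null
int? twice = a + a; // 4
```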
@CyrusNajmabadi reasonable explanation 😃
So, without an equality operator the type cannot be compared at all (how should it be?), and with one, the non-null side is implicitly lifted to 'S?', and when neither side is 'null' then 'S1.Value == S2.Value' is evaluated. Right?
and with one the non-null side is implicitly lifted to 'S?'
Both sides are lifted to "S?". If the value is null you get a "!HasValue" Nullable, and if the value is non-null, you get a "HasValue" Nullable :)
and when neither side is 'null' then 'S1.Value == S2.Value' is evaluated. Right?
Right. If either/both sides are null, you get null. If neither is null, you get ".Value == .Value".
But you're going to warn in the future on this kind of code; would that mean that `VT == null` in the unconstrained generic context will also vanish? As I have said above, I doubt that this would be very productive, and it would lead to code duplication.
I think that the current behaviour of `==` in an (unconstrained) generic context should stay as it is, because it makes sense to me. And `??` should be adapted to it as well. The `?.` operator nicely shows that it somehow observes this, lifting only in the generic context (given direct 'VT?.Method()' constructs).
But you're going to warn in the future on this kind of code
We're considering adding a warning in a 'warning wave' if you write a `== null` check that is known to always be true or false.
As I have said above, I doubt that this will be very productive and lead to code duplication.
The primary goal here was performance. i.e. here's my dedicated, no extraneous work, codepath for value types. And here's my dedicated, only necessary work, codepath for reference types.
You are right that it leads to duplication. But the point of generics was not to get rid of duplication in the library; it was to get rid of duplication in the code consuming the library, while also getting great perf from the implementation of the generic method on the CLR.
if you write a `== null` check that is known to always be true or false
I understand that the current behaviour in generics will stay untouched then, correct? For me this means that the initial issue should be resolved.
You are right that it leads to duplication. But the point of generics was not to get rid of duplication in the library [...] while also getting great perf from the implementation of the generic method on the CLR
I believe that for the average programmer, like me, that experience is cumbersome and hard to understand. Even when looking at your Roslyn code, it looks to me as if you were using generics for a clear, manageable and neat class layout, rather than performance. From today's OOP philosophy, as I learned it primarily from books, I understand generics to be a basic concept, and not just a way to gain performance and to prevent code duplication in the actual non-generic parts. When I take you literally, you say "the point of generics was", in the past tense; would you then agree that today's demands on generics have a more OOP-philosophical touch?
The current syntax as it stands allows every programmer to choose between splitting their algorithm into specialized generic versions, thus gaining performance, or avoiding code duplication and concentrating on the actual work the generic types shall do. That is like in game programming (or reducing cloud load), where specialized programmers know ways to achieve optimized code by coding in special ways.
All that said, in my opinion generics should also provide ways that don't make code duplication necessary. And adapting `??` to be usable with pure value types as well (in generics) would lead further in that direction.
I understand that the current behaviour in generics will stay untouched then, correct?
To the best of my understanding, yes.
I believe that for the average programmer, like me, that experience is cumbersome and hard to understand.
That's quite possible. But the language serves many domains and has many different desires pulling on it. A great example of this is all the work we're doing around 'ref'. This 'ref' work is extremely specialized and really isn't intended for the average, or general-purpose, case. As such, prioritizing those sorts of needs is definitely far lower on our list than trying to accomplish the direct set of scenarios this language feature was intended to solve.
Very often we've taken the perspective that it's ok if something may be harder for the library writer and easier for the person consuming the library. With something like generics, many more people will consume it rather than produce it. And so we wanted the feature to enable library writers to create the sorts of APIs that we felt would be valuable on the consumption side. We didn't mind if it required the library writer to gain deeper knowledge to best harness this feature and deliver good results with.
I believe that for the average programmer, like me, that experience is cumbersome and hard to understand. Even when looking at your Roslyn code, it looks to me as if you were using generics for a clear, manageable and neat class layout, rather than performance.
In our case, it was both. However, performance was definitely a core part of how we use generics in many circumstances. Indeed, Roslyn went through several early rewrites until we could get it to the performance we wanted. Just as I mentioned above, it was very important to us that people consuming Roslyn would have a good, high-performance API, even if under the covers we had to do an enormous amount of work to get that sort of performance.
would you then agree that today's demands on generics have a more OOP-philosophical touch?
I don't really agree nor disagree with that statement :) Generics is a huge area and there are many different perspectives people have about it. I don't think OOP is one that is particularly stronger than the others, nor do I necessarily think that things have really shifted in that regard.
Indeed, the primary feedback we continuously get with generics is to make it so that it's possible to be even more specialized. i.e. to give means to write highly specialized code, even if that code has a lot of duplicated patterns, precisely so things can be nice and general externally, while super optimized internally.
allows every programmer to choose between
Yup. That's a general approach that happens a lot with C#. For example, lambdas allow for easier composition of functionality at the cost of indirection and potential allocation. Classes allow for easier inheritance, at the cost of the current implementation heap allocating and having GC costs. etc. etc.
The split between specialization and generalization has been part of C# since pretty much the beginning. :)
--
And, to be clear, I have no issue with the idea of making `??` work with unconstrained generics. I'm waiting to hear back from someone about this. However, they're on vacation, so it may take a while.
However, they're on vacation
How dare they 😀
Indeed, the primary feedback we continuously get with generics is to make it so that it's possible to be even more specialized
Do the people giving that feedback also request getting rid of the less specialized constructs? ;-) Those are good and mostly sufficient already. There doesn't really need to be a request to generalize them even further (with the exception of `??` at this very point). When users advance, they certainly reach a point where they miss some specialization, and then they start to request it. So it is only logical that the primary feedback is about specialization.
This 'ref' is extremely specialized,
And though I'm only average, I missed something like it and now make use of it. From my user's perspective it has similarities with `&` references from PHP, and those I use frequently.
That's a general approach that happens a lot with C# [...] The split between specialization and generalization has been part of C# since pretty much the beginning
Shall it stay that way, forever 😃
I encountered a curiosity. Given the following code:
The occurrence of the `??` operator causes an error, though its equivalent replacement with `a != null` compiles. The explicit condition works with value types for 'T' also; 'a' is converted to `T?` implicitly for that condition. When constraining with 'where T : class' the error does not occur. When constraining with 'where T : new()' it arises again, ~~though value types currently cannot be 'T : new()', which implies 'T' being a 'class' as the only alternative~~.
Even when changing the declaration to `T? d` it still produces that error, though 'a' and 'b' could be implicitly converted to the result type. The problem probably arises from the spec, which defines all the possible conversions for `??` and currently doesn't allow this operator on non-nullable value types. Shall the specs be extended to support `U = T ?? W`?
Is it possible that, in case 'T' is not a generic type, a warning or error is issued? This would break nothing, because there currently cannot be existing code that makes use of it.