peter-perot opened this issue 6 months ago
> Why `MaybeNullWhen(false)` instead of `NotNullWhen(true)`?

Since this is the way generic arguments are annotated; see here and search for "annotate generic out arguments". You can only use `NotNullWhen` in case you have concrete types.
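For illustration, a minimal sketch of the distinction the guideline draws; both methods here are hypothetical and not from the issue:

```csharp
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

static class GuidelineExamples
{
    // Concrete type: NotNullWhen(true) works, because `string` itself is
    // non-nullable; when the method returns true, `name` is known non-null.
    public static bool TryGetName(int id, [NotNullWhen(true)] out string? name)
    {
        name = id == 0 ? "root" : null;
        return name is not null;
    }

    // Unconstrained generic: T may itself be instantiated as a nullable
    // type (e.g. string?), so the guideline prescribes MaybeNullWhen(false).
    public static bool TryFirst<T>(IEnumerable<T> source, [MaybeNullWhen(false)] out T value)
    {
        foreach (T item in source)
        {
            value = item;
            return true;
        }

        value = default;
        return false;
    }
}
```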
To me, this looks correct. The type parameter of `IValueHolder<out T>` is covariant (`out`), so the extension method call to `TryGetValue<T>` is resolved as:

```csharp
bool IValueHolder<string?>.TryGetValue<string?>(out string? value)
```
At this point, the note from a little later in the same bullet point of the linked nullability guidelines comes into play:

> If the consumer defined the generic type as being nullable, then the attribute simply ends up being a nop, because a value of that generic type could always be `null` [.]
As evidence of this, the warning goes away (8.0.203!) if the call is made in one of these ways:

```csharp
// here, `var` is inferred as `string?` due to the annotation,
// but holder is still typed as `IValueHolder<string>`
if (holder.TryGetValue(out var value))
{
    string copy = value; // no warning
}

if (holder.TryGetValue<string>(out string? value))
{
    string copy = value; // no warning
}
```
…though IDE0001 kicks in for me on the type argument in the latter. Another option would be to implement `TryGetValue` as an instance method:

```csharp
public bool TryGetValue([MaybeNullWhen(false)] out T value)
{
    value = HasValue ? Value : default;
    return HasValue;
}
```
This also produces no warning, but I don't know what business value `IValueHolder<out T>` is providing you, so I can't say concretely that this is a better solution. It couldn't be added to the interface itself because of the variance.
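A minimal sketch of that restriction, assuming the interface shape used elsewhere in this thread; the compiler rejects the declaration with a variance error (CS1961):

```csharp
using System.Diagnostics.CodeAnalysis;

public interface IValueHolder<out T>
{
    bool HasValue { get; }
    T Value { get; }

    // error CS1961 (invalid variance): `out` parameters are passed by
    // reference, so a covariant ('out') type parameter cannot appear here.
    bool TryGetValue([MaybeNullWhen(false)] out T value);
}
```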
@chrisoverzero Yeah, thanks to your explanation regarding type inference, I think I have an explanation for the covariance behavior and why the two cases (covariance vs. no variance) differ.
With covariance, the following assignment is valid:
```csharp
IValueHolder<string?> left;
IValueHolder<string> right = new ValueHolder<string>("foo");
left = right;
```
Without variance, this assignment results in a compile-time warning.
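A sketch of the invariant counterpart; the exact warning code (CS8619) is from my own testing and should be treated as an assumption:

```csharp
// Assuming the interface is declared invariantly: interface IValueHolder<T> { ... }
IValueHolder<string?> left;
IValueHolder<string> right = new ValueHolder<string>("foo");
left = right; // warning CS8619: nullability of 'IValueHolder<string>'
              // doesn't match target type 'IValueHolder<string?>'
```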
Therefore I think this is the reason why, with covariance, the following code

```csharp
IValueHolder<string> holder = new ValueHolder<string>("foo");
holder.TryGetValue(out string? value); // (1)
```

is inferred as

```csharp
IValueHolder<string> holder = new ValueHolder<string>("foo");
holder.TryGetValue<string?>(out string? value);
```

…because an `IValueHolder<string>` value can be assigned to an `IValueHolder<string?>` value, and since we only provide the type of the out-var (`string?` at (1)) but not the generic type argument, the type inference algorithm goes for `holder.TryGetValue<string?>`; in other words, the type of `holder` is temporarily cast to `IValueHolder<string?>`.
In contrast, when we do not use covariance, the following code

```csharp
IValueHolder<string> holder = new ValueHolder<string>("foo");
holder.TryGetValue(out string? value); // (1)
```

is inferred as

```csharp
IValueHolder<string> holder = new ValueHolder<string>("foo");
holder.TryGetValue<string>(out string? value);
```

…because an `IValueHolder<string>` value cannot be assigned to an `IValueHolder<string?>` value; since we only provide the type of the out-var (`string?`) but not the generic type argument, the type inference algorithm realizes it cannot go for `holder.TryGetValue<string?>` (as we have no variance here), so it chooses to infer `holder.TryGetValue<string>`.
If someone on the language design team can confirm my interpretation of the behavior, it's obviously not a bug, and in turn we could close this issue.
Of course I could use an instance method instead of an extension method, or use `var` instead of specifying the type explicitly, but my intention was to point out a peculiarity of the compiler we need to be aware of.
There is one more observation I made. The following code (as proposed by @chrisoverzero) seems to work correctly:

```csharp
public static class ValueHolderExtensions
{
    public static bool TryGetValue<T>(this IValueHolder<T> holder, [NotNullWhen(true)] out T? value)
    {
        value = holder.HasValue ? holder.Value : default;
        return holder.HasValue;
    }
}
```
But this goes against the guidelines described here, so I wonder if the specification has changed in the meantime. IIRC, writing `T?` was not possible in earlier versions, which is why the `MaybeNullWhen` annotation was introduced, but does this limitation still exist nowadays?
> Why `MaybeNullWhen(false)` instead of `NotNullWhen(true)`?
>
> Since this is the way generic arguments are annotated; see here and search for "annotate generic out arguments". You can only use `NotNullWhen` in case you have concrete types.

You can use `NotNullWhen` with generics just fine :-)

Are you sure that doesn't fix things here?
> You can use `NotNullWhen` with generics just fine :-) Are you sure that doesn't fix things here?
Yeah, `NotNullWhen` seems to work with generics nowadays, and it even fixes the problem here. But I remember it didn't in the past. And Microsoft still uses `MaybeNullWhen` with generics, e.g. in class `Dictionary<,>`.

Can somebody from the language designers confirm that `NotNullWhen` is okay with generics now?
Interesting: both attributes appear to be allowed with generics.
> Can somebody from the language designers

I am one of the lang designers :D
`NotNullWhen` is allowed but not valid here. `IValueHolder<string?>` is allowed to return `null` as a value, and that value fails to adhere to a `NotNullWhen` constraint. If you further have `where T : notnull`, it would become valid, but the original code does not constrain `T` in that manner.

Note that if you do have a `notnull` constraint, `NotNullWhen(true)` would be the preferred attribute here. `MaybeNullWhen(false)` is only required for correctness in cases like this involving unconstrained generics.
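A minimal sketch of that constrained variant, assuming the same extension-method shape as above (the `where T : notnull` line is the addition):

```csharp
using System.Diagnostics.CodeAnalysis;

public static class ValueHolderExtensions
{
    // With `where T : notnull`, T can never be instantiated as a nullable
    // type, so NotNullWhen(true) is a sound promise in the `true` case.
    public static bool TryGetValue<T>(this IValueHolder<T> holder, [NotNullWhen(true)] out T? value)
        where T : notnull
    {
        value = holder.HasValue ? holder.Value : default;
        return holder.HasValue;
    }
}
```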
@sharwell I see, and now I understand why they used both attributes in `ConditionalWeakTable`. The fact that the behaviour appears to be a little "awkward" as soon as covariance comes into play seems to be by design, and it can be overcome by using `var` instead of the concrete type.
I feel there is some bug here because:

1. The user did not explicitly supply the `<string>` type argument to `TryGetValue`.
2. We treat `var` as implicitly nullable-annotated, so it doesn't make sense to me that `var` is not contributing its nullability to type inference while `string?` is contributing its nullability.

Curious what you think @jcouv.
@jcouv?
The covariance of `IValueHolder<out T>` and the `out string? value` argument cause the compiler to infer `TryGetValue<string?>(this IValueHolder<string?> holder, [MaybeNullWhen(false)] out string? value)`. This assigns a maybe-null value in the when-true case, so the warning is expected.

Although inconvenient, this all seems by-design as far as I can tell.
As @RikkiGibson pointed out above, this can be fixed by explicitly providing type arguments, `TryGetValue<string>(...)`, or by using `var` (which doesn't contribute a type or nullability).

As @peter-perot pointed out, the signature could also be updated to `bool TryGetValue<T>(this IValueHolder<T> holder, [NotNullWhen(true)] out T? value)` (although he points out this goes against the .NET API guideline).

I don't see any problem with the `[NotNullWhen(true)] out T?` pattern. I could be missing something, but I think we should update the guideline.
Update: the problem with `[NotNullWhen(true)]` is when a nullable `T` is provided... That explains why the guideline is as it is.
There is indeed a problem with using `[NotNullWhen(true)] out T? value`. If `T` is substituted with `string?`, then the nullability analysis would be wrong. So I think the .NET API guideline is correct after all.
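To make that concrete, a hedged sketch of the failure mode, assuming a holder whose `HasValue` can be true while `Value` is null:

```csharp
// T is substituted with string?, which the unconstrained signature allows.
IValueHolder<string?> holder = new ValueHolder<string?>(null);

if (holder.TryGetValue(out string? value))
{
    // NotNullWhen(true) tells the compiler `value` is non-null here, but
    // for T = string? the holder may report HasValue while Value is null,
    // so this can throw a NullReferenceException with no compiler warning.
    System.Console.WriteLine(value.Length);
}
```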
In general, the nullable annotation of the variable we are assigning to does not affect the analyzed nullability of the thing we are assigning. E.g., in `string? x = M(...)` or `string x = M(...)` or `var x = M(...)`, the nullable flow state of `x` is always the same: it is whatever we got out of `M` (assuming no weird conversion stuff is in play).

I am tempted to think of an out argument as being the same thing. It doesn't make sense to me that whether an out argument allows null affects whether we think null is actually being written to it. The only exception is when nothing else provides a bound for a type parameter, and the out argument's annotation is really the user telling us what nullability is going into the variable.

That said, I am not sure there is a simple and sensible way to encode this idea into the type argument inference rules, and in a way which justifies the breaking(?) change and the time investment.
It does disturb me a bit that the user is sort of backed into a corner here. I assume a warning will occur if `out string` is explicitly used, and it feels like using `var` here is somewhat of a "trick" to let the variable be nullable without having to contribute a nullable bound to the type inference.
**Version Used:** VS 17.8.7 (Compiler Version 4.8.0-7.23572.1, Language Version 12.0)

**Steps to Reproduce:** Try to compile the code below (see also sharplab).

**Diagnostic Id:** CS8600

**Expected Behavior:** Should compile without warning.

**Actual Behavior:** Issues a warning.

**Observation:** If you remove the covariance from the interface definition like ..., the code compiles without warning.
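A minimal reconstruction of the setup under discussion, pieced together from the fragments quoted throughout the thread; the exact member shapes and the `HasValue` implementation are assumptions:

```csharp
using System.Diagnostics.CodeAnalysis;

public interface IValueHolder<out T>
{
    bool HasValue { get; }
    T Value { get; }
}

public class ValueHolder<T> : IValueHolder<T>
{
    public ValueHolder(T value) => Value = value;

    public bool HasValue => Value is not null;
    public T Value { get; }
}

public static class ValueHolderExtensions
{
    public static bool TryGetValue<T>(this IValueHolder<T> holder, [MaybeNullWhen(false)] out T value)
    {
        value = holder.HasValue ? holder.Value : default;
        return holder.HasValue;
    }
}

public class Repro
{
    public void M()
    {
        IValueHolder<string> holder = new ValueHolder<string>("foo");

        // Covariance lets inference pick TryGetValue<string?>, so in the
        // when-true branch `value` is still maybe-null and the assignment
        // below produces warning CS8600.
        if (holder.TryGetValue(out string? value))
        {
            string copy = value; // warning CS8600
        }
    }
}
```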