dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

Proposal: ExperimentalAttribute #31542

Closed jodydonetti closed 3 years ago

jodydonetti commented 4 years ago

The scene

Sometimes there's the need to create (for personal/internal use) and maybe even release (for public consumption) some code that is useful in and of itself, but is just not ready to be set in stone regarding its shape, API surface area, namespaces and so on. Maybe we are talking about an early-stage pre-v1.0 release, where the expectations about stability of code/design/API are already clear, but maybe the code is part of a new release that is adding new capabilities along the way.

The need

There should be a clear way of warning developers that the specific feature they are using is marked as experimental, meaning that it can change shape, break or disappear at any time in the near future, until it eventually gets stabilized.

The solution

The new ExperimentalAttribute can clearly signal all that has been said above, with the optional custom message already familiar from the ObsoleteAttribute. Time-wise, this new attribute points in the opposite direction: "obsolete" means "this thing had its time, don't use it anymore", while "experimental" means "it's not yet ready for prime time".

namespace System
{
    [AttributeUsage(AttributeTargets.All, AllowMultiple = false, Inherited = true)]
    public sealed class ExperimentalAttribute : Attribute
    {
        public ExperimentalAttribute();
        public ExperimentalAttribute(string message);
        public string Message { get; }
    }
}
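Since the proposed attribute doesn't exist in the BCL, the sketch below defines a stand-in locally (matching the shape above) to show how it could be applied and how tooling could surface the message via reflection; the `FancyCache` type and its message are invented for the example.

```csharp
using System;
using System.Reflection;

// Stand-in for the proposed attribute (not part of the BCL at the time of this proposal).
[AttributeUsage(AttributeTargets.All, AllowMultiple = false, Inherited = true)]
public sealed class ExperimentalAttribute : Attribute
{
    public ExperimentalAttribute() { }
    public ExperimentalAttribute(string message) => Message = message;
    public string Message { get; }
}

[Experimental("This cache API is still experimental and may change or disappear.")]
public class FancyCache
{
    public void Set(string key, object value) { /* ... */ }
}

public static class Program
{
    public static void Main()
    {
        // An analyzer or doc generator could surface the message like this:
        var attr = typeof(FancyCache).GetCustomAttribute<ExperimentalAttribute>();
        Console.WriteLine(attr?.Message);
    }
}
```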

The output

It should be possible to configure which output it would produce (info/warning/error), either per project/solution or globally. It would probably also be a good idea to be able to configure that via an .editorconfig setting, like other already-existing rules:

# Experimental features
dotnet_use_code_with_experimental_attribute = false : error

The above example means "do not allow using code marked as experimental, and treat such usage as an error".

The default should probably be = true : warning, meaning experimental code can be used, but its usage should produce a warning.
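For comparison, the analyzer-severity syntax that .editorconfig already supports could express the same intent, assuming the feature shipped with a dedicated diagnostic ID (the `EXP0001` ID below is hypothetical):

```ini
# Hypothetical diagnostic ID for "usage of code marked as experimental"
# Severity can be: error, warning, suggestion, silent, none
dotnet_diagnostic.EXP0001.severity = warning
```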

(screenshot: exp-attr-output)

Other options considered

A way to warn developers about code that may be problematic to use is the ObsoleteAttribute: the problem is that it is kind of heavy-handed, as already said by others, and the wording "obsolete" would probably be confusing for something "new".

There's also the pre-release story inside NuGet, but there the scope is package-wide, so it's not granular enough and not suitable for specific classes/interfaces/methods/props/etc being added to an existing codebase. As a thought exercise, imagine having the "obsolete" concept but only expressible package-wide for an entire NuGet package: that wouldn't cut it. Additionally, it's something available only in NuGet packages, and only usable as a filter while adding new ones. Instead, it should be possible to open an existing project, set a line in .editorconfig (or the options panel or similar), compile, and see if anything experimental is being used. Also, it should be usable in codebases that may not be published as NuGet packages but are referenced in other ways, only inside the company or team, where it would still be useful to communicate with other team members about the potential breakage that may ensue from using some pieces of code.

Clockwork-Muse commented 4 years ago

A way to warn developers about code that may be problematic to use is the ObsoleteAttribute: the problem is that it is kind of heavy handed as already said by others, and the wording "obsolete" would probably be confusing for something "new".

ObsoleteAttribute is there to allow developers to move away from that particular API, so reusing it this way is definitely counter to intention. It's primarily for the case where you want to enable consumers to take new versions of existing packages without them needing to (immediately) change any code.

There's also the pre-release story inside Nuget, but there the scope is package-wide so it's not granular enough, not suitable for specific classes/interfaces/methods/props/etc being added to an existing codebase

... The problem is that it feels like everything new in a pre-release package should be marked with any supposed ExperimentalAttribute; after all, until you publish the final package, the entire API is "not final". There might be some argument to a "confidence" level of some sort, but that seems better handled at the documentation level.

Really, probably the better way to handle incoming APIs in a NuGet package would be to improve the tooling such that if you're including a pre-release package you get warnings on any new APIs.

Additionally it's something available only in nuget packages, and only usable as a filter while adding new ones.

I'm generally a firm believer that even single-consumer libraries should generally be packaged up separately - not due to the forlorn hope they'll be reused, but rather to manage dependencies and separate changes.

Also, it should be usable in codebases that may not be published as nuget packages but referenced in other ways only inside the company or team, and it would still be useful to communicate with other team members about the potential breakage that may ensue by using some pieces of code.

You can (and should) have private NuGet servers. Pre-release packages would still solve this issue.

jodydonetti commented 4 years ago

ObsoleteAttribute is there to allow developers to move away from that particular API, so reusing it this way is definitely counter to intention. It's primarily for the case where you want to enable consumers to take new versions of existing packages without them needing to (immediately) change any code.

Yep, that was my point. I was just pointing out explicitly that there is something already in the framework that can, in general, "warn" developers about code usage. For example, I've used it in the past (only internally in my team) as a signal for other devs, with a custom message along the lines of "EXPERIMENTAL: better not to use this yet". Although it served its purpose effectively as a warning sign, it always smelled bad, and when thinking about a framework-level feature it is surely not something to consider seriously.

There might be some argument to a "confidence" level of some sort, but that seems better handled at the documentation level.

On this I certainly agree.

Really, probably the better way to handle incoming APIs in a NuGet package would be to improve the tooling such that if you're including a pre-release package you get warnings on any new APIs.

I thought about that, but here's one of the problems I see: if you reference a package marked as pre-release and there are no special attributes to identify "new stuff", the tooling can only warn about the usage of everything in that package, and that would render this path basically useless. Also, if by "on any new APIs" you mean a warning only for stuff that has been added, the question becomes: how would the tooling select "just the new stuff"? I can't think of a sort of automatic diffing of the API surface area between package versions, because it would be too complicated, brittle and unreliable, and because it would even be hard to just determine what the "base" version for the diffing should be (the previous version? the last non-pre-release one?). Moreover, new stuff that is not experimental would be flagged with a warning even when that may not be warranted.

Instead, if code authors simply marked experimental stuff as experimental, just like they already do with obsolete stuff, all these problems would go away.

jodydonetti commented 4 years ago

Also, it would be interesting to hear from F# people ( @cartermp ? others?), who I just discovered have had an ExperimentalAttribute for some time, to see how they are using it, whether they have already encountered problems or design flaws, what the tooling around it is or should be, etc.

Clockwork-Muse commented 4 years ago

Base version would most likely have to be previous version (in semantic versioning) non pre-release (maaaaayyybe overrideable). Diffing a reflection-gleaned tree should be relatively painless (and trivially cacheable).
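A rough illustration of the reflection-gleaned diff being discussed: comparing the public method signatures of two types standing in for the "old" and "new" versions of an API surface. The `WidgetV1`/`WidgetV2` types and signatures are invented for the example; a real tool would walk whole assemblies and cache the result.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Stand-ins for the same type as shipped in two package versions.
public class WidgetV1 { public void Load(string path) { } }
public class WidgetV2 { public void Load(string path) { } public void LoadAsync(string path) { } }

public static class Program
{
    // Collect a simple string signature for each declared public instance method.
    static HashSet<string> PublicSurface(Type t) =>
        t.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
         .Select(m => m.Name + "(" + string.Join(",", m.GetParameters().Select(p => p.ParameterType.Name)) + ")")
         .ToHashSet();

    public static void Main()
    {
        // Anything in the new surface but not the old one is "new API".
        var added = PublicSurface(typeof(WidgetV2)).Except(PublicSurface(typeof(WidgetV1)));
        foreach (var sig in added)
            Console.WriteLine("new API: " + sig);
    }
}
```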

Even more, new stuff not experimental would be marked as a warning where that may not be the case.

That's kinda my point. To me, everything new in a prerelease is experimental (subject to some confidence level). You're using a prerelease package, all bets are off.

cartermp commented 4 years ago

@njy although we have that attribute defined in FSharp.Core, it's rarely used in practice. We don't use it for F# itself.

jodydonetti commented 4 years ago

Base version would most likely have to be previous version (in semantic versioning) non pre-release (maaaaayyybe overrideable). Diffing a reflection-gleaned tree should be relatively painless (and trivially cacheable).

I have some doubts that this would work in a reasonably predictable/stable way and without hurting perf/user experience. Having said so, I certainly get your pov.

To me, everything new in a prerelease is experimental (subject to some confidence level). You're using a prerelease package, all bets are off.

Just thinking out loud here, but wouldn't that include minor "new" things like new overloads of existing methods, while missing things whose internals (impl) changed without a change in the method signature?

I mean, the first one (catching new overloads of existing methods) can even be an interesting concept: suppose you were previously calling a method without passing an optional param, and now that same line of code would match a new overload, or vice versa. It's something to think about.

Also, something like this could in theory (if feasible/stable/performant, as said above) be just a code analyzer.

Any thoughts on this?

Clockwork-Muse commented 4 years ago

Just thinking out loud here, but wouldn't that include minor "new" things like new overloads of existing methods while missing things that are changed internally (impl) without a change in the method signature?

... Why would you ever want to care about the internal implementation changing (assuming the same tests pass)? Anything that did this should explicitly include only publicly-visible API surfaces.

I mean, the first one (catching new overloads of existing methods) can even be an interesting concept, like suppose you were previously calling a method with a non-passed optional param, and now that same line of code would match a new overload or vice versa.

Overload resolution prefers methods with fewer parameters in the face of optional arguments, probably for exactly this reason, so adding a new method with optional arguments would just be "standard new".
Most library consumers (and existing automated analyzers) are going to look at you funny if you add an overload that only differs in removing an optional parameter, unless you also mark the original deprecated. At which point normal deprecation rules apply. If the behavior differs significantly, you would be tarred and feathered (a new method name would be appropriate, most likely).
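The overload-resolution behavior mentioned above can be checked directly: when both overloads are applicable, C# picks the candidate whose parameter list is fully supplied by the arguments over one that would need an optional parameter filled in. A minimal sketch:

```csharp
using System;

public static class Program
{
    static string M(int a) => "M(int)";
    static string M(int a, int b = 0) => "M(int, int = 0)";

    public static void Main()
    {
        // Both overloads are applicable to M(1); the compiler prefers the one
        // that does not rely on a default argument.
        Console.WriteLine(M(1));    // resolves to M(int)
        Console.WriteLine(M(1, 2)); // only M(int, int = 0) is applicable
    }
}
```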

Also, something like this could be in theory (if feasible/stable/performant as said above) be just a code analyzer.

... I don't know if analyzers get access to package versions, but potentially yes.

jodydonetti commented 4 years ago

Overload resolution prefers methods with fewer parameters in the face of optional arguments, probably for exactly this reason, so adding a new method with optional arguments would just be "standard new".

It was just an example: if that is in fact the precedence for overload resolution, then just use the opposite example, like a 3-param method with the last one optional where now there's also a new 2-param overload. But again, that particular one was just an example, the first that popped into my mind and not necessarily a real use case. I'll try to come up with a better one.

sharwell commented 4 years ago

I like the idea of [Preliminary]. One of my older breaking changes documents used <preliminary/> in a documentation comment and explained it as follows (the attribute would work the same way but have a different form):

This library may include types and/or members which are designated as preliminary features. The preliminary feature designation is indicated by a clear note at the top of the associated documentation page. In the library source code, preliminary features are indicated by including the <preliminary/> element to the XML documentation comment for the feature. While preliminary features are much more "flexible" during the release cycle of the library, certain rules do apply in order to ensure the stronger rules provided for stable features will not be violated by a change to a preliminary feature. The following list includes examples of these rules, but other rules may be imposed by basic logical constraints. The API elements referred to in the following list are assumed to be restricted to the publicly-exposed API of the library. The terms "member" and "members" refer to any method, property, or event.

  • A member may only refer to a preliminary type in its signature if it is also marked preliminary.
  • An interface may only include a preliminary member if it is also marked preliminary.
  • An interface may only extend a preliminary interface if it is also marked preliminary.
  • A class may only include a preliminary abstract member if either it is also marked preliminary, or all constructors for the class are marked internal. This restriction also applies to abstract classes which do not implement an abstract member declared in a base class.
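As an illustration of the first rule, a naive reflection-based check could flag a non-preliminary member whose signature mentions a preliminary type. The `[Preliminary]` stand-in attribute and the `Service`/`UnstableOptions` types below are invented for the example; a real check would live in an analyzer.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Local stand-in for the discussed [Preliminary] marker.
[AttributeUsage(AttributeTargets.All)]
public sealed class PreliminaryAttribute : Attribute { }

[Preliminary]
public class UnstableOptions { }

public class Service
{
    // Violates the rule: refers to a preliminary type but is not itself marked.
    public void Configure(UnstableOptions options) { }

    [Preliminary]
    public void ConfigureExperimental(UnstableOptions options) { } // OK
}

public static class Program
{
    public static void Main()
    {
        foreach (var m in typeof(Service).GetMethods(
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            bool touchesPreliminary = m.GetParameters()
                .Any(p => p.ParameterType.IsDefined(typeof(PreliminaryAttribute)));
            bool isMarked = m.IsDefined(typeof(PreliminaryAttribute));
            if (touchesPreliminary && !isMarked)
                Console.WriteLine("violation: " + m.Name);
        }
    }
}
```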

tannergooding commented 4 years ago

I'm not sure how this differs from or improves upon what we are already doing today.

APIs don't go into CoreFX until officially reviewed/approved. APIs in CoreFX are considered "preview" until we RTM and ship them as stable on NuGet (this is designated by -preview, -alpha, -beta, etc monikers).

We sometimes ship out of band packages as .Experimental alongside a given release. These are not "stable" and subject to change in the future, etc.

Clockwork-Muse commented 4 years ago

@tannergooding - I believe the proposal is not necessarily that we'd use it here, but that it would be included here for library (and tool) writers to use it themselves.

Although my position is otherwise the same as yours, I think: "unfinal packages are subject to change at any time".

sharwell commented 4 years ago

Without analyzer support, it's extremely easy to accidentally use a preliminary API. It's hit Roslyn users a few times. Regardless of the manner in which preliminary APIs are identified, it needs to be possible to create an analyzer that knows how to flag uses of them.

jodydonetti commented 4 years ago

I like the idea of [Preliminary]. One of my older breaking changes documents used <preliminary/> in a documentation comment and explained it as follows (the attribute would work the same way but have a different form)

Yep, that's the general concept. Regarding the name, either Preliminary, Experimental or something else would be OK; I came up with "experimental" and later discovered it is what's already used in F#, so it's probably more uniform.

jodydonetti commented 4 years ago

I'm not sure how this differs from or improves upon what we are already doing today.

This would be more granular, for example. I don't know if it's the right way, but I had this need in the past and thought about a way to have something available in the framework to deal with those situations, just like the [Obsolete] attribute that helped me a lot in the past.

APIs don't go into CoreFX until officially reviewed/approved. APIs in CoreFX are considered "preview" until we RTM and ship them as stable on NuGet (this is designated by -preview, -alpha, -beta, etc monikers).

As correctly said by @Clockwork-Muse I was not necessarily proposing the use in CoreFX itself (maybe, maybe not, it's not up to me): I created the issue here because that is where it should be defined to be broadly usable, just like the [Obsolete] attribute.

We sometimes ship out of band packages as .Experimental alongside a given release. These are not "stable" and subject to change in the future, etc.

Yep, I saw that, but that requires different packages and probably different namespaces (eg: https://github.com/dotnet/machinelearning/blob/master/src/Microsoft.ML.Experimental/MLContextExtensions.cs ) that should later be changed everywhere they have been referenced, even if the final shape of the API remains the same.

If you think about it, you would still have the ability to have separate assemblies/packages if needed, but with the added bonus of warning developers about experimental stuff also inside already existing packages, without the need for new namespaces plus later refactoring, and of warning them in a granular way only about specific functionalities/classes/methods/props/etc.

Also, I know it's not a "just add an ExperimentalAttribute class and you're done" kind of effort, because of tooling support, documentation, etc.: having said that, effort-wise it may not be such a huge undertaking to get some initial results. The first step may be to create the attribute and generate a warning/error if needed; additional features/tooling around the attribute could be added on top of it in the future, if its usage grows over time and the community appreciates it.

Does it make sense?

jodydonetti commented 4 years ago

@Clockwork-Muse I thought a little bit about the analyzer approach or the likes. Here's what it should probably do:

  1. look at all the pkg deps available (also, sub-deps?)
  2. filter out only the ones with preview/alpha/beta monikers
  3. contact the pkg source (NuGet + others) and, for every one, get a list of all the previous versions (is this possible?) to get the newest previous stable version, or do the same via a hypothetical specialized call
  4. if one is found for each dependency, analyze all the types inside each assembly and, for each one, see if there has been a change with respect to the shape of the API
  5. for each change, track its usage inside the current proj/sln
  6. obviously more edge cases, special situations and whatnot

In general it would be cool, but I suspect it would not be an easily obtainable feature, on top of being a path full of hard situations to go over.

Thoughts?

Clockwork-Muse commented 4 years ago

  1. look at all the pkg deps available (also, sub deps?)
  2. filter out only the ones with a preview/alpha/beta monikers
  3. contact the pkg source (nuget + others) and, for every one, get a list of all the previous versions (is this possible?) to get the newest previous stable version, or do the same via an hypothetical specialized call
  4. if one is found for each dependency, analyze all the types inside each assembly and, for each one, see if there has been a change in respect to the shape of the api
  5. for each change, track its usage inside the current proj/sln
  6. obviously more edge cases, special situations and whatnot

This is what I'm thinking, yes.

  1. I haven't looked into it, so I don't know if analyzers get access to package/solution information (as opposed to data from a loaded and referenced dll) - most of the info from a quick look is about writing an analyzer that we include with our own package. We don't care about internal dependencies - public transitive ones also become our dependencies, though, if we use them.
  2. If you can get package info, this is trivial
  3. There's an explicit API call for that, and you can even exclude pre-release packages.
  4. Yes. Since we don't need to actually load/execute the code inside them, you'd probably only need to load into a reflection-only context. You would diff the trees/compare all symbols, and store/cache the difference only. Almost certainly this can be computed on pre-release package load, once.
  5. The analyzer checks if any symbol used is in the cached diff tree, and marks it somehow.

... but with the added bonus of warning developers about experimental stuff also inside already existing packages ...

If you publish a full, non pre-release, package with an "experimental" sub-section, you will earn my ire.
This sort of behavior makes some small sense when distribution is difficult and often requires that consumers build source on different machines, like in traditional C++. It makes absolutely no sense when you have dependency management tools like NuGet. Make a separate package for just the new stuff, or make a pre-release package with experimental stuff added.

Note that namespaces in C# are not connected to the package name (which sometimes drives me nuts, having come from Java), so separate namespaces are not required (and in fact several packages in corefx make use of this). You don't necessarily need separate packages (just separate versions), although doing so can help manage things.

sharwell commented 4 years ago

@njy The approach of looking at monikers does not address cases where you want to ship a stable package with one or more APIs still marked as preliminary. It also doesn't allow the granularity to be controlled (e.g. the use of a preliminary API would force the entire assembly to be preliminary). NuGet already provides direct validation for prerelease packages at this granularity, so I don't see the value in duplicating it elsewhere.

We sometimes ship out of band packages as .Experimental alongside a given release.

It is not possible to add preliminary members to existing APIs using this approach.

Also, I know it's not a "just add an ExperimentalAttribute class and you're done" kind of effort because of tooling support, documentation, etc: having said so, effort wise it may not be such a huge undertaking to have some initial results.

The BannedApiAnalyzers package could be updated with new analyzers for this scenario. It already contains shared infrastructure for examining code to ensure disallowed APIs (for an extensible definition of disallowed) are not used.

It would not be difficult to expand the analysis to cover the rules for applying [Preliminary] to code, and update PublicApiAnalyzers to clearly indicate that these APIs are not considered "shipped".

jodydonetti commented 4 years ago

Note that namespaces in C# are not connected to the package name [...], so separate namespaces are not required

Of course it is not required: my observation was that if it was decided to use a different namespace to better mark the experimental state of the thing, it was because of a need, and that need may be better covered by the proposed attribute, which would also not require a later forced refactoring. But it's just a possibility to reason about, of course.

jodydonetti commented 4 years ago

@njy The approach of looking at monikers does not address cases where you want to ship a stable package with one or more APIs still marked as preliminary. It also doesn't allow the granularity to be controlled (e.g. the use of a preliminary API would force the entire assembly to be preliminary). NuGet already provides direct validation for prerelease packages at this granularity, so I don't see the value in duplicating it elsewhere.

Yep, I agree; as I said initially, that was my point and the reason for this proposal. The point now, methinks, is to determine whether this is a scenario that we want to endorse and support, or not, because there are better ways to do it. My observation, based on personal experience, is that people do in fact sometimes release stable packages with some experimental parts, but without the ability to clearly mark parts of them as experimental in a machine-readable way. Some seem to agree, some don't. It would be interesting to gather more opinions on this.

The BannedApiAnalyzers package could be updated with new analyzers for this scenario. It already contains shared infrastructure for examining code to ensure disallowed APIs (for an extensible definition of disallowed) are not used.

It would not be difficult to expand the analysis to cover the rules for applying [Preliminary] to code, and update PublicApiAnalyzers to clearly indicate that these APIs are not considered "shipped".

I didn't know about those analyzers, I'll take a look as soon as I can, thanks!

am11 commented 4 years ago

Roslyn seems to support one ExperimentalAttribute from UWP's Windows.Foundation.Metadata namespace: https://github.com/dotnet/roslyn/blob/10ccd1b/src/Compilers/Core/Portable/Symbols/Attributes/AttributeDescription.cs#L550

This doc highlights comparison with Obsolete and Deprecated attributes: https://github.com/dotnet/roslyn/blob/10ccd1b/docs/features/ExperimentalAttribute.md#obsoleteattribute-and-deprecatedattribute

Some related discussion in the original PR: https://github.com/dotnet/roslyn/pull/18708.

jskeet commented 4 years ago

My observations based on personal experience was that people in fact do sometimes release stable packages with some experimental parts, but without the ability to clearly mark in a machine readable way parts of them as experimental.

Yes, I've seen this often, and wanted to do it too.

In particular, I think it makes sense to differentiate between "this is an experimental API surface that may go away" and "this is an experimental feature that may blow up in your face" - I wouldn't want to include the latter in any GA libraries, but the former is really useful if it's opt-in. If I'm a consumer writing production code, I want the certainty of the code working, but I may want to be able to opt into new features where the API surface is still being explored.

One issue I've only just thought about with this: it really messes with transitive dependencies and compatibility. If I write a library X which depends on the experimental API surface in Z (even behind my own experimental API attribute), but then an application depends on my library and on another library Y which depends on a later version of Z (within the same major version) where the API has been broken, then we're in trouble.

That might be enough to scupper the whole idea, unfortunately :(

jodydonetti commented 4 years ago

Yes, I've seen this often, and wanted to do it too.

I'm glad I'm not the only one thinking this 😅

In particular, I think it makes sense to differentiate between "this is an experimental API surface that may go away" and "this is an experimental feature that may blow up in your face" - I wouldn't want to include the latter in any GA libraries, but the former is really useful if it's opt-in. If I'm a consumer writing production code, I want the certainty of the code working, but I may want to be able to opt into new features where the API surface is still being explored.

OK, I think I understand what you are saying; I wonder, though, if that is not too specific. In case it makes sense, an additional flag like bool Unstable could work, something akin to the bool Error flag in the ObsoleteAttribute. Of course, another way would be to directly have an UnstableAttribute, but I don't know which one would be better, both in theory and in practice.
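A sketch of how such a flag could look, mirroring ObsoleteAttribute's (string, bool) constructor shape; the whole attribute here is speculative, defined locally for illustration only.

```csharp
using System;

// Speculative shape: an Unstable flag alongside the message,
// mirroring ObsoleteAttribute's (string message, bool error) constructor.
[AttributeUsage(AttributeTargets.All, AllowMultiple = false, Inherited = true)]
public sealed class ExperimentalAttribute : Attribute
{
    public ExperimentalAttribute() { }
    public ExperimentalAttribute(string message) => Message = message;
    public ExperimentalAttribute(string message, bool unstable) : this(message) => Unstable = unstable;

    public string Message { get; }

    // true = "may blow up in your face", false = "only the API surface may change".
    public bool Unstable { get; }
}

public static class Program
{
    public static void Main()
    {
        var a = new ExperimentalAttribute("API shape may change", unstable: false);
        Console.WriteLine(a.Message + " / unstable: " + a.Unstable);
    }
}
```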

One issue I've only just thought about with this: it really messes with transitive dependencies and compatibility. If I write a library X which depends on the experimental API surface in Z (even behind my own experimental API attribute) but then an application depends on my library and another library Y which depends on a later version of Z (within the same major version) where the API has been broken, then we're in problems.

The same could be said about the usage of the ObsoleteAttribute: of course, if something is marked as obsolete it doesn't mean it will not work. Continuing this analogy, and linking back to the dichotomy you talked about above between experimental-but-working vs experimental-and-unstable, the thinking may be that if a transitive dependency used the former, that would not be a problem per se, while the latter may make your own code unstable because of unstable transitive dependencies. But, and I'm thinking out loud here, couldn't the same be said about regions of unsafe code? All in all, I don't think it's something we can prevent 100%, and it is not being prevented right now anyway.

Does it make sense? Opinions?

jskeet commented 4 years ago

The same could be said about the usage of the ObsoleteAttribute

I don't think so. If something is introduced in 1.0 and made obsolete in 1.1, you should still include it in 1.2. Removing it would require a major version bump. Making something obsolete can cause compile-time breakage of course if you have warnings-as-errors turned on, but I think that's generally accepted. But it's not a breaking binary compatibility change.

If something were introduced as experimental in 1.1 and then removed entirely in 1.2, that would be very much a binary breaking change.

(I don't think unsafe is relevant here at all - unsafe code is unsafe in ways that are entirely orthogonal to API versioning.)

jodydonetti commented 4 years ago

Ah! I had not thought about binary compatibility, you are right about that. I was trying (maybe badly) to make a parallel between overall expectations.

michael-hawker commented 3 years ago

Yeah, we've been wanting this too. We ship a library of various helpers, so sometimes, when we're ready to ship, we have something we're still not sure about. Rather than trying to maintain a branch or remove it entirely from the release bits, we can keep it in but mark it as experimental. Then at the next release we can unmark it, or still break it in a major version.

The dependency issue is a real one too. Even if your whole package is marked preview, it may be used in a 'stable' parent package. The developer using that package can still just reference the inner package dependency without knowing about it.

Related: https://github.com/NuGet/Home/issues/10375

Sergio0694 commented 3 years ago

This proposal could probably be considered as superseded by the new [RequiresPreviewFeatures] attribute that is available in .NET 6, which effectively provides the same support to mark individual APIs as "being experimental" and thus requiring consumers to add <EnablePreviewFeatures> to their .csproj file. You can see the full approved design docs here: https://github.com/dotnet/designs/blob/main/accepted/2021/preview-features/preview-features.md#api-analyzer.

The only notable difference with the attribute proposed here (unless I'm missing something) is the lack of the ability to also specify a message (currently there is just a parameterless constructor), but there is already a proposal for that as well: see https://github.com/dotnet/runtime/issues/56498.
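The opt-in mentioned above is a project-level property. Assuming the .NET 6 preview-features design linked above, a consuming project would enable it with a fragment along these lines (sketch):

```xml
<!-- Opting a consuming project into preview APIs (.NET 6 preview-features design) -->
<PropertyGroup>
  <EnablePreviewFeatures>true</EnablePreviewFeatures>
</PropertyGroup>
```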

terrajobst commented 3 years ago

@Sergio0694 I agree. Let me read through this again and then close

jodydonetti commented 3 years ago

As the author of this proposal I would say, at a first look, definitely yes! Tomorrow morning I'll take a better look at the new proposal, but it seems to me it gets the same results so I'd close this. Since he was in this discussion I would really like @jskeet 's opinion, too. Thanks for the update and the feature itself!

terrajobst commented 3 years ago

BTW, a more detailed design document for [RequiresPreviewFeatures] is here:

https://github.com/dotnet/designs/blob/main/accepted/2021/preview-features/preview-features.md

The key difference of [RequiresPreviewFeatures] and [Experimental] seem to be:

  1. [RequiresPreviewFeatures] is an error by default (but we're in the midst of re-reviewing this to ensure it doesn't negatively affect the inner loop).
  2. [RequiresPreviewFeatures] by default is viral. IOW, you can only use it from places also marked as [RequiresPreviewFeatures]. However, developers can use #pragma-suppressions to tweak this behavior. Basically, the goal is to make it very difficult for consumers to accidentally depend on preview functionality, be that directly or indirectly via another library.
  3. [RequiresPreviewFeatures] is mostly meant for runtime/framework provided preview functionality. However, I don't think there is a concern if library authors use the same mechanism for their own API surface.
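For reference, the viral behavior in point 2 can be sketched like this (a minimal sketch, assuming the attribute shape that shipped in .NET 6 under `System.Runtime.Versioning`; the exact diagnostic text may differ):

```csharp
using System.Runtime.Versioning;

public static class Library
{
    [RequiresPreviewFeatures]
    public static void PreviewApi() { }

    // Diagnostic here: a non-preview member may not call a preview one
    // unless the project opts in via <EnablePreviewFeatures>.
    public static void StableApi() => PreviewApi();

    // OK: the caller is itself marked, so the requirement flows upward
    // to whoever calls AlsoPreview instead.
    [RequiresPreviewFeatures]
    public static void AlsoPreview() => PreviewApi();
}
```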

@jodydonetti I'm closing this as I believe [RequiresPreviewFeatures] to be a superset of the feature you requested. Please let me know if you disagree.

jodydonetti commented 3 years ago

Hi @terrajobst , I finally took the time to read your entire design document.

The key differences between [RequiresPreviewFeatures] and [Experimental] seem to be:

  1. [RequiresPreviewFeatures] is an error by default (but we're in the midst of re-reviewing this to ensure it doesn't negatively affect the inner loop).

In both cases (warning vs error) I don't see this as an issue.

  2. [RequiresPreviewFeatures] by default is viral. IOW, you can only use it from places also marked as [RequiresPreviewFeatures]. However, developers can use #pragma-suppressions to tweak this behavior. Basically, the goal is to make it very difficult for consumers to accidentally depend on preview functionality, be that directly or indirectly via another library. [...] <GenerateRequiresPreviewFeaturesAttribute>

These two features are really chef's kiss, and remove a lot of the potential friction to adoption.

One additional thing I'm thinking about: while #pragma would suppress specific usage instances, something like [assembly: SuppressMessage("Performance", "HAA0101:Array allocation for params parameter", Justification = "<Pending>")] in the GlobalSuppressions.cs file would mark preview-feature usage once and for all in a project. I don't know enough about the .NET compiler infrastructure to know whether these two mechanisms come for free or have to be explicitly coded, so I don't know if there would be an extra burden to support them, but I thought it was something to consider.

For example, in my "quantizer" scenario (see below), I would like to say "ok I know the quantizer thing is in preview, I got it, just let me use it in this project" without having to mark every single usage every single time.

  3. [RequiresPreviewFeatures] is mostly meant for runtime/framework provided preview functionality. However, I don't think there is a concern if library authors use the same mechanism for their own API surface. [...] Building a library that offers preview features [...] Can 3rd parties introduce their own preview APIs? In this design the system has no way to prevent this and it will just work.

As a library author this is very important, and it is the reason I opened this [Experimental] attribute proposal. With your green light on using it for non-runtime/framework code too, I'm all good.

I would only like to add one seemingly minor but practically important thing. Seeing that the original need that drove this "preview" feature design was mainly the runtime, and seeing all the extra work done in your design document in that regard (like adding compiler support, multi-targeting, etc.), I think MS should be super clear about whether this will be officially allowed (and, so, supported) for lib authors to use as well, or if it's just a "it should work now, no guarantees for the future 🤷" kind of thing. I'm saying this because I can envision people like me going all in, only to discover later on that using it for libraries goes against MS's vision for it, forcing MS either to freeze the feature to avoid breaking libs, or to change it as needed and break lib authors. I just wanted to clearly double-check this, since for MS it may even become a burden in the future.

@jodydonetti I'm closing this as I believe [RequiresPreviewFeatures] to be a superset of the feature you requested. Please let me know if you disagree.

In general I agree: for all intents and purposes it seems to me it fits the needs I envisioned. But just to be super sure I'd like to draw a practical usage scenario for you to confirm (or not), covering both sides of the spectrum (lib author/consumer).

Scenario: as a lib author

As the author of my Xyz library I decide to experiment with a new core feature (let's say, adding a Quantizer) that cannot be modeled as a separate "plugin" package released independently, but that will also take a lot of time to explore. I'd like to let my users play with it and give me feedback, while also releasing official versions of the lib with other new finalized features, because those are needed by the community and are ready to be used in production, with tests, support and whatnot. Enter the [RequiresPreviewFeatures] attribute. So I mark the Quantizer prop in my XyzEngine with the attribute. I also mark with the attribute the IXyzQuantizer interface that defines how a quantizer should be modeled, along with its first two implementations (let's say FastXyzQuantizer and PreciseXyzQuantizer). I also add <EnablePreviewFeatures>True</EnablePreviewFeatures> and <GenerateRequiresPreviewFeaturesAttribute>False</GenerateRequiresPreviewFeaturesAttribute> to the csproj file. Finally I release a new version of my lib, with some great new officially supported features but also these preview bits.
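Concretely, the lib-author setup above might look something like this (a sketch only; all the Xyz type names are hypothetical, and the attribute is the one from `System.Runtime.Versioning`):

```csharp
using System.Runtime.Versioning;

// The whole quantizer surface area is marked as preview.
[RequiresPreviewFeatures]
public interface IXyzQuantizer
{
    byte[] Quantize(byte[] pixels);
}

[RequiresPreviewFeatures]
public sealed class FastXyzQuantizer : IXyzQuantizer
{
    public byte[] Quantize(byte[] pixels) => pixels; // placeholder impl
}

public class XyzEngine
{
    // Only this property is preview; the rest of XyzEngine stays stable.
    [RequiresPreviewFeatures]
    public IXyzQuantizer? Quantizer { get; set; }
}
```

with the corresponding csproj bits:

```xml
<PropertyGroup>
  <EnablePreviewFeatures>True</EnablePreviewFeatures>
  <!-- Don't mark the whole assembly as preview: only the quantizer is. -->
  <GenerateRequiresPreviewFeaturesAttribute>False</GenerateRequiresPreviewFeaturesAttribute>
</PropertyGroup>
```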

Scenario: as a lib consumer

As a consumer of the Xyz package I see there's a new version which contains a feature I really need, so I update the package ref and start using the new feature. Everything is fine, no warnings or errors, and I'm happy. Then while coding something I discover, thanks to Intellisense, that there's a new Quantizer feature: what!? Cool, let's try it! Then - and only then - an error (or warning, depending on how you finalize it) tells me that I'm now using a preview feature, so I should be careful: if I want to keep using it I should explicitly add a prop to my csproj to enable the usage of preview features, otherwise I can just stop using it.
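The opt-in step in this consumer scenario would presumably be a one-line change in the consumer's csproj, something like:

```xml
<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <!-- Added only after deciding to use the preview Quantizer feature. -->
  <EnablePreviewFeatures>True</EnablePreviewFeatures>
</PropertyGroup>
```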

My question is: is this scenario correct?

Design Document Notes

A couple of notes on the design document, since it is not possible to comment there.

Why did we use the term preview?

I'm glad the term initially envisioned was the same 😄 FWIW I also agree that in the realm of the runtime itself "preview" seems to make more sense. Also, even though normally the "preview" term is used with the runtime, I can easily see the OSS community adopting it instead of "experimental", so that there would be a more cohesive terminology in the entire .NET ecosystem.

Meaning of property in multi-targeted projects

The reason for this is that experimental features aren't expected to be backwards compatible. That's the entire point of making them experimental in the first place. In practice this means that a multi-targeted project will only be able to use experimental features in a single TFM, specifically the current one.

I understand the "not backwards compatible" reasoning, but I envision 2 potential problems with this one.

The first is that while .NET 6 is clearly the next major version after .NET 5, so .NET 6 is "the last/current" one, the same cannot be said for, say, .NET Standard vs .NET. It's pretty common to develop a lib for .NET Standard (nowadays .NET Standard 2.0 is typically the one with the widest reach) and multi-target to .NET 5 to add some extra features/overloads or change some impl details. What would happen there? Which TFM is the "last/current" one?

The second problem is: what if a big preview feature like the ones you mentioned takes multiple major releases to finalize? Let's say I add a preview feature in the .NET 5 version of a lib. As soon as .NET 6 comes out and I add multi-targeting for .NET 5+6, every new build of that lib from there on could no longer be "preview" for the .NET 5 target. I don't know if this makes sense, maybe I've missed something along the way.

Thanks for your time!

jodydonetti commented 3 years ago

Hi @terrajobst , any chance you've got some time for this?

terrajobst commented 3 years ago

One additional thing I'm thinking about is that while #pragma would suppress specific usage instances, something like [assembly: SuppressMessage("Performance", "HAA0101:Array allocation for params parameter", Justification = "<Pending>")] in the GlobalSuppressions.cs file would mark a preview feature usage once and for all, in a project.

AFAIK Roslyn's infrastructure handles this transparently for all analyzers already. You can suppress messages in various ways: #pragma, <NoWarn> in the project file, and SuppressMessageAttribute. They should all work. For the preview analyzer specifically, the idea would be that users generally allow this via <EnablePreviewFeatures> in the project file or by marking the assembly/type/method with [RequiresPreviewFeatures]. Suppressing is generally not recommended.
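The three suppression channels mentioned above would look roughly like this. A sketch only: CA2252 is the diagnostic ID the preview-features analyzer shipped with (verify against your SDK version), and `engine`/`FastXyzQuantizer` stand in for the quantizer example from this thread:

```csharp
// 1. Locally, around a single call site:
#pragma warning disable CA2252
engine.Quantizer = new FastXyzQuantizer();
#pragma warning restore CA2252

// 2. Assembly-wide, e.g. in GlobalSuppressions.cs:
[assembly: System.Diagnostics.CodeAnalysis.SuppressMessage(
    "Usage", "CA2252", Justification = "Preview usage is intentional.")]

// 3. Project-wide, in the csproj:
//    <NoWarn>$(NoWarn);CA2252</NoWarn>
```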

Seeing that the original need that drove this "preview" feature design was mainly runtime + seeing all the extra work done in your design document in that regard (like maybe adding compiler support, multi targeting, etc), I think MS should be super clear if this will be officially allowed (and, so, supported) for lib authors to also use, or if it's just a "now it should work, can't guarantee in the future 🤷" kind of thing.

The document calls this out.

The first is that while .NET 6 is clearly the next major version after .NET 5, so .NET 6 is "the last/current", the same cannot be said for example for .NET Standard VS .NET. It's pretty common to develop a lib for .NET Standard (typically nowadays .NET Standard 2.0 is the one with the major reach) and multi-target to .NET 5 to add some extra features/overloads or change some impl details. What would happen there? Which TFM is the "last/current" one?

I think that's fine. It just means that if a project's package graph uses different versions of your package, if you made a breaking change then there might not be a single version that works for everyone in the graph. But that's the same as with any breaking change.

The second problem is: what if a big preview feature like the ones you mentioned takes multi-major releases to finalize?

That's also fine. It just means that there are multiple potential bands for breaking changes.

Let's say I add a preview feature in the .NET 5 version of a lib. As soon as .NET 6 comes out and I add multi-targeting for .NET 5+6, every new build of that lib from there on would not be "preview" anymore for a .NET 5 target.

Not sure I understand. Why would that be?

jodydonetti commented 3 years ago

AFAIK Roslyn's infrastructure handles this transparently for all analyzers already. You can suppress messages in various ways: #pragma, <NoWarn> in the project file, and SuppressMessageAttribute. They should all work.

Awesome, I hoped that was the case but wanted to double check.

For the preview analyzer specifically, the idea would be that users generally allow this by <EnablePreviewFeatures> in the project file or by marking the assembly/type/method with [RequiresPreviewFeatures]. Suppressing is generally not recommended.

Understood, and it makes sense to me. I was thinking about rare cases of suppression, like small portions of code using a preview feature inside a try/catch+log block to try new things, with a fallback to non-preview features in case of problems. I mean, it's very specific, but I was wondering about that.

The document calls this out.

Yeah, I read that; it was the wording "has no way to prevent this" that made me feel it was just a byproduct of the design, and I wanted to double-check that there was also the intention to fully support it for 3rd parties.

I think that's fine. It just means that if a project's package graph uses different versions of your package, if you made a breaking change then there might not be a single version that works for everyone in the graph. But that's the same as with any breaking change. [...] That's also fine. It just means that there are multiple potential bands for breaking changes. [...] Not sure I understand. Why would that be?

So it seems to me I got this wrong, maybe? Let's make a practical example, otherwise it would be too convoluted for me to explain clearly.

I create a lib with only 1 TFM: net5.0. I use some preview features, so I add <EnablePreviewFeatures>True</EnablePreviewFeatures> in the csproj, release the package and everything's fine. Months later .NET 6 is released, and I notice some nice additional things to use, so I enable multi-targeting by adding the net6.0 TFM and, for example, use those new things for .NET 6 behind a conditional compilation flag. I also add a couple of small, "normal" features that do not require preview stuff or anything unusual. I would then like to rebuild and package everything, for both my .NET 5 and .NET 6 users.

But the design document says "In practice this means that a multi-targeted project will only be able to use experimental features in a single TFM, specifically the current one". So, if I got the meaning right, when building for .NET 5 I would not be able to compile, because preview features could only be used when compiling against .NET 6; so I could no longer compile with multi-targeting and would have to - I suppose - remove the usage of preview features when targeting .NET 5.

jodydonetti commented 3 years ago

While we are here, I'm curious: did this 2019 proposal inspire in any way the introduction of Preview Features support 🙂?

terrajobst commented 3 years ago

Unlocked per request from @jodydonetti

terrajobst commented 3 years ago

I create a lib with only 1 TFM: net5.0. I use some preview features, so I add <EnablePreviewFeatures>True</EnablePreviewFeatures> in the csproj, release the package and everything's fine. Months later .NET 6 is released

First, let me state that there shouldn't be any expectation that the binary for .NET 5 will just work on .NET 6 -- because it depends on preview features. We could have reacted to feedback and made changes to the feature before we shipped it in .NET 6. If anything, you should expect that it very likely won't work because we absolutely intend to make breaking changes in preview features.

and I notice some nice additional things to use, so I'll enable multi targeting by adding the net6.0 TFM and, for example, use those new things for .NET 6 with a conditional compilation flag. I also added a couple of small, "normal" features that do not require preview stuff or anything not normal. So then I would like rebuild and package everything, for both my .NET 5 and .NET 6 users.

Second, there shouldn't be an expectation that you can use the latest SDK (say for .NET 6) to build successfully for earlier versions of .NET (say .NET 5) when you're using preview features. Same argument as above: let's say the preview features included language support. The .NET 6 SDK only ships a single compiler that knows about the (now final) version of the language feature, and thus can no longer build code the way .NET 5's preview version would have needed.

But the design document says "In practice this means that a multi-targeted project will only be able to use experimental features in a single TFM, specifically the current one" .

Let's unpack this. Most people multi-target by building a single project file multiple times, using the same .NET SDK. You could, of course, use GitHub Actions/Azure pipelines and build your project using different SDKs, but that's extremely complex, so I'm going to ignore this here.

For preview features that require support in the SDK (e.g. language features) a given SDK will only support preview features for the current version of .NET. For example, the .NET 6 SDK will only ship a single compiler and that compiler only has support for .NET 6's language preview features. Otherwise, we would have to also ship the .NET 5 compiler or continue to support each version of a preview feature, neither of which we consider viable from a cost standpoint.

However, as far as the preview feature is concerned this thing is just an attribute and analyzer. There are preview features that do not require tooling support (e.g. a new API). For those, multi-targeting will work just fine. If the API was changed between, say, .NET 5 and .NET 6, you might get compilation errors but since you're using multi-targeting you can of course use #if to accommodate for the differences.
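A sketch of the multi-targeting + #if approach described here, assuming a csproj with <TargetFrameworks>net5.0;net6.0</TargetFrameworks> and a preview API whose shape changed between the two versions (all names hypothetical, reusing the quantizer example from this thread):

```csharp
public static class QuantizerSetup
{
    public static void Configure(XyzEngine engine)
    {
#if NET6_0_OR_GREATER
        // Use the revised .NET 6 shape of the preview API.
        engine.Quantizer = new FastXyzQuantizer();
#else
        // Accommodate the older .NET 5 shape here, or skip the
        // preview feature entirely for this TFM.
#endif
    }
}
```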

So, if I got the meaning right, it means that when building for .NET 5 I would not be able to compile, because it is possible to use preview features only when compiling against .NET 6, so I cannot compile with multi-target anymore and I would have to - I suppose - remove the usage of preview features when targeting .NET 5.

There is no such enforcement. The only reason you might not be able to build is when there was a breaking change in the tooling support for preview features.

Does this help?

terrajobst commented 3 years ago

While we are here, I'm curious: did this 2019 proposal inspire in any way the introduction of Preview Features support 🙂?

Not to my recollection, but I can't say for sure. As far as I remember it, my idea for the preview feature came after I prototyped the platform-compat analyzer, which included a custom analyzer for deprecated APIs (AFAIR this was around 2017).

That being said, I think we have an amazing community and many of us have similar ideas and we often inspire each other. It is possible that I saw this issue and simply forgot about it. The idea presented here is very similar to what we ended up designing, so whether or not we read it doesn't take anything away from your design. It's like in science: very often multiple parties come up with similar ideas because both end up being inspired in similar ways.

jodydonetti commented 3 years ago

First of all, thanks for the thorough explanation, I think I now got to the gist of it.

I'll try to articulate my thinking.

First, let me state that there shouldn't be any expectation that the binary for .NET 5 will just work on .NET 6 -- because it depends on preview features.

Absolutely: preview features + forward compatibility was not what I was thinking about; it was more like preview features + backward re-compilability (being able to compile for older targets).

We could have reacted to feedback and made changes to the feature before we shipped it in .NET 6. If anything, you should expect that it very likely won't work because we absolutely intend to make breaking changes in preview features.

Makes total sense.

Second, there shouldn't be an expectation that you can use the latest SDK (say for .NET 6) to build successfully for earlier versions of .NET (say .NET 5) when you're using preview features. [...] For preview features that require support in the SDK (e.g. language features) a given SDK will only support preview features for the current version of .NET. For example, the .NET 6 SDK will only ship a single compiler and that compiler only has support for .NET 6's language preview features.

And there it is, I think this is the big change from the current situation.

Let me explain.

Now if I'm using the .NET 5 SDK I can easily build packages for, e.g., .NET Core 3.1, or multi-target a single project for them both. With the introduction of the preview features concept, what is added to the table is that if - and only if - I've used a language-level preview feature, I can no longer do that (if I got this right).

From a lib-authoring perspective this means that, if I've used certain preview features in my lib targeting .NET 6 and I update to the .NET 7 SDK when it comes out, I will no longer be able even to build/pack a new minor version of my lib (maybe with a security fix) for .NET 6, because the SDK will physically not be able to do that.

And since the .NET 7 SDK would not be able to compile for .NET 6 if the source uses those certain preview features, this means I would not be able to do that even if my lib didn't target .NET 7 at all (so no multi-targeting here) but only .NET 6.

This is a new situation in which we can find ourselves.

Maybe the solution would be using multiple SDKs (.NET 6 SDK + .NET 7 SDK) on the same machine, like you seem to mention below.

Did I get this right?

Just to be clear: I'm not necessarily saying this is the wrong decision, a bad design or anything; I'm just trying to understand it correctly and to see if there are ways around it for those couple of new edge cases, because I can see lib authors falling into this: using some preview features, releasing a lib, later updating to a new .NET SDK, and then not being able even to compile the lib anymore.

But the design document says "In practice this means that a multi-targeted project will only be able to use experimental features in a single TFM, specifically the current one" .

Let's unpack this. Most people multi-target by building a single project file multiple times, using the same .NET SDK.

Exactly, but now if they are using certain preview features in .NET 6, they will no longer be able to do this when using the .NET 7 SDK (even if compiling with a .NET 6-only target), am I right? Again, I'm not saying this is the wrong decision, but it's a new situation and we should all be aware of it and see if and how it can be solved.

You could, of course, use GitHub Actions/Azure pipelines and build your project using different SDKs, but that's extremely complex, so I'm going to ignore this here.

Agree, even though it seems to me that, as of today, this would be the only way to solve this issue.

Otherwise, we would have to also ship the .NET 5 compiler or continue to support each version of a preview feature, neither of which we consider viable from a cost standpoint.

I understand this and honestly, FWIW, I agree with it. It's just that there's a blocker I don't see how to get past.

However, as far as the preview feature is concerned this thing is just an attribute and analyzer. There are preview features that do not require tooling support (e.g. a new API). For those, multi-targeting will work just fine. If the API was changed between, say, .NET 5 and .NET 6, you might get compilation errors but since you're using multi-targeting you can of course use #if to accommodate for the differences. [...] There is no such enforcement. The only reason you might not be able to build is when there was a breaking change in the tooling support for preview features.

And this is what I was really hoping to read 👏 If I got this right, it means that the "not being able to even compile" situation I described above would not happen for all usages of preview features, but only for the small portion that use language-level ones or similar.

And that, in turn, should mean (and correct me if I'm wrong) that if I create some "lightweight" preview features from scratch in my own libs (without using platform/language preview features themselves), I can expect to be able to compile freely, without restrictions?

Maybe the wording in the specs should change from "In practice this means that a multi-targeted project will only be able to use experimental features in a single TFM, specifically the current one" to something like "In practice this means that a multi-targeted project or a project targeting an older TFM will only be able to use certain experimental features in a single TFM, specifically the current one".

Would this make sense?

Does this help?

I hope I got this right but yeah, definitely helpful, thanks again!

jodydonetti commented 3 years ago

ping

danmoseley commented 3 years ago

@terrajobst