devigned / profile-playground


[Bala] Should we have a "Latest" Profile #2

Open devigned opened 7 years ago

devigned commented 7 years ago

I don’t think there should be anything named ‘Latest’ in the folder structure, since Latest is always a moving target. We should just stick to api-version for all the folder names IMO

devigned commented 7 years ago

I was up in the air on this, but I feel like we should have both a Latest and a Preview virtual profile. Latest would contain the latest stable API versions, and Preview would contain the latest versions including previews.
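As a minimal sketch of what I mean (illustration only; the resource types, version strings, and the preview heuristic below are made-up assumptions, not anything from the specs repo):

```python
# Derive "Latest" (newest stable) and "Preview" (newest overall) virtual profiles
# from a hypothetical map of resource type -> published api-versions.
published = {
    "Microsoft.Compute/virtualMachines": ["2016-03-30", "2017-03-30", "2017-12-01-preview"],
    "Microsoft.Storage/storageAccounts": ["2016-12-01", "2017-06-01"],
}

def is_preview(version: str) -> bool:
    return "preview" in version.lower()

# Preview: highest-dated api-version of each resource type, previews included.
preview_profile = {rt: max(versions) for rt, versions in published.items()}

# Latest: highest-dated *stable* api-version of each resource type.
latest_profile = {rt: max(v for v in versions if not is_preview(v))
                  for rt, versions in published.items()}

print(latest_profile)   # virtualMachines -> 2017-03-30, storageAccounts -> 2017-06-01
print(preview_profile)  # virtualMachines -> 2017-12-01-preview, storageAccounts -> 2017-06-01
```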

I think we should strongly push developers to select a given profile, since this will lead them to build on an API that will not break from one version to another.

Also, I think this needs to be paired with documentation stating we will not be honoring semantic versioning within the Latest and Preview namespaces.

I do see the danger here. Thoughts?

marstr commented 7 years ago

Any time you expose something explicitly called "latest", I believe you attract people who tie themselves to it and expect not to be broken. This goes against the major value proposition of profiles. Come to think of it, I'm not even sure what the upside of having a "latest" profile is. People becoming confused about which is the greatest date in a list seems unlikely. :)

Just my $0.02!

devigned commented 7 years ago

How do we advise people to use the latest functionality in Azure? Developers have grown accustomed to getting the latest bits for accessing Azure with the newest packages.

Do we tell them to curate their own profile?

johanste commented 7 years ago

In my opinion, we should not tell anybody outside of MS to create a thing called a Profile. I would strongly prefer that we own that term (it would make it more useful for documentation etc.)

johanste commented 7 years ago

Somewhat tangential to this, we should take documentation and samples into account when we think about what "latest" means.

Blog posts/samples tend to be written when a feature is straight out of the oven, and thus, I assume that there will be a bias towards them not being included in a profile yet. I also assume that there will be little appetite to retroactively go back and fix up all samples once the required API versions are included in a profile.

devigned commented 7 years ago

@johanste I like your thoughts on documentation. Locking into a profile in documentation would be awesome for keeping those docs relevant as time goes on.

johanste commented 7 years ago

In addition, for C#, the "latest" will likely only work for applications, not utility libraries. A utility library compiled against an earlier version of "latest" will not work in the context of an application with the latest "latest" since the types will be different.

lmazuel commented 7 years ago

We have had several discussions with @johanste on this. I agree that "latest" should just be the default for a customer who does not care about "profile", and should not be an explicit concept.

This is solved with versioning and documentation of packages. I have the following strategy for Python:

- Semantic versioning: major.minor.bug
- By "breaking", I'll mean: the REST API changed OR the Swagger changed enough to break (name, extensions, etc.)
- If a new ApiVersion is breaking, new "major"
- If a new ApiVersion is not breaking, increase "minor"
- Document public Azure users to fix "major" to get fixed behavior
- Document sovereign/Stack users to fix "minor" to get fixed behavior

Indeed, fixing "minor" will never change the ApiVersion under the hood, and fixing "major" will never change the surface API (but might change the ApiVersion).

This allows us to avoid talking explicitly about "profile" or "latest", while still providing strong documentation of consistent behavior.
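To make the bump rule concrete, here is a minimal sketch (my own illustration, not SDK tooling; the function and inputs are made up):

```python
def next_version(current: str, breaking: bool, new_api_version: bool) -> str:
    """Apply the rule above: breaking change -> bump major,
    new non-breaking ApiVersion -> bump minor, otherwise bump the bug-fix digit."""
    major, minor, bug = (int(part) for part in current.split("."))
    if breaking:
        return f"{major + 1}.0.0"
    if new_api_version:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{bug + 1}"

assert next_version("1.2.3", breaking=True,  new_api_version=True)  == "2.0.0"
assert next_version("1.2.3", breaking=False, new_api_version=True)  == "1.3.0"
assert next_version("1.2.3", breaking=False, new_api_version=False) == "1.2.4"
```

In pip terms (hypothetical package name), a user who pins major + minor (e.g. `azure-mgmt-foo~=1.2.0`) keeps the same ApiVersion, while a user who pins only the major (e.g. `azure-mgmt-foo~=1.2`) may pick up new, non-breaking ApiVersions.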

devigned commented 7 years ago

@lmazuel this implicitly pushes developers to use "Latest", which will inevitably break, causing frequent major version increases. This will cause:

Either way, I think there needs to be strong documentation around profiles. I do think developers should make a choice ahead of time about which APIs they would like to target.

The Latest profile should have documentation stating that it is ever changing and not covered by the semantic versioning of the library.

The thing that worries me most about latest as default, and I don't think it can be overstated, is what we express to developers by making it the default way of using the libraries. It's a really bad statement.

marstr commented 7 years ago

@lmazuel says: This is solved with versioning and documentation of packages. I have the following strategy for Python:

- Semantic versioning: major.minor.bug
- By "breaking", I'll mean: the REST API changed OR the Swagger changed enough to break (name, extensions, etc.)
- If a new ApiVersion is breaking, new "major"
- If a new ApiVersion is not breaking, increase "minor"
- Document public Azure users to fix "major" to get fixed behavior
- Document sovereign/Stack users to fix "minor" to get fixed behavior

For the Go SDK, I've tried to avoid this strategy because it gives control of major version bumps to the lowest common denominator of a lot of people. (At the moment we have an incredibly unsustainable monthly breaking change.) I believe that there are some real advantages to moving to a strategy where the supported API versions of the service are completely separated from the semantic version of the SDK. Adding a latest profile forces us to do one of two things:

  1. Return to the system where service and SDK versions are tightly coupled.
  2. Say that our versioning strategy applies to everything except the "Latest" profile.

If you want to tie yourself to the latest behavior, I would rather we make it easy to use services outside of the profile system. Consider the approach that I've prototyped for non-profile resource access in the Go SDK: Azure/azure-sdk-for-go:experimental/allAPIVersions. Doing so allows us to easily expose the "latest" version of services without ever breaking anybody.

edit: sorry, @devigned and I race conditioned a bit! His response wasn't present while I was typing mine up.

johanste commented 7 years ago

There are a couple of assumptions that we need to clarify (and which may differ between languages due to how they normally handle versioning of packages).

devigned commented 7 years ago

@lmazuel and @johanste how often do you think minor fixes are going to be applied backward from the latest version?

I'm not saying a developer will have a floating version dependency; rather, they will have a major version dependency and will expect to get updates. Given the way we publish packages today, we rarely go back to a previous major version to regenerate and provide fixes. More often than not, we only provide those fixes to the latest major. By doing this, we force folks to take on breaking changes to get the latest fixes. I don't think that is desirable.

I think there are benefits for users in adopting a profile, even for public Azure. I would expect the oldest profile to be 6 months old. With that assumption, I think it's safe to say users will have fairly new functionality. If they need the cutting edge, that's why latest or that specific api-version is available.

johanste commented 7 years ago

It seems like there is also a metaphysical discussion about what the "latest" profile is. In my mind, it is what I get if I don't specify anything else.

devigned commented 7 years ago

@johanste I see latest as the profile containing the highest-dated api-version for each resource type published in the Open API specs repo. This virtual profile is what I'm speaking of when I say latest.

The rest of the discussion is how we serve that view of Azure to a consumer.

lmazuel commented 7 years ago

My expectation #1 is "most users are targeting only public Azure". Like @johanste, I want this tutorial to be as simple as possible, with no question about profile or ApiVersion.

Also, as @johanste said, the point of the major version is to signal breaking changes. As a developer, you can use it as a reference for "should I update my package or not?"

Note that for a service that changes a lot, the package should be released as "unstable/preview" (in Python the version will look like 1.0.0a1 and be installed using pip install --pre azure-mgmt-myservice). This allows us to follow semantic versioning while at the same time signaling to people that the package is at risk.
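For example (using the `packaging` library that pip itself relies on; the package name above is hypothetical):

```python
from packaging.version import Version

# PEP 440 pre-release ordering: 1.0.0a1 sorts before the final 1.0.0,
# so pip will not pick it up unless the user opts in with --pre or pins it exactly.
assert Version("1.0.0a1") < Version("1.0.0")
assert Version("1.0.0a1").is_prerelease
```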

The question of "should we fix the old Swaggers when we change a naming convention on latest", and how often minor fixes would be applied to old versions, can be discussed :). I think we should always fix all Swaggers, to avoid forcing people to handle an "if/else" statement if they want to support several ApiVersions for some reason. @johanste tends to disagree, if I remember correctly :) Swagger should be language agnostic and as close as possible to the REST API, so any fixes should be backported.

johanste commented 7 years ago

@devigned, I would expect a developer to have either a major or a major + minor version lock, depending on whether they are targeting public Azure or something else, and whether they have specified an explicit profile or not.

And I would expect us to back-port fixes to earlier swagger documents if they were broken (if that is what you are referring to with minor fixes). I also really want to avoid breaking changes if at all possible - even across API versions.

@lmazuel, my disagreement is primarily around when we rename things today. If the rename was just to make things prettier (which many of our renames are), then backporting is probably not worth it. But, then again, I would likely have argued that the rename shouldn't have happened in the first place even for newer swaggers :) The team is working on putting mechanisms in place to allow for deprecation/soft renames, which should take care of most of my concerns :)

devigned commented 7 years ago

@marstr I'm having a tough time finding a good example of the Go usage. Can you provide a better link or more information on where to start?

devigned commented 7 years ago

@johanste said: I also really want to avoid breaking changes if at all possible - even across API versions.

Agreed, but I don't think we have much say in this when it comes to new API versions. We have stronger influence over the existing versions, and this is one of the key reasons behind saying that latest is unstable.

johanste commented 7 years ago

@devigned, many of the binary breaking changes we've seen over the last couple of weeks have been purely swagger model/code gen related. And that we have significantly more power over. And with the versioning scheme @lmazuel outlines, for public azure, if you lock to major + minor, you will not cross API versions.

devigned commented 7 years ago

@johanste that's the same level of stability as today (not much).

johanste commented 7 years ago

@devigned, I'll happily admit that I don't know the ins and outs of how we determine version bumps today for all languages. My knowledge is only anecdotal, from my conversations with language SDK owners. And in those discussions, API version has not been mentioned as a direct factor in the versioning of the package (and, in the case of the CLR, the assembly version).

I believe that we too often are conflating two different things in our discussions; breaking changes within a given API version and breaking changes resulting in moving to new API versions.

100% agree that we don't have control over the latter. To the extent that we are unable to mitigate this, it will be friction that our customers will eventually feel as they are moving forward. Hopefully, the value they see outweighs the cost of moving.

For the former, we need to (IMHO) do a much better job moving forward. This is irrespective of any recommendations we make with regard to using a "latest known API versions" virtual profile or a "last published" "real" profile. If we make breaking changes for a given API version, we'll be in trouble either way.

I don't think I would feel comfortable recommending that developers should use a set of REST/service APIs that is, on average, 3 months old.

I think we should drill into a couple of specific scenarios to see if and how we can mitigate breaking changes and what the end user experience would look like for one of our (insert Python/C#/javascript/go here) users.

devigned commented 7 years ago

@johanste reflecting on your perspective, I can see where you are coming from, and the expectation of limiting breaking changes across API versions seems completely reasonable. It's particularly reasonable when you think about it on a team-by-team basis (a single library). Any single library should not have many breaking changes, or could effectively be dissuaded from making them.

Another point to consider is what @marstr is saying. In Go, our major semantic version is the sum total of all breaking changes across all of the APIs, since the project is run as a mono-repository (all the services together).

I think there is value in providing less granular packages to simplify acquisition and to provide richer APIs on top of the REST clients (Fluent, etc.). Think about having a package like "Azure.Core" which contained the most used Azure core services. How often do you think the major version of that library would increment (the sum of all of the major increments of its sub-parts)? Do you think it would be too many?
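As a back-of-the-envelope illustration of that worry (the numbers are pure assumptions):

```python
# If a hypothetical "Azure.Core" meta-package aggregated 10 services and each
# service averaged 2 breaking changes per year, and those breaks rarely coincided,
# the meta-package's major version would bump on the order of 20 times per year.
services = 10
major_bumps_per_service_per_year = 2
print(services * major_bumps_per_service_per_year)  # ~20 aggregate major bumps per year
```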

If we don't think it will be too many, and we are deeply committed to ensuring the absolute minimum of breaking changes in the default namespace, I think I can get behind having latest as the default.

johanste commented 7 years ago

I am indeed deeply committed to ensuring the absolute minimum of breaking changes in the default namespace. And the more people on the team that I can recruit to this line of thinking, the better - 'cause I can't enforce that all by my lonesome self :)

Having a set of meta/composite packages that aggregate multiple packages together to simplify acquisition seems like a perfectly acceptable attempt at solving that specific problem. This also abstracts away the underlying structure of the low-level packages in a way appropriate for each language (which is likely different for a strongly typed language and a more dynamic one). This can be multi-tiered if we so choose (azure profile -> set of grouped/meta packages -> individual packages for a small set of RPs/RTs, basically mapping to swagger-file granularity at that level).
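A minimal sketch of what the middle tier could look like in Python packaging terms (every package name and version below is made up for illustration):

```python
# setup.py for a hypothetical "profile" meta-package: it contains no code of its
# own and simply pins a consistent set of individual RP packages, each of which
# maps roughly to swagger-file granularity.
from setuptools import setup

setup(
    name="azure-profile-2017-03",
    version="1.0.0",
    install_requires=[
        "azure-mgmt-resource==1.1.0",
        "azure-mgmt-compute==1.0.0",
        "azure-mgmt-storage==1.0.0",
        "azure-mgmt-network==1.0.0",
    ],
)
```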

If we choose to go in the other direction (assume breaking changes are prevalent all the time and that we have very little control over them), whatever we do on the client side is only going to be temporary relief for our customers - as soon as they need to use a new API version ('cause that is where the goodness they need is available), they will have to absorb all the breaking changes that were introduced.

markcowl commented 7 years ago

I think the latest package, or an equivalent, is a must. While many scenarios require knowing about api-versions, requiring api-versions in the namespaces is a significant downgrade in the user acquisition experience.

As far as forward compatibility goes, this is largely related to how the 'latest' profile is defined: we cannot use the api-version namespaces in the types and operations included in latest, which largely alleviates the compatibility problem.

Since we also seem to be talking about packaging, proliferation of packages is a large downgrade in the acquisition experience, and one that upper management will have a very negative opinion about. We cannot produce more packages to support multiple api-versions, and probably need to produce fewer packages.

Certainly, in order to have a useful package I can use in an app, I must have access to Resource Manager, and I am likely to want Storage, KeyVault, Compute, and Network as well. IMO we should ship a core management package with at least this functionality.

devigned commented 7 years ago

@markcowl I like your thoughts on having a "Core" set of Azure functionality distributed together. I've linked the related issue for Core packaging here if others would like to discuss that in that thread: https://github.com/devigned/profile-playground/issues/13