-1. Adding methods to a Java interface is a backwards-incompatible change and should cause a major version bump; I think OSGi's guidance is too liberal here. What about when you are both a provider and a consumer, for example when the interfaces you implement and the interfaces you merely consume live in the same package? You are then forced to declare MAJOR.MINOR compatibility only, not just MAJOR compatibility. We then lose the ability to indicate a new release that is backwards compatible but has new features, since MICRO/PATCH releases should be for bug fixes only.
You could argue that you could split the package up, to prevent a situation where you are both a provider and a consumer, but that ends up in a "single class per jar" situation.
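To make the first point concrete, here is a minimal Java sketch (all names hypothetical) of why adding an interface method is incompatible for providers:

```java
// API version 1.3 (hypothetical)
public interface Greeter {
    String greet(String name);
}

// A provider compiled against API 1.3
public class SimpleGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// If API 1.4 adds a method to the interface:
//
//     String greetFormally(String name);
//
// SimpleGreeter still loads without recompilation, but any caller of
// greetFormally() on a SimpleGreeter instance fails at runtime with
// AbstractMethodError.
```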
P provides and consumes API-1.3
C consumes API-1.3
So I can deploy: { P-1, C-1, API-1.3 }
So what happens when I change the API to 1.4? Let's say I add a new responsibility to a method and I let C-2 take advantage of it. If I deploy { P-1, C-2, API-1.3 } then C-2 will use P-1, and P-1 is oblivious of the change. So even if P-1 could be backward compatible as a consumer, it will break other consumers when it is used as a provider.
So consumers can be backward compatible on MAJOR, but I fail to see how a provider can be anything but compatible on MAJOR.MINOR. Any other party that relies on your implementation would break if it could assume it implemented API-1.4 ...
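A sketch of the scenario above (identifiers hypothetical): the 1.4 change is purely semantic, so nothing fails to link, it just misbehaves:

```java
// API 1.3: store(data) requires non-null data.
// API 1.4: same signature, but null now means "delete the entry"
// (the new responsibility) — nothing changes in the bytecode.
public interface Store {
    void store(byte[] data);
}

// P-1, written against API 1.3, never expected null
class ProviderV1 implements Store {
    public void store(byte[] data) {
        int length = data.length; // NullPointerException when C-2 calls
        // ... persist the data ...
    }
}

// C-2, written against API 1.4, exercises the new responsibility
class ConsumerV2 {
    void delete(Store store) {
        store.store(null); // valid per 1.4, breaks against P-1
    }
}
```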
Notice that OSGi versions packages. Imho JAR versions are not very interesting except for identity, because a JAR is rarely cohesive, nor should it have to be. JARs are usually an aggregation of implementations and specifications, and you should not be forced to aggregate their dependencies (download the internet ...). A JAR is a deployment unit, not a logical module.
The beauty of the current spec is its simplicity. I can understand it, therefore I can implement it.
Although OSGi versioning may be more robust and perhaps solves 100% of cases, it's no good if nobody bothers to use it because it's too difficult to understand. I'd rather have a versioning scheme that solves 80% of the problem and is used correctly by 80% of programmers, than a perfect scheme used correctly by 10% of programmers with the other 90% following ad hoc or incompatible schemes or failing to use the perfect scheme correctly.
For every complex problem there is a beautiful simple solution ...
that is just plain wrong ... This argument is imho an illustration of our industry's attitude of putting the simplicity of a solution ahead of its value. Though you can get away with this for a long time, you will find that more and more problems appear as you try to build higher layers. If you build a foundation for a house, a minuscule deviation from perfect flatness is not a big deal. When you build a skyscraper, such a deviation can cause big problems.
Versioning is hard and can only be done well if the computer takes over the majority of the work; that is, developers should only have to provide detailed information about the parts so that the computer can calculate the overview. For this, I think we need proper rules, preferably formal ones. The problem I am trying to raise is serious, because in your model a consumer can be bound to a non-compatible provider. This seems to be such a common pattern today that I have a hard time understanding why it should not be taken seriously.
Don't get me wrong. I am not claiming I have the best solution, or even that the problem I am raising is real, but at least I would have expected a debate on merit in this forum, and not to be told that we're looking where the light is better rather than where the problem is ...
@bnd (re: versions of providers and consumers)
Point taken about jar files. Yes, we should be talking about versioning packages.
About compatibility, I'm saying this: let MAJOR denote an incompatible change (including new abstract methods and new responsibilities for the provider). Let MINOR denote a new feature that is backwards compatible (i.e. no new interface/abstract methods and no new responsibilities). Let MICRO denote ONLY bug fixes. (I believe this is compatible with SemVer as it now stands.)
A provider can bind to [1.3,2.0) of the API because no changes in 1.4, 1.5, 1.6 etc. will be backwards incompatible for the provider. Consumers should probably also bind to [1.3,2.0), unless they have audited the incompatible changes between 1.3 and 2.0 and are certain that their usage is not affected, in which case they can bind to [1.3,3.0).
You may ask: if no new responsibilities and no new abstract methods are allowed in MINOR releases, then what is allowed? Well, not everything in a package is an interface. New methods in utility classes, for example, or new Strategy classes, etc.
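For instance, a sketch of a MINOR-safe addition (hypothetical utility class): since nobody implements or extends it, a new member breaks neither providers nor consumers:

```java
public final class VersionUtil {
    private VersionUtil() {} // static utility, cannot be implemented

    public static int major(String version) {
        return Integer.parseInt(version.split("\\.")[0]);
    }

    // Added in 1.4 — a MINOR bump: no existing provider or consumer
    // of the package has to change.
    public static int minor(String version) {
        return Integer.parseInt(version.split("\\.")[1]);
    }
}
```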
@jesselong: Two things:
1) I think the situation is more dire. If the API acts as its own provider (new methods in a class inside the API), that is a change the provider can be compatible with. However, would you feel comfortable if you signed a contract (the API) and someone then modified that contract while you remain liable? If the change is completely self-contained in the API, I would consider it a minor change because it would be backward compatible for both the provider and the consumer.
2) You seem to assume that the consumer/provider can inspect the artifacts for compatibility. Isn't the whole purpose of semantic versions to encode the evolution of an artifact so that compatibility does not require inspection/verification? Compatibility can then be decided by asserting a version range, in the assurance that the author will correctly mark up the changes in the version of the next artifact; i.e. a version describes the type of delta relative to the previous artifact.
When I point coworkers and coder friends at SemVer I get an "ooh, nice". There's value in brevity and simplicity when your success depends on adoption.
Except when half a year later they run into problems ...
Unfortunately, versioning is about communicating about the future, and that happens to be particularly hard. This semver paper is a tremendous step in the right direction towards making people understand that a version is not just some identifier; it is the syntax for compatibility rules for the evolution of an artifact. Imho these rules are inherently more complex than the current paper allows, and it would be a pity to build up steam around this model only for people to discover that for a significant class of areas (i.e. API-based programs) it just does not work in practice.
Maybe I am making things too complicated, but there are enough solutions that look simple at first sight and turn out to be simplistic. As Einstein said, things should be as simple as possible, but not simpler ...
The reasoning behind the OSGi provider/consumer binding specification is sound, and Semantic Versioning is compatible with it. That being said, all of this is outside the scope of the SemVer spec. SemVer is set up to make the kinds of compatibility specs that OSGi uses possible (indeed that is the whole point), but I don't think it's necessary to describe that mechanism here. The way I see it, SemVer describes how version numbers change. The way that software consumes that information is a separate issue.
I like the work done on semantic versioning; it is very much aligned with what we're doing in OSGi, and the syntax of the version is almost identical (our qualifier is a fourth field). However, in OSGi we have the concept of a consumer and a provider of an API, because we allow different implementations of the same API. For providers of an API we defined the minor part as the backward-compatibility boundary, and for consumers the major part. Using the mathematical notation we use in OSGi, assuming an API has version 2.3.4.qualifier: a consumer of that API can be bound to the range [2.3.4, 3.0.0), while a provider must be bound to the narrower range [2.3.4, 2.4.0).
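For what it's worth, a sketch of how this distinction can be expressed in source, assuming the org.osgi.annotation.versioning annotations and a hypothetical package name; tooling such as bnd derives the import ranges from the declared role:

```java
// --- package-info.java: the package, not the JAR, carries the version ---
@Version("2.3.4")
package com.example.api;

import org.osgi.annotation.versioning.Version;

// --- Store.java ---
package com.example.api;

import org.osgi.annotation.versioning.ProviderType;

// Implemented only by providers: implementers get the narrow import
// range [2.3,2.4), while plain callers (consumers) get [2.3,3).
@ProviderType
public interface Store {
    void store(byte[] data);
}
```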
I think it would be very valuable if this semantic version paper also explains the difference between consumers and providers of an API. Even better, if we could align somehow.