osgi / bugzilla-archive

Archive of OSGi Alliance Specification Bugzilla bugs. The Specification Bugzilla system was decommissioned with the move to GitHub. The issues in this repository are imported from the Specification Bugzilla system for archival purposes.

Define versions for Java EE packages #1448

Closed bjhargrave closed 12 years ago

bjhargrave commented 14 years ago

Original bug ID: BZ#1549
From: Alasdair Nottingham <not@uk.ibm.com>
Reported version: R4 V4.2

bjhargrave commented 14 years ago

Comment author: Alasdair Nottingham <not@uk.ibm.com>

In the JPA Service specification, the javax.persistence package for JPA 2 is specified to have version 1.1. A JPA 2.0 provider will run an entity written against JPA 1.0. Going with 2.0 as the package version would mean that the normal version range of [1.0,2.0) would not select a JPA 2 provider, so version 1.1 was chosen.
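For illustration (an assumed manifest fragment, not part of the original report): a client bundle compiled against JPA 1.0 would typically import the package with the conventional "up to the next major" range, which an export versioned 2.0 falls outside of, while a 1.1 export still matches:

```
Import-Package: javax.persistence;version="[1.0,2.0)"
```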

This was the decision made at the Southampton 2010 face to face and is a reasonable compromise.

This sets the precedent for the OSGi package version to differ from the JSR version. Unless we fully define the package versions for the other Java EE packages now, the OSGi community will not be sure what version should be associated with those other Java EE packages.

As a result, the community will either choose the JSR version or attempt to predict the version the Alliance will standardise on. This is likely to cause problems if the Alliance chooses to specify the package differently from the community.

It is also possible that the different parts of the community will choose different versions.

These issues could cause fragmentation and confusion around enterprise OSGi, which would hurt its uptake.

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

To define the problem a bit further, the solution must balance various competing requirements. On one hand, we would like to ensure that client bundles which specify version X in their imports will always be satisfied with package versions that are backward compatible with the version X specified. One solution to this problem is to impose an OSGi version scheme on specifications which the OSGi Alliance does not control. This requires an implicit translation of the OSGi version to the external specification version. It is my belief that this is a known failure mode and will only sow needless confusion in a community whose adoption of OSGi we are trying to ease.

One potential solution to this dilemma is to proactively make use of an existing OSGi capability: exporting a package multiple times, at different version numbers. This satisfies both requirements without inventing anything new.
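As a sketch of what that would look like for JPA (an assumed manifest fragment, not taken from any spec), the provider would export the same package once per version it is compatible with:

```
Export-Package: javax.persistence;version="1.1",
 javax.persistence;version="2.0"
```

A client importing [1.0,2.0) wires to the 1.1 export, a client importing [2.0,3.0) wires to the 2.0 export, and both resolve against the same provider.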

The assertion is that this solution is "untested" and the implications are "unknown". Therefore, what I suggest is that we take it upon ourselves to come up with a system of test cases which will exercise this solution, so that we can gather real data on whether there are unforeseen issues that would cause us further problems. Being proactive in this regard is preferable to simply relying on some nebulously defined threshold of "experience" with this already existing feature before we're comfortable using it.

While this solution is not as elegant as simply using OSGi versions, it does have the advantage of creating a framework for dealing with specifications which OSGi does not control. The solution allows these specifications to define their own rules regarding versioning rather than imposing OSGi's viewpoint. Further, the communities which use these specifications will not find themselves confused as to what version is what, nor will it require the OSGi to maintain a translation table of the specification version interactions. While not perfect, this solution does obey the principle of least surprise.

bjhargrave commented 14 years ago

Comment author: @tjwatson

What happens when some JSR package really does have a backwards breaking change?

JPA 2.0 is backward compatible with JPA 1.0

So the suggestion is that we export the following versions for JPA 2.0

version=1.1; version=2.0

What happens if JPA 3.0 is not backwards compatible? We would have to then version only at 3.0. The OSGi version cannot move to 2.0 because the JSR version already used 2.0. I guess this is appropriate for the JPA case because it will show breaking changes either way for folks that were using the OSGi version of 1.1. Personally I think this is dangerously hard to explain. I would prefer we pick a single version and my gut tells me to go with the version the JPA community thinks is the most obvious this round of the spec. (2.0, right?)

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

What happens if JPA 3.0 is not backwards compatible? We would have to then version only at 3.0. The OSGi version cannot move to 2.0 because the JSR version already used 2.0. I guess this is appropriate for the JPA case because it will show breaking changes either way for folks that were using the OSGi version of 1.1. Personally I think this is dangerously hard to explain. I would prefer we pick a single version and my gut tells me to go with the version the JPA community thinks is the most obvious this round of the spec. (2.0, right?)

Let's be clear we're not arguing about the JPA spec in particular, rather a particular strategy for dealing with specifications outside of the control of OSGi.

Thus, I'm not sure what problems the strategy of multiple exports causes. In the case where 3.0 breaks backward compatibility with versions 1.0, 1.1 and 2.0, all clients importing versions 1.0, 1.1 or 2.0 will never import 3.0. All clients importing versions <= 2.0 will still be satisfied using multiple exports for version 2.0. Clients requiring 3.0 will not see 1.0, 1.1 and 2.0.
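To illustrate that resolution behaviour with assumed manifests (mine, not from the comment): the backwards-compatible JPA 2.0 provider exports every version it supports,

```
Export-Package: javax.persistence;version="1.0",
 javax.persistence;version="1.1",
 javax.persistence;version="2.0"
```

while a hypothetical breaking 3.0 provider would carry only Export-Package: javax.persistence;version="3.0", which no import in the [1.0,2.0) family can match.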

Why is this confusing, given this is precisely the behavior specified by the external specifications, and what the client already expects? Further, no explanation is needed, as this is delegated to the external specifications - i.e. they are the ones who define the backward compatibility between their versions.

bjhargrave commented 14 years ago

Comment author: Michael Keith <michael.keith@oracle.com>

Given that the Enterprise spec is targeted at enterprise developers, and that they are typically already familiar with the EE technologies, the version numbers that are ubiquitously associated with the release impls are the ones that developers are going to look for. The suggestion that there is a publicly consumable "marketing" version that is different from the package versions is not practical, since the package versions are publicly consumed -- by the developers. Developers that program with [package] versions must be aware of and understand some well-defined and well-advertised semantic associated with the versions, else they can't do their job.

It is documented that the OSGi best practice is to use minor version increments for releases that maintain backward compatibility, etc., but I have still not seen it indicated anywhere in the spec that using a major increment for a heap of new functionality (while still remaining backwards compatible) is disallowed. In fact, the only argument that has surfaced around this is that some of the tools won't cover this case. The problem, of course, is that this is a fully supported case, so if tools don't cover it then that is a deficiency in the tools, not in the spec (or in a developer who chooses to follow such a practice).

I also think it is relevant that in most cases the packages of a specific subsystem or technology are interwoven and rely on each other. They are shipped and must be used as a bundle, so there is no use case for trying to reuse a package version across releases, even if there are no changes to the classes in that package. They will all be incremented together, with the same package version (just like the release version, coincidentally).

bjhargrave commented 14 years ago

Comment author: Glyn Normington <gnormington@vmware.com>

The assertion is that this solution is "untested" and the implications are "unknown".

SpringSource dm Server has been using this approach for the last 18 months with no complaint from users. Its system bundle package versions are shown below. javax.transaction(.xa) is an interesting example - exported with 3 distinct versions.

org.osgi.framework.system.packages = \
javax.accessibility,\
javax.activation,\
javax.activation;version="1.1.1",\
javax.activity,\
javax.annotation,\
javax.annotation;version="1.0.0",\
javax.annotation.processing,\
javax.crypto,\
javax.crypto.interfaces,\
javax.crypto.spec,\
javax.imageio,\
javax.imageio.event,\
javax.imageio.metadata,\
javax.imageio.plugins.bmp,\
javax.imageio.plugins.jpeg,\
javax.imageio.spi,\
javax.imageio.stream,\
javax.jws,\
javax.jws;version="2.0",\
javax.jws.soap,\
javax.jws.soap;version="2.0",\
javax.lang.model,\
javax.lang.model.element,\
javax.lang.model.type,\
javax.lang.model.util,\
javax.management,\
javax.management.loading,\
javax.management.modelmbean,\
javax.management.monitor,\
javax.management.openmbean,\
javax.management.relation,\
javax.management.remote,\
javax.management.remote.rmi,\
javax.management.timer,\
javax.naming,\
javax.naming.directory,\
javax.naming.event,\
javax.naming.ldap,\
javax.naming.spi,\
javax.net,\
javax.net.ssl,\
javax.print,\
javax.print.attribute,\
javax.print.attribute.standard,\
javax.print.event,\
javax.rmi,\
javax.rmi.CORBA,\
javax.rmi.ssl,\
javax.script,\
javax.script;version="1.1",\
javax.security.auth,\
javax.security.auth.callback,\
javax.security.auth.kerberos,\
javax.security.auth.login,\
javax.security.auth.spi,\
javax.security.auth.x500,\
javax.security.cert,\
javax.security.sasl,\
javax.sound.midi,\
javax.sound.midi.spi,\
javax.sound.sampled,\
javax.sound.sampled.spi,\
javax.sql,\
javax.sql.rowset,\
javax.sql.rowset.serial,\
javax.sql.rowset.spi,\
javax.swing,\
javax.swing.border,\
javax.swing.colorchooser,\
javax.swing.event,\
javax.swing.filechooser,\
javax.swing.plaf,\
javax.swing.plaf.basic,\
javax.swing.plaf.metal,\
javax.swing.plaf.multi,\
javax.swing.plaf.synth,\
javax.swing.table,\
javax.swing.text,\
javax.swing.text.html,\
javax.swing.text.html.parser,\
javax.swing.text.rtf,\
javax.swing.tree,\
javax.swing.undo,\
javax.tools,\
javax.transaction,\
javax.transaction;version="1.0.1",\
javax.transaction;version="1.1.0",\
javax.transaction.xa,\
javax.transaction.xa;version="1.0.1",\
javax.transaction.xa;version="1.1.0",\
javax.xml,\
javax.xml;version="1.0.1",\
javax.xml.bind,\
javax.xml.bind;version="2.0",\
javax.xml.bind.annotation,\
javax.xml.bind.annotation;version="2.0",\
javax.xml.bind.annotation.adapters,\
javax.xml.bind.annotation.adapters;version="2.0",\
javax.xml.bind.attachment,\
javax.xml.bind.attachment;version="2.0",\
javax.xml.bind.helpers,\
javax.xml.bind.helpers;version="2.0",\
javax.xml.bind.util,\
javax.xml.bind.util;version="2.0",\
javax.xml.crypto,\
javax.xml.crypto;version="1.0",\
javax.xml.crypto.dom,\
javax.xml.crypto.dom;version="1.0",\
javax.xml.crypto.dsig,\
javax.xml.crypto.dsig;version="1.0",\
javax.xml.crypto.dsig.dom,\
javax.xml.crypto.dsig.dom;version="1.0",\
javax.xml.crypto.dsig.keyinfo,\
javax.xml.crypto.dsig.keyinfo;version="1.0",\
javax.xml.crypto.dsig.spec,\
javax.xml.crypto.dsig.spec;version="1.0",\
javax.xml.datatype,\
javax.xml.namespace,\
javax.xml.parsers,\
javax.xml.soap,\
javax.xml.soap;version="1.3.0",\
javax.xml.stream,\
javax.xml.stream;version="1.0.1",\
javax.xml.stream.events,\
javax.xml.stream.events;version="1.0.1",\
javax.xml.stream.util,\
javax.xml.stream.util;version="1.0.1",\
javax.xml.transform,\
javax.xml.transform.dom,\
javax.xml.transform.sax,\
javax.xml.transform.stax,\
javax.xml.transform.stream,\
javax.xml.validation,\
javax.xml.ws,\
javax.xml.ws;version="2.1.1",\
javax.xml.ws.handler,\
javax.xml.ws.handler;version="2.1.1",\
javax.xml.ws.handler.soap,\
javax.xml.ws.handler.soap;version="2.1.1",\
javax.xml.ws.http,\
javax.xml.ws.http;version="2.1.1",\
javax.xml.ws.soap,\
javax.xml.ws.soap;version="2.1.1",\
javax.xml.ws.spi,\
javax.xml.ws.spi;version="2.1.1",\
javax.xml.xpath,\
org.ietf.jgss,\
org.omg.CORBA,\
org.omg.CORBA_2_3,\
org.omg.CORBA_2_3.portable,\
org.omg.CORBA.DynAnyPackage,\
org.omg.CORBA.ORBPackage,\
org.omg.CORBA.portable,\
org.omg.CORBA.TypeCodePackage,\
org.omg.CosNaming,\
org.omg.CosNaming.NamingContextExtPackage,\
org.omg.CosNaming.NamingContextPackage,\
org.omg.Dynamic,\
org.omg.DynamicAny,\
org.omg.DynamicAny.DynAnyFactoryPackage,\
org.omg.DynamicAny.DynAnyPackage,\
org.omg.IOP,\
org.omg.IOP.CodecFactoryPackage,\
org.omg.IOP.CodecPackage,\
org.omg.Messaging,\
org.omg.PortableInterceptor,\
org.omg.PortableInterceptor.ORBInitInfoPackage,\
org.omg.PortableServer,\
org.omg.PortableServer.CurrentPackage,\
org.omg.PortableServer.POAManagerPackage,\
org.omg.PortableServer.POAPackage,\
org.omg.PortableServer.portable,\
org.omg.PortableServer.ServantLocatorPackage,\
org.omg.SendingContext,\
org.omg.stub.java.rmi,\
org.w3c.dom,\
org.w3c.dom.bootstrap,\
org.w3c.dom.css,\
org.w3c.dom.events,\
org.w3c.dom.html,\
org.w3c.dom.ls,\
org.w3c.dom.ranges,\
org.w3c.dom.stylesheets,\
org.w3c.dom.traversal,\
org.w3c.dom.views,\
org.xml.sax,\
org.xml.sax.ext,\
org.xml.sax.helpers

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

Thus, I'm not sure what problems the strategy of multiple exports causes. In the case where 3.0 breaks backward compatibility with versions 1.0, 1.1 and 2.0, all clients importing versions 1.0, 1.1 or 2.0 will never import 3.0. All clients importing versions <= 2.0 will still be satisfied using multiple exports for version 2.0. Clients requiring 3.0 will not see 1.0, 1.1 and 2.0.

I think this works now but could get us into a world of pain in future releases. If JPA 3.0 has a breaking API change, what is the OSGi version then? It can't be 2.0, even though that is what people following the OSGi conventions would expect, as that number is already taken. It could be 3.0 to match the JPA release, but now consider the nightmare scenario that JPA 3.1 is also backwards incompatible. Now the OSGi version must be 4.0 to match the JPA 3.1 release. Once we're in this scenario there can never be a coherent JPA->OSGi match, as JPA 4.0 would have to be OSGi 5.0.
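To make the leapfrogging concrete, here is the projected mapping described above (a hypothetical timeline, not from any spec):

```
JPA release          OSGi package version
1.0                  1.0
2.0 (compatible)     1.1  (the compromise chosen at the face to face)
3.0 (breaking)       3.0  (2.0 is unusable: already taken as a JSR number)
3.1 (breaking)       4.0  (OSGi convention requires a major bump)
4.0                  5.0  (the two numbering schemes never realign)
```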

This is a fairly convoluted example, but given that the OSGi Alliance is not in control of what version numbers an external spec uses, we cannot mandate that they don't do daft things which break our initial assumptions - and I think this shows the weakness in this approach.

Also, this leapfrogging of versions means a client cannot be certain that a given version range is going to imply a sane API contract. We only push forward the moment of pain to when we have to decouple ourselves from spec=API.

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

The following are really just a series of notes, but I hope they're clear enough to contribute to the discussion.

Guidelines

I keep thinking of Pirates of the Caribbean here - "the code is more of what you call guidelines than actual rules".

Ease of use

There has been discussion regarding ease of use, which I completely agree with - I can already hear the future howls of pain if we announce that 2.0=1.1. But this is much like fear of the barbarian horde; in the end we need to make sure that whatever decision we make is in the best interest of Rome :)

Module/Spec != API

An API is a construct that can be realized in a software environment, whereas a specification is simply a grouping of instructions for how to build an API, and a module is only a grouping of APIs.

Just as it is possible to move packages between modules, making the version of the API distinct from the version of the module, instructions from a specification can be moved between specification documents and still refer to the same API. This has a connection with the issue Tim brought up at the f2f about the specification chapter numbering. The numbering is global across specification documents so that each chapter has a unique identity distinct from the document that contains it.

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

Also, some notes on tooling. I can only really comment on Sigil, but there we have a couple of tools for dealing with versioning problems that I think are relevant.

Import Policies

Package import policies in Sigil are inherited by projects. At the top level of the Sigil build there is a file, sigil-defaults.properties [1], that specifies (among other things) the version import policy that should be used for different packages in sub-projects. Leaf projects [2] then no longer need to specify the import ranges unless they explicitly need different rules.

This makes the problem of choosing a version to import a top-level administrative choice rather than one that every developer needs to worry about. So in Sigil, at least, you only need to say [1.1,2.0) or [1.0,3.0) once. This idea is shamelessly borrowed from Maven POMs, but using package dependencies rather than module dependencies.
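Purely as a hypothetical illustration of the idea (the property names below are invented; the actual syntax is in the linked sigil-defaults.properties):

```
# Invented, illustrative syntax: central defaults inherited by leaf
# projects, so each range is stated once per package namespace.
package.import.javax.persistence: [1.1,2.0)
package.import.javax.servlet:     [2.5,3.0)
```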

Libraries

This idea is borrowed from Spring's Import-Bundle concept, which allows a developer to import a number of packages as a group. However, in Sigil (and this is only partially realised at the moment) a library is a development-time construct rather than a runtime construct. Referencing a library results in a number of packages being imported into the bundle - but it is wired in at build time rather than at runtime.

[1] http://svn.apache.org/viewvc/felix/trunk/sigil/sigil-defaults.properties?view=markup

[2] http://svn.apache.org/viewvc/felix/trunk/sigil/common/core/sigil.properties?view=markup

bjhargrave commented 14 years ago

Comment author: @pkriens

I think we need to take into account there is a difference in compatibility rules for implementers and clients of a package. For this reason, in bnd, I calculate how a bundle uses a package. If it implements interfaces in this package it uses the implementation policy, otherwise it uses the client policy.

The implementation policy is [<major>.<minor>,<major>.<minor>], e.g. [1.1,1.1]. The client policy is [<major>.<minor>,<major+1>), e.g. [1.1,2).
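Rendered as import clauses in a bnd-style properties file (an illustrative sketch; bnd files permit # comments), the two policies applied to a package compiled against version 1.1 would yield:

```
# Implementation policy: an implementer is broken by any API change,
# so it locks to the exact version it was compiled against.
Import-Package: javax.persistence;version="[1.1,1.1]"

# Client policy: a client tolerates anything up to the next major.
Import-Package: javax.persistence;version="[1.1,2)"
```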

I am afraid the difference between implementation and client policies makes multiple exports not work as suggested, unless an additional discriminator is used on the export clause so that clients and implementations can use the real version and the mimicked version differently.

One more remark: we are entering a new world. Though JEE uses "versions" in many places, it has never specified package versions nor provided any guidelines for how things are supposed to be versioned. I do think OSGi can actually specify them without much peril, because it is very unlikely that those packages are already versioned anyway.

Maybe we should start at version 100 in those cases to make it clear these packages are not aligned with their enclosing specification.

bjhargrave commented 14 years ago

Comment author: @tjwatson

I think we need to take into account there is a difference in compatibility rules for implementers and clients of a package. For this reason, in bnd, I calculate how a bundle uses a package. If it implements interfaces in this package it uses the implementation policy, otherwise it uses the client policy.

The implementation policy is [<major>.<minor>,<major>.<minor>], e.g. [1.1,1.1]. The client policy is [<major>.<minor>,<major+1>), e.g. [1.1,2).

Peter, do you do this for all packages? This would cause some pretty brittle version ranges for any bundle that implements BundleActivator or a *Listener, right?

To do this correctly, there need to be additional annotations on the interfaces indicating whether they are to be implemented by clients or not, and we need to indicate when these types of interfaces are changed in breaking ways.
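Such annotations exist in bnd (as Peter confirms in a later comment) and were eventually standardized by OSGi as @ProviderType and @ConsumerType in org.osgi.annotation.versioning. A minimal sketch using the standardized forms (the two interfaces are invented examples):

```java
import org.osgi.annotation.versioning.ConsumerType;
import org.osgi.annotation.versioning.ProviderType;

// Listener-style interface implemented by many consumers: adding a
// method here breaks every implementor, so changes to it must be
// treated as breaking.
@ConsumerType
public interface EntityListener {
    void entityChanged(String entityName);
}

// Implemented only by providers: adding a method breaks only the few
// provider implementations, so ordinary clients may keep the wide
// import range.
@ProviderType
interface PersistenceUnitView {
    String persistenceUnitName();
}
```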

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

I think this works now but could get us into a world of pain in future releases. If JPA 3.0 has a breaking API change, what is the OSGi version then? It can't be 2.0, even though that is what people following the OSGi conventions would expect, as that number is already taken. It could be 3.0 to match the JPA release, but now consider the nightmare scenario that JPA 3.1 is also backwards incompatible. Now the OSGi version must be 4.0 to match the JPA 3.1 release. Once we're in this scenario there can never be a coherent JPA->OSGi match, as JPA 4.0 would have to be OSGi 5.0.

This is a fairly convoluted example, but given that the OSGi Alliance is not in control of what version numbers an external spec uses, we cannot mandate that they don't do daft things which break our initial assumptions - and I think this shows the weakness in this approach.

Also, this leapfrogging of versions means a client cannot be certain that a given version range is going to imply a sane API contract. We only push forward the moment of pain to when we have to decouple ourselves from spec=API.

Again, I'm not sure I follow. The difficulty seems to arise when one assumes that external spec versions must be treated just like the OSGi version convention we're discussing. This is obviously not the case, as the external specifications make their own rules for backward compatibility between versions.

I do not see any confusion whatsoever with the multiple export, other than the fact that it does not follow what has become OSGi convention. Imposing OSGi conventions on another specification seems a bit of hubris, as well as being simultaneously unlikely to work and likely to confuse the very user base we're trying to win over.

Further, I don't think we're talking about an explosion of versions which will make the multiple export of versions a metadata nightmare. There aren't more than a handful of these versions in any of the specifications we might interface with in our wildest fantasies.

Again, anything we do is going to be a compromise. Consequently, we have to choose where we're optimizing and what we consider to be worth chucking under a bus. As tool builders, I think we err too much on the side of keeping existing tools happy and not enough on keeping things simple from a user's perspective.

The simple fact is that if we don't use the existing versions of these specifications, then the people using them will be confused. These specifications are not under our control and even the JCP doesn't screw with versions of external specifications that are integrated into their specifications. For exactly the same reason - i.e. they don't want to sow confusion.

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

I think we need to take into account there is a difference in compatibility rules for implementers and clients of a package. For this reason, in bnd, I calculate how a bundle uses a package. If it implements interfaces in this package it uses the implementation policy, otherwise it uses the client policy.

The implementation policy is [.,<major,minor], e.g. [1.1,1.1] The client policy is [.,<major+1>), e.g. [1.1,2)

I am afraid the difference between implementation and client policies make multiple exports not work as suggested unless an additional discriminator is used on the export clause so clients and implementations can use the real version and the mimicked version differently.

Again, I'm not sure that what we want to do is conserve features of any specific tool in this solution, as these are not part of the specification.

One more remark: we are entering a new world. Though JEE uses "versions" in many places, it has never specified package versions nor provided any guidelines for how things are supposed to be versioned. I do think OSGi can actually specify them without much peril, because it is very unlikely that those packages are already versioned anyway.

Maybe we should start at version 100 in those cases to make it clear these packages are not aligned with their enclosing specification.

Again, I don't think it's OSGi's job to fix things that aren't its own to fix. The multiple export solution provides a simple way to fix the problem with no inventions or confusion. It's a way to coexist with the reality that we don't control everything.

I don't see that as a bad thing.

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

Again, I'm not sure I follow. The difficulty seems to arise when one assumes that external spec versions must be treated just like the OSGi version convention we're discussing. This is obviously not the case, as the external specifications make their own rules for backward compatibility between versions.

I certainly agree that we can't mandate to a third party how to version their product or specification. However, I do think that when a third party integrates with OSGi, the API is something different from the unit of deployment. The key issue is whether the third party chooses to conform to the recommended API compatibility policy.

My concern with 1.1 vs 2.0 is that we trade one form of complexity for another - instead of having to know the API version, we now need to know the API compatibility policy. Whichever route we choose, one will be implicit and the other will need to be documented on a wiki somewhere. My concern with the 1.1 & 2.0 solution is that I think we are taking a punt and hoping the external spec doesn't tie us in knots later.

I do not see any confusion whatsoever with the multiple export, other than the fact that it does not follow what has become OSGi convention. Imposing OSGi conventions on another specification seems a bit of hubris, as well as being simultaneously unlikely to work and likely to confuse the very user base we're trying to win over.

I'm not sure it is hubris, but I can see why it appears so, and I can certainly appreciate that it will cause confusion when bringing this number of new users into this problem space.

Further, I don't think we're talking about an explosion of versions which will make the multiple export of versions a metadata nightmare.

The previous thought experiment is really showing that multiple versions are not a general solution... (thought continued in the next reply)

There aren't more than a handful of these versions in any of the specifications we might interface with in our wildest fantasies.

...though in practice, as you say, these nightmare scenarios are pretty unlikely. So we may take a punt on it and cross our fingers, hoping that the various specs on which we depend make future decisions with regard to version numbers that don't cause the OSGi versions to become a tangled mess.

Again, anything we do is going to be a compromise. Consequently, we have to choose where we're optimizing and what we consider to be worth chucking under a bus.

Agreed, I think the trade-off is between easy access and easy maintenance. API=SPEC versioning is easy for non-OSGi users when they start, but a pain to evolve over time.

As tool builders, I think we err too much on the side of keeping existing tools happy and not enough on keeping things simple from a user's perspective.

Hopefully, in my comment on Sigil (#8), I've shown that we are at least thinking about some of these problems and can deal with different versioning schemes. So for me, at least, this is more about "doing the right thing" than about keeping the tooling happy.

The simple fact is that if we don't use the existing versions of these specifications, then the people using them will be confused.

That is my concern too, hence the suggestion to add an annotation to give new users a clue, but the concern here seems to be that even this may not be enough.

I think we should make all attempts to follow the OSGi versioning scheme, but if this is one leap too far then we need to be clear, hold up our hands, and explain why we didn't follow our own recommendations - which is the obvious next question.

These specifications are not under our control and even the JCP doesn't screw with versions of external specifications that are integrated into their specifications. For exactly the same reason - i.e. they don't want to sow confusion.

Right, though it's one thing to reference another spec in a spec document but another thing to compile code against an API.

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

I certainly agree that we can't mandate to a third party how to version their product or specification. However, I do think that when a third party integrates with OSGi, the API is something different from the unit of deployment. The key issue is whether the third party chooses to conform to the recommended API compatibility policy.

But it doesn't, by definition. These versions predate any integration, and the OSGi committees producing these integrations have no authority to make those specifications conform to a recommendation (note: not a standard) of the OSGi. Further, I seriously doubt that is likely to happen any time in the future.

My concern with 1.1 vs 2.0 is that we trade one form of complexity for another - instead of having to know the API version, we now need to know the API compatibility policy.

This compatibility policy is explicitly defined by the external specification. Further, it is a well-known policy that has - in the specific case of Java EE specifications - been in use for about a decade and is completely understood by those making use of the specifications.

Whichever route we choose, one will be implicit and the other will need to be documented on a wiki somewhere.

In the case of my suggested solution - and the solution that SpringSource has implemented - there is a single note referring to the external specification as the authority on both version and compatibility. That becomes a blanket statement and not something we need to keep repeating, nor does it take a lot of thought and head scratching to figure out what the correct version to import actually is.

My concern with the 1.1 & 2.0 solution is that I think we are taking a punt and hoping the external spec doesn't tie us in knots later.

Again, I'm still wondering where these knots will show themselves. As far as I can tell, there are no knots; everything is quite crisply defined and trivially maintainable. No confusion exists on the import side.

There aren't more than a handful of these versions in any of the specifications we might interface with in our wildest fantasies.

...though in practice, as you say, these nightmare scenarios are pretty unlikely. So we may take a punt on it and cross our fingers, hoping that the various specs on which we depend make future decisions with regard to version numbers that don't cause the OSGi versions to become a tangled mess.

Java EE has been around for a decade. Change is only getting harder, not easier. Rampant change that would tie us in knots is strongly selected against by the consumers of the technology. Thus, I don't see this scenario as even likely.

Agreed, I think the trade-off is between easy access and easy maintenance. API=SPEC versioning is easy for non-OSGi users when they start, but a pain to evolve over time.

But Java EE has been evolving for a decade. We're not talking about an academic thought experiment here. Yes, there are problems - I'm not saying everything is roses. However, there isn't chaos and widespread misunderstanding as to how these versions work and what versions their code should use.

The simple fact is that if we don't use the existing versions of these specifications, then the people using them will be confused.

That is my concern too, hence the suggestion to add an annotation to give new users a clue, but the concern here seems to be that even this may not be enough.

I think we should make all attempts to follow the OSGi versioning scheme, but if this is one leap too far then we need to be clear, hold up our hands, and explain why we didn't follow our own recommendations - which is the obvious next question.

This question seems simply and completely answered by stating that this is our compatibility model for integrating with external specifications that we don't control.

These specifications are not under our control and even the JCP doesn't screw with versions of external specifications that are integrated into their specifications. For exactly the same reason - i.e. they don't want to sow confusion.

Right, though it's one thing to reference another spec in a spec document but another thing to compile code against an API.

Granted. But again, we have plenty of experience actually doing this out in the wild. Java EE isn't exactly an unpopular specification, and it appears to run most of the actual business logic out there - in the very world that we would like to pull on board.

bjhargrave commented 14 years ago

Comment author: @pkriens

Tom: bnd has annotations to indicate that certain interfaces are intended to be implemented by clients and are therefore more stable. However, this is just a subtlety. The key issue is that implementers (however they're defined) and clients have different requirements.

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

This compatibility policy is explicitly defined by the external specification. Further, it is a well-known policy that has - in the specific case of Java EE specifications - been in use for about a decade and is completely understood by those making use of the specifications.

I understand your point, and it's probably fine for those who've had their heads buried in this all this time. But for new users coming into the Java world who might want to use JEE with OSGi, this requires a lot of reverse engineering to find the various import ranges that specify compatibility. I think the OSGi Alliance needs either to state the versions that the various packages export or to state the version compatibility policies that those packages use.

But Java EE has been evolving for a decade. We're not talking about an academic thought experiment here.

Agreed, this does need to be a real-world solution. I wonder if we need to back away from the actual method of encoding and return to the original point of this issue: that is, to come up with a list of the JEE packages as they stand today and map out the version compatibility policies for them. This information is valid no matter how we choose to encode it. Once we've mapped out the space, we should then apply a common rule to turn all this data into either version numbers or version import ranges.

As a suggestion (though feel free to shoot this down) this could be done using a neutral encoding format:

package.name: 1.0,2.0;3.0

Where ','s represent client backwards compatible version boundaries and ';'s represent client non-backwards-compatible version changes. This data can then be converted to whatever OSGi encoding we choose to use in the end.

The common rule should ideally turn this data into an OSGi encoding that makes it easiest for users to make rational decisions about the SPEC/API policy they're getting for a given import.
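A hypothetical sketch of such a common rule (class and method names invented), deriving client import ranges from the neutral encoding:

```java
import java.util.ArrayList;
import java.util.List;

// Expands the neutral encoding proposed above: ',' separates client
// backwards compatible versions and ';' marks a compatibility break,
// so "1.0,2.0;3.0" has two compatible groups: {1.0, 2.0} and {3.0}.
public class NeutralEncoding {

    // One client range per compatible group: from the group's first
    // version up to (but excluding) the first version of the next
    // group. The last group has no known break yet, so it is left
    // open-ended (in OSGi, a bare version on Import-Package means
    // "at least this version").
    static List<String> clientRanges(String encoded) {
        String[] groups = encoded.split(";");
        List<String> ranges = new ArrayList<>();
        for (int i = 0; i < groups.length; i++) {
            String floor = groups[i].split(",")[0].trim();
            if (i + 1 < groups.length) {
                String ceiling = groups[i + 1].split(",")[0].trim();
                ranges.add("[" + floor + "," + ceiling + ")");
            } else {
                ranges.add(floor);
            }
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Prints: [[1.0,3.0), 3.0]
        System.out.println(clientRanges("1.0,2.0;3.0"));
    }
}
```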

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

I understand your point, and it's probably fine for those who've had their heads buried in this all this time. But for new users coming into the Java world who might want to use JEE with OSGi, this requires a lot of reverse engineering to find the various import ranges that specify compatibility. I think the OSGi Alliance needs either to state the versions that the various packages export or to state the version compatibility policies that those packages use.

Perhaps in theory, but in practice I don't see the issue. I would ask that those making these complexity arguments actually do just a minor bit of digging to make the case with data rather than theory. Perhaps there are some specifications where this does not hold, but I believe all, or at least the vast majority, of Java EE specifications are backwards compatible. Likewise with the JRE. So while I understand that one has the potential to create a vast space of complexity, in actual practice, over the last decade and throughout a huge industry of developers, this has simply not been true.

Also, I don't think we should be optimizing for the incredibly rare case where someone is new to Java and is simultaneously trying to learn EE and OSGi. The case to optimize for is the hundreds of thousands of developers who know Java EE and haven't a clue about OSGi. And I won't even mention the thousands of books out there explaining precisely the issues you're bringing up, in the context of Java EE.

Agreed, this does need to be a real-world solution. I wonder if we need to back away from the actual method of encoding and return to the original point of this issue: that is, to come up with a list of the JEE packages as they stand today and map out the version compatibility policies for them. This information is valid no matter how we choose to encode it. Once we've mapped out the space, we should then apply a common rule to turn all this data into either version numbers or version import ranges.

As a suggestion (though feel free to shoot this down) this could be done using a neutral encoding format:

package.name: 1.0,2.0;3.0

Where ','s represent client backwards compatible version boundaries and ';'s represent client non-backwards-compatible version changes. This data can then be converted to whatever OSGi encoding we choose to use in the end.

The common rule should ideally turn this data into an OSGi encoding that makes it easiest for users to make rational decisions about the SPEC/API policy they're getting for a given import.

But this mapping has already been done. It's called the Java EE platform spec, which details the versions of all the component API specifications. Further, it details backward compatibility as well.

I'm just wondering what point we're driving at. If we use the accepted versions for Java EE, then we literally have nothing to do, as the compatibility versions and ranges are well known and documented. Further, given that most, if not every, Java EE API is backwards compatible, it would seem that we have the ideal world where the version the bundle uses at compile time is the only version that needs to be specified, even if new versions come about and are the versions present in the runtime, since they are exported as versions compatible with those required by these clients.

That is, there are no version ranges required to be used by the client.

Which means, I believe, it's butt simple and incredibly easy to explain.
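One possible manifest rendering of the scheme described here (assumed fragments; provider first, then a client): the provider exports every version it is compatible with, and the client states only the version it compiled against, which in OSGi semantics is a minimum, so no upper bound needs to be chosen:

```
Export-Package: javax.persistence;version="1.0",
 javax.persistence;version="1.1",
 javax.persistence;version="2.0"

Import-Package: javax.persistence;version="1.0"
```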

bjhargrave commented 14 years ago

Comment author: david.savage@paremus.com

I understand your point, and it's probably fine for those who've had their heads buried in this all this time. But for new users coming into the Java world who might want to use JEE with OSGi, this requires a lot of reverse engineering to find the various import ranges that specify compatibility. I think the OSGi Alliance needs either to state the versions that the various packages export or to state the version compatibility policies that those packages use.

Perhaps in theory, but in practice I don't see the issue. I would ask that those making these complexity arguments actually do just a minor bit of digging to make the case with data rather than theory.

Agreed, this is really what I am looking for - data rather than theory. I just don't have the data, but I suspect that between us, with all the experts here, it shouldn't be too hard to make the statements, even if it's something as trivial as:

javax.persistence.*: 1.0, 2.0
javax.servlet.*: 2.5, 2.6, blah, etc.

using wildcards; I'm definitely not suggesting we list every package in the Java EE spec.

The common rule should ideally turn this data into an OSGi encoding that makes it easiest for users to make rational decisions about the SPEC/API policy they're getting for a given import.

But this mapping has already been done. It's called the Java EE platform spec, which details the versions of all the component API specifications. Further, it details backward compatibility as well.

Right, but one of the big benefits of OSGi is that it is self-describing, and one of the reasons we're all drowning in complexity is that, in order to make any reasoned judgments about the world, we have to go wading through hundreds of pages of specifications to figure out whether upgrading from 1.0 to 2.0 of some uber specification is going to break a piece of functionality that I compiled against.

Also, the software industry is spending vast amounts of money making sure that specifications are backwards compatible, when at some point it would just be so much easier to make a breaking change every so often - done with the knowledge that if your clients followed some recommendations on how to handle API compatibility, the majority of the world would sail on without noticing until such time as they chose to upgrade.

I'm just wondering what point we're driving at. If we use the accepted versions for Java EE, then we literally have nothing to do, as the compatibility versions and ranges are well known and documented. Further, given that most, if not every, Java EE API is backwards compatible, it would seem that we have the ideal world where the version the bundle uses at compile time is the only version that needs to be specified, even if new versions come about and are the versions present in the runtime, since they are exported as versions compatible with those required by these clients.

That is, there are no version ranges required to be used by the client.

I agree that it all works now. But that's only one part of the picture. Though maybe it's the only one that matters...

The concern with open ranges is that they break at unexpected times in the future. The obvious retort is that QA/release procedures should catch these problems, and you set a real maximum version after identifying the problem (assuming the problem occurs in the flow of control covered by the testing). But a huge portion of these costs can be factored out if the person writing the code can make a future prediction about what dependent versions the code they are currently developing will work against.

I'm not trying to suggest that we can magically fix all software release problems, as versions are always going to be a fuzzy match. You cannot ensure that random bugs don't slip in between releases and screw up this simplistic view of the world. But nonetheless, a modular architecture can minimize the problems, and thus the cost, of software development if people can make reasoned judgments about the versions they expect to be compatible.

Of course, I will also accept that optimizing for the future is generally viewed as an antipattern. But I can just see the impending train wreck when people ask why they spent all this time and money migrating to a modular architecture when they still end up with NoSuchMethodError and NoClassDefFoundError because they couldn't express their modularity requirements in any sane way.

Final thought - referring back to the devil and the deep blue sea - if the cost of adding all this extra "value" is too high, then we will never see any of it realized, as it will be too hard for developers to get into.

I know I'm arguing both sides of the fence, but I just don't think it's black and white what the correct answer is.

bjhargrave commented 14 years ago

Comment author: hal.hildebrand@oracle.com

I agree that it all works now. But that's only one part of the picture. Though maybe it's the only one that matters...

I still don't understand why this doesn't work for all time. Again, any version which is backward compatible will export itself as all compatible versions. Consequently, any client which imports one of those versions will be satisfied. It's actually far, far simpler than explaining version ranges, etc. There is nothing to explain to the client: they compile against and import a precise version, no ranges. Done. Works for all time, even in the presence of breaking changes in future versions.

The concern with open ranges is that they break at unexpected times in the future. The obvious retort is that QA/release procedures should catch these problems, and you set a real maximum version after identifying the problem (assuming the problem occurs in the flow of control covered by the testing). But a huge portion of these costs can be factored out if the person writing the code can make a future prediction about what dependent versions the code they are currently developing will work against.

Again, that is exactly what is happening. The coder simply indicates "hey, I compiled against version X and I import exactly version X" and everything just works. The coder doesn't have to read minds or rely on anyone following any versioning scheme. They just indicate what they did, and that's it.

I'm not trying to suggest that we can magically fix all software release problems, as versions are always going to be a fuzzy match. You cannot ensure that random bugs don't slip in between releases and screw up this simplistic view of the world. But nonetheless, a modular architecture can minimize the problems, and thus the cost, of software development if people can make reasoned judgments about the versions they expect to be compatible.

The only judgment they need to make is which version they currently compile against. The notion of which versions are compatible and which are not is something they do not have to concern themselves with. Rather than making this a distributed problem where all clients must read the crystal ball and predict the future, they simply indicate what they know for a fact - i.e. what they compiled against.

Of course, I will also accept that optimizing for the future is generally viewed as an antipattern. But I can just see the impending train wreck when people ask why they spent all this time and money migrating to a modular architecture when they still end up with NoSuchMethodError and NoClassDefFoundError because they couldn't express their modularity requirements in any sane way.

I'll just keep repeating the same point ;) There is no need for anyone to predict the future. Specs "in the future" which are backwards compatible will export the versions these clients are hardwired to. No prediction necessary, and completely future proof.

Final thought - referring back to the devil and the deep blue sea - if the cost of adding all this extra "value" is too high, then we will never see any of it realized, as it will be too hard for developers to get into.

I know I'm arguing both sides of the fence, but I just don't think it's black and white what the correct answer is.

I think it's pretty straightforward, now that I've argued it out. On the one hand we have a solution which is:

a) based on existing OSGi features
b) tested in anger, in real-life development scenarios
c) does not require version ranges
d) future proof
e) trivially explained

The other suggested solutions do not appear to come even close to this: they require new inventions and a translation table, and are incredibly complicated to explain to people.

bjhargrave commented 13 years ago

Comment author: Tim Diekmann <tdiekman@tibco.com>

assign to Peter to point to wiki with package -> version mapping

bjhargrave commented 12 years ago

Comment author: @pkriens

Bug BZ#2113 has been marked as a duplicate of this bug.

bjhargrave commented 12 years ago

Comment author: @pkriens

Working now in RFC 180, see https://docs.google.com/spreadsheet/ccc?key=0AmdDbjzRBRrBdFdPa0hIVktQSnBwSS1WemRkeTZXZUE

bjhargrave commented 12 years ago

Comment author: Michael Keith <michael.keith@oracle.com>

Nice job, Peter.

Could you please change oracle.bnd to Glassfish?

One suggestion would be to also include the version of the spec. The JSR number is included, but having the spec version as part of the entry is what people care more about, I think.

bjhargrave commented 12 years ago

Comment author: Graham Charters <charters@uk.ibm.com>

Could you also change aries.bnd to websphere.bnd?

Regarding spec version, I agree with Mike and think it's essential we include this for the discussion. In fact, I think we need to list the relevant spec versions (plural) for each technology as a lot of the differences come down to chosen baselines.

bjhargrave commented 12 years ago

Comment author: Graham Charters <charters@uk.ibm.com>

A few edit suggestions:

javax.wsdl 1.2 comes from JSR 110, which defined spec version 1.2.
com.sun.faces is not spec API and should be removed.
org.uddi and org.uddi4j are from open source projects and should be removed.