rmannibucau opened 1 year ago
- The related package is `build`, but "build" means nothing for Jakarta EE, which is 99% about "runtime" in terms of API and features (the only exception is JPA, which references an annotation processor but intentionally excludes the build part in its wording since there is no Ant/Gradle/Maven/javac coverage - not a goal of the spec), so as of today it is a dead package you can only use by relying on a vendor, so there is no point having it under the Jakarta EE umbrella.
Not true - see Jakarta EE Core Profile.
The Lite API is not offline-build friendly since it requires a specific runtime phase, so it must happen after compilation, which also means it does not need a build API but can use a runtime API by design (so something failed in the design).
Could you be more specific? Compilation is not a runtime phase.
Most of the API duplicates the `Extension` API, and technically you have been able to run an `Extension` at build time for years; the only limitation would be an `Extension` with runtime impacts...
Basically any portable extension that collects some information and makes use of this information at runtime is incompatible. And from my experience, a minimum of portable extensions are stateless.
There is no performance gain either, after the JVM JITs your code or when using GraalVM (the only notable difference can be excluding a few classes, but that is also doable with `Extension`).
You should be more specific and provide some serious measurements. In any case, it does improve the application startup time significantly because the discovery and dependency resolution happens at build time. We did some experiments and measurements during the early days of Quarkus. And that was the reason why we started to work on a new build-time DI solution and did not reuse Weld.
Not true - see Jakarta EE Core Profile.
So I must understand "not true, you are right"? This link does not help any user at all since it says "we want to do build time" but it does not define anything to enable it, so it does not exist in the spec.
Could you be more specific? Compilation is not a runtime phase.
Right, but the API is not compilation friendly since it relies on `Class` heavily - even if not 100% - so it just does not reach its goal.
Basically any portable extension that collects some information and make use of this information at runtime is incompatible. And from my experience, minimum of portable extensions are stateless.
Nope, you can get a stateful portable extension while the impl can dump its state. The main difference of working from `Extension` instead of duplicating all of its API (Lite) is that the API delta is very minimal (mainly a marker - either `BuildExtension extends Extension` or an annotation as mentioned - plus a state dump you can reload/inject into the extension - it can use multiple mechanisms from a JSON dump to serialization; this is the missing piece the spec should define, but in the end it is 3 API points (2 methods and 1 annotation, for example) and not 2x the whole API).
it does improve the application startup time significantly because the discovery and dependency resolution happens at build time
Please provide numbers comparing `Extension` run and bean capture at build time versus the new API with bean capture at build time; the gain is at most 0. This is how we integrate OpenWebBeans in GraalVM thanks to Geronimo Arthur: we literally run extensions (configurable, although there is no auto filter for now to avoid running actual runtime code), we disable startup events so as not to trigger runtime code, we capture the beans and their proxy code, and then we `native-image` it with GraalVM. The runtime does not run any extension - unless you configure it explicitly to do so - and you get the same perf boost as Quarkus but using plain standard CDI, with no need for a build API and no ambiguity from an API which is unusable because the environment it runs in is undefined and partly inconsistent.
We did some experiments and measurements during the early days of Quarkus. And that was the reason why we started to work on a new build-time DI solution and did no reuse Weld.
I know, but Quarkus is a way to achieve it by creating a new API; using the CDI 2/3 API also enables it, and with minimal enhancements (the ones mentioned) you get the same benefit plus the ability to reuse most of the ecosystem instead of forcing all extension writers to practically reimplement it twice - yes, Lite splits the ecosystem in 2, so we now have an umbrella project with 2 subprojects in practice, which can't last long IMHO. Since Lite is not usable by end users as of today I'd just drop it ASAP as a requirement and rework the original goal with something not breaking CDI itself.
Not true - see Jakarta EE Core Profile.
So I must understand "not true, you are right"? This link does not help at all any user since it writes "we want to do build time" but it does not define anything to enable it so it does not exists in the spec.
"The Core Profile is targeted at developers of modern cloud applications with focus on a minimal set of APIs that support microservices with a focus on enabling implementations that make use of ahead of time complitation build to minimize startup time and memory footprint."
That's IMO enough to justify a build-time oriented API in a spec.
Could you be more specific? Compilation is not a runtime phase.
Right, but the API is not compilation friendly since it relies on `Class` heavily - even if not 100% - so it just does not reach its goal.
I don't understand. What does "compilation friendly" mean?
Basically any portable extension that collects some information and make use of this information at runtime is incompatible. And from my experience, minimum of portable extensions are stateless.
Nop, you can get a stateful portable extension, while the impl can dump its state.
And this part is not exactly easy. Many existing extensions would be broken anyway because their state does not implement `Serializable`, and JSON is very impractical for storing anything but simple data.
The main difference of working from `Extension` instead of duplicating all of its API (Lite) is that the API delta is very minimal (mainly a marker - either `BuildExtension extends Extension` or an annotation as mentioned - plus a state dump you can reload/inject into the extension - it can use multiple mechanisms from a JSON dump to serialization; this is the missing piece the spec should define, but in the end it is 3 API points (2 methods and 1 annotation, for example) and not 2x the whole API).
it does improve the application startup time significantly because the discovery and dependency resolution happens at build time
Please provide numbers comparing `Extension` run and bean capture at build time versus the new API with bean capture at build time; the gain is at most 0. This is how we integrate OpenWebBeans in GraalVM
You're confusing a build time technology (where a JVM instance is the runtime environment of an app) and compilation to a native image.
thanks to Geronimo Arthur: we literally run extensions (configurable, although there is no auto filter for now to avoid running actual runtime code), we disable startup events so as not to trigger runtime code, we capture the beans and their proxy code, and then we `native-image` it with GraalVM. The runtime does not run any extension - unless you configure it explicitly to do so - and you get the same perf boost as Quarkus but using plain standard CDI, with no need for a build API and no ambiguity from an API which is unusable because the environment it runs in is undefined and partly inconsistent.
We did some experiments and measurements during the early days of Quarkus. And that was the reason why we started to work on a new build-time DI solution and did not reuse Weld.
I know but quarkus is a way to achieve it creating a new API, using CDI 2/3 API also enables it and with minimal enhancements (the ones mentionned) you get the same benefit + enable to reuse most of the ecosystem instead of forcing all extension writers to reimplement it twice practically - yes lite makes the ecosystem split in 2 so we now have an umbrella project with 2 subprojects in practise, can't last long IMHO. Since lite is not usable by end user as of today I'd just drop it asap as a requirement and rework the original goal with something not breaking CDI itself.
CDI lite has a lot of pitfalls in the way it is designed
I certainly hope I learn more about them.
and consumers/end users can't rely on any part of this specification subpart
How come?
so let's make it optional for now for any implementation.
The only thing that's actually new in CDI 4.0 is the build compatible extension API. Are you suggesting to make that optional? I'm game -- I don't know why people care so much about extensions, we could have saved so much time simply by ignoring that part and insisting that Lite has no extension API whatsoever.
Pitfalls to solve before thinking about making it a profile or anything concrete in the implementation:
- The related package is `build`, but "build" means nothing for Jakarta EE, which is 99% about "runtime" in terms of API and features (the only exception is JPA, which references an annotation processor but intentionally excludes the build part in its wording since there is no Ant/Gradle/Maven/javac coverage - not a goal of the spec), so as of today it is a dead package you can only use by relying on a vendor, so there is no point having it under the Jakarta EE umbrella.
Are you implying how the API works based on its package name? If so, you should be fair: the package name is `build.compatible`. CDI Full implementations are expected to implement it at runtime, which is perfectly possible, as proven by Weld.
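For reference, a minimal sketch of what such an extension looks like against the `jakarta.enterprise.inject.build.compatible.spi` package, as I understand the CDI 4.0 API (the bean class and qualifier below are placeholders, not from any spec):

```java
import jakarta.enterprise.inject.build.compatible.spi.BuildCompatibleExtension;
import jakarta.enterprise.inject.build.compatible.spi.ClassConfig;
import jakarta.enterprise.inject.build.compatible.spi.Enhancement;
import jakarta.inject.Qualifier;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AddQualifierExtension implements BuildCompatibleExtension {

    // Placeholder bean class and qualifier, only here to keep the sketch self-contained.
    public static class MyService {}

    @Qualifier
    @Retention(RetentionPolicy.RUNTIME)
    public @interface MyQualifier {}

    // @Enhancement methods run during type discovery; a Lite implementation can execute
    // this at build time, while a Full implementation (e.g. Weld) runs it at bootstrap.
    @Enhancement(types = MyService.class)
    public void addQualifier(ClassConfig clazz) {
        clazz.addAnnotation(MyQualifier.class);
    }
}
```

Nothing in the code itself pins it to a phase; the same class can be handled by a build step or by a runtime bootstrap.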
- Core is based on Lite even though technically there is no real reason Core inherits the build package (it is even the opposite which would make sense IMHO, i.e. enabling a standard application to move part of its logic to build time, but ultimately these are two transversal concerns with a light overlap, like contexts).
The `build.compatible` package does not allow moving parts of application logic to build time. That has never been the point. The major goal of CDI Lite was to allow bean discovery and dependency wiring to move to build time. That either requires dropping extensions from CDI Lite, or creating a build compatible extension API. (I'd like to specifically point out that running extensions is a major part of bean discovery.) We did the latter.
The real reason why Full includes the entirety of Lite is coherence. Applications, libraries and extensions written against Lite should work without a change in Full. We had long and fierce discussions about the possibility of Lite-only extensions. That would just create 2 separate CDI worlds that couldn't be bridged.
- Lite API is not offline build friendly since it requires a specific runtime phase so it must happen after compilation which also means it does not need a build API but can use a runtime API by design (so something failed in the design)
There is no requirement on a "specific runtime phase" at all. It is perfectly possible to implement CDI Lite in a way that bean discovery and dependency wiring happens solely at build time. There are 2 independent projects that do that and IIUC, they are both relatively close to passing the CDI Lite part of the CDI TCK.
In later replies, you allude to the usage of `Class` objects in the API. That does indeed require loading classes to the JVM that runs the extension, but the class objects are only used to transfer the class name and nothing else. Unless the extension itself does something with the class, it doesn't even have to get initialized. I'm pretty sure the API allows implementations to call `getName()` on the `Class` object as the first thing and then proceed just with that.
We originally had a `String`-based variant of the API, but it turned out that using `Class` literals is just fine. And if the API was based on `String` class names, there would be no way to prevent anyone from writing `MyClass.class.getName()` anyway. In fact, I think keeping the `String`-based API in the `@Discovery` phase was a mistake; we should have moved that to `Class` as well.
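For context, a rough sketch of the `String`-based `@Discovery` phase being discussed, as I understand it (the class name is a placeholder):

```java
import jakarta.enterprise.inject.build.compatible.spi.BuildCompatibleExtension;
import jakarta.enterprise.inject.build.compatible.spi.Discovery;
import jakarta.enterprise.inject.build.compatible.spi.ScannedClasses;

public class DiscoveryStringExtension implements BuildCompatibleExtension {

    @Discovery
    public void registerExtraClass(ScannedClasses scan) {
        // Classes are registered by name in this phase; "com.example.MyService"
        // is purely illustrative. Later phases such as @Enhancement use Class literals.
        scan.add("com.example.MyService");
    }
}
```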
- Most of the API duplicates the `Extension` API, and technically you have been able to run an `Extension` at build time for years; the only limitation would be an `Extension` with runtime impacts, so it can be as efficient to just mark extensions with `@BuildFriendly` - which can even be vendor specific since non-loadable annotations are ignored by the JVM.
This has been refuted many times. It is not possible to run Portable Extensions at build time because:
- There is no performance gain either, after the JVM JITs your code or when using GraalVM (the only notable difference can be excluding a few classes, but that is also doable with `Extension`).
Performance is a property of implementation, not specification. You may be fine with spending time on bean discovery and dependency wiring during application startup, but that doesn't mean everyone else is too.
"The Core Profile is targeted at developers of modern cloud applications with focus on a minimal set of APIs that support > microservices with a focus on enabling implementations that make use of ahead of time complitation build to minimize startup >> time and memory footprint."
That's IMO enough to justify a build-time oriented API in a spec.
No, it explains the intent, I agree, but it does not enable anyone to use it. What is the API to use it? What is the `ToolProvider` a user can rely on to actually build? This is out of scope for Jakarta, so it does not sit well in the spec. So in the end, all the build time API is there, defined but unusable for end users.
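To illustrate the missing piece being pointed at: the JDK already has `java.util.spi.ToolProvider` as a standard hook for named tools, but the spec defines no such tool, so anything like the sketch below is purely hypothetical (the "cdi-build" tool name does not exist anywhere):

```java
import java.io.PrintWriter;
import java.util.spi.ToolProvider;

public class CdiBuildRunner {
    public static void main(String[] args) {
        // Look up a hypothetical "cdi-build" tool; the spec does not define any such entry point.
        ToolProvider tool = ToolProvider.findFirst("cdi-build")
                .orElseThrow(() -> new IllegalStateException("no standard build tool is defined by the spec"));
        int exitCode = tool.run(new PrintWriter(System.out, true), new PrintWriter(System.err, true), args);
        System.exit(exitCode);
    }
}
```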
I don't understand. What does "compilation friendly" mean?
I think you mixed up (as in speaking vs writing) compilation and build: compilation means `javac`, and if you rely on `Class` you are doomed there for several use cases - which is why annotation processors don't, for example.
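For illustration of that constraint, a minimal annotation processor works on `javax.lang.model` mirrors precisely because the types being compiled generally cannot be loaded as `Class` objects yet (the annotation name below is a placeholder):

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.MyQualifier") // placeholder annotation name
@SupportedSourceVersion(SourceVersion.RELEASE_17)
public class NameCollectingProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element annotated : roundEnv.getElementsAnnotatedWith(annotation)) {
                // Only names and mirrors are available here; no Class<?> is ever loaded.
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.NOTE, "found " + annotated.getSimpleName());
            }
        }
        return false; // do not claim the annotations
    }
}
```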
And this part is not exactly easy. Many existing extensions would be broken anyway because their state does not implement Serializable and JSON is very inpractical for storing anything but simple data.
Well, you can store anything in JSON as long as the extension is responsible for its own serialization (`BuildFriendlyExtension<S> extends Extension { S toState(); void fromState(S); }`). Agreed it is not easy, but it is the same as copying the whole extension API with a new design (explicit event driven vs annotated methods/listeners), so in the end the build package does not bring anything to end users.
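For concreteness, a sketch of the hypothetical interface suggested above - this is not part of CDI or any spec, just the 2-method shape being argued for:

```java
// Hypothetical API sketch, NOT part of CDI: an Extension that can externalize its state
// so a vendor can run it at build time and restore the captured state at runtime.
import jakarta.enterprise.inject.spi.Extension;

public interface BuildFriendlyExtension<S> extends Extension {

    // Called after the build-time run; how the returned state is persisted
    // (JSON, serialization, generated code, ...) is left to the implementation.
    S toState();

    // Called at runtime before the container starts, with the state captured at build time.
    void fromState(S state);
}
```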
The only thing that's actually new in CDI 4.0 is the build compatible extension API. Are you suggesting to make that optional? I'm game -- I don't know why people care so much about extensions, we could have saved so much time simply by ignoring that part and insisting that Lite has no extension API whatsoever.
People care about extensions because they are what makes CDI so strong and not just a shortcut for setters. If you make Lite without any extension (i.e. drop the build package) I can envision a real use case and could agree on that; it would be a good compromise.
Are you implying how the API works based on its package name? If so, you should be fair. The package name is build.compatible. CDI Full implementations are expected to implement it at runtime, which is perfectly possible as proven by Weld.
Then you just duplicated the extension API, so it does not make sense, which is why part of the CDI community asked not to make this package part of the official Jakarta EE release.
The real reason why Full includes the entirety of Lite is coherence. Applications, libraries and extensions written against Lite should work without a change in Full. We had long and fierce discussions about the possibility of Lite-only extensions. That would just create 2 separate CDI worlds that couldn't be bridged.
I understood, but you create the inconsistency the other way: there are 2 parallel modes, so whichever one enforces the other, it is inconsistent.
This has been refuted many times. It is not possible to run Portable Extensions at build time because
Please stop writing that, it works and is used. It has some limitations and context but works as well as the new API.
Performance is a property of implementation, not specification.
So no need of a new API?
You may be fine with spending time on bean discovery and dependency wiring during application startup, but that doesn't mean everyone else is too.
Once again, as written, it is NOT the case with the `Extension` API.
This has been refuted many times. It is not possible to run Portable Extensions at build time because
Please stop writing that, it works and is used. It has some limitations and context but works as well as the new API.
@rmannibucau Please show me an example of (A) a build-time CDI implementation and (B) portable extensions that do work with this implementation. Note that I'm not talking about GraalVM native image compilation which is a different use case.
@mkouba GraalVM is more or less the same case; the only difference is that GraalVM forces you to become build time for part of the process, whereas other cases (plain Java) do not require it. Here is an implementation which works like that (with the current extension limitations, and depending on the app it can require some tuning/extension filtering and event disablement, but all of that comes as toggles in OpenWebBeans): https://github.com/apache/geronimo-arthur/blob/master/knights/openwebbeans-knight/src/main/java/org/apache/geronimo/arthur/knight/openwebbeans/OpenWebBeansExtension.java. The impl is usable without native-image by tuning the Maven plugin and owb.properties.
So yes, the extension API needs some extension (very light, as you saw), and no, the build package does not solve the goal it had, so let's make it an optional part of CDI and work toward a more global and simple solution in the long term :pray:.
The only thing that's actually new in CDI 4.0 is the build compatible extension API. Are you suggesting to make that optional? I'm game -- I don't know why people care so much about extensions, we could have saved so much time simply by ignoring that part and insisting that Lite has no extension API whatsoever.
People care about extensions because it is what makes CDI so strong and not just a shortcuts for setters.
Well exactly! There you have it. Lite needs an extension API.
If you make lite without any extension (ie drop build package) I can envision a real use case and could agree on that, would be a good compromise.
See above.
Are you implying how the API works based on its package name? If so, you should be fair. The package name is build.compatible. CDI Full implementations are expected to implement it at runtime, which is perfectly possible as proven by Weld.
Then you just duplicated the extension API, so does not make sense
It does make sense, because the "duplicate" has some important properties the original doesn't.
which is why part of the CDI community asked to not make this package part of the official JakartaEE release.
Agree to disagree.
The real reason why Full includes the entirety of Lite is coherence. Applications, libraries and extensions written against Lite should work without a change in Full. We had long and fierce discussions about the possibility of Lite-only extensions. That would just create 2 separate CDI worlds that couldn't be bridged.
I understood but you create the inconsistency the other way, there are 2 parallel modes so whatever one onforces the other, it is inconsistent.
Except they are not parallel. One is a strict subset of the other.
This has been refuted many times. It is not possible to run Portable Extensions at build time because
Please stop writing that, it works and is used. It has some limitations and context but works as well as the new API.
It does not work in general. The approach you present very specifically relies on the ability to snapshot and restore the JVM state (especially the managed heap). This is hardly common on stock JVMs, and even that is not enough for extensions that start threads or open files or sockets. Hopefully, those are rare :-)
Your other suggestion to require extension authors to correctly serialize their state is a recipe for disaster frankly (how do you serialize a client proxy to JSON?). The only sane way to write such "build friendly" extensions would be to push most of the state out of the extension instance into synthetic beans and observers. The build compatible extension API makes that the only option.
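For reference, a rough sketch of what "pushing the state into synthetic beans" can look like with the build compatible API, as I understand it (`MyConfig`, `MyConfigCreator` and the "greeting" parameter are made-up names):

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Instance;
import jakarta.enterprise.inject.build.compatible.spi.BuildCompatibleExtension;
import jakarta.enterprise.inject.build.compatible.spi.Parameters;
import jakarta.enterprise.inject.build.compatible.spi.Synthesis;
import jakarta.enterprise.inject.build.compatible.spi.SyntheticBeanCreator;
import jakarta.enterprise.inject.build.compatible.spi.SyntheticComponents;

public class ConfigCapturingExtension implements BuildCompatibleExtension {

    // State computed during the build-time phases is not kept in the extension instance;
    // it is passed along as synthetic bean parameters instead.
    @Synthesis
    public void registerConfig(SyntheticComponents synth) {
        synth.addBean(MyConfig.class)
             .type(MyConfig.class)
             .scope(ApplicationScoped.class)
             .withParam("greeting", "hello")
             .createWith(MyConfigCreator.class);
    }

    public static class MyConfig {
        final String greeting;
        MyConfig(String greeting) { this.greeting = greeting; }
    }

    public static class MyConfigCreator implements SyntheticBeanCreator<MyConfig> {
        @Override
        public MyConfig create(Instance<Object> lookup, Parameters params) {
            // The captured parameter is materialized at runtime; no extension state is needed.
            return new MyConfig(params.get("greeting", String.class));
        }
    }
}
```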
Performance is a property of implementation, not specification.
So no need of a new API?
We certainly do not need a new API for performance. We certainly do need a new API for enabling other architectures of CDI implementations.
Well exactly! There you have it. Lite needs an extension API.
At runtime; at build time people always do what they want, but they care about dynamism at runtime for a lot of apps. But the point is that Lite can leverage the runtime API a lot instead of forking it in style.
It does make sense, because the "duplicate" has some important properties the original doesn't.
Well, I didn't find much in terms of features for end users. I can see some implementation ones, but that is leaking an impl into the spec; it can't work long-term, otherwise you will end up with dozens of concurrent APIs.
Except they are not parallel. One is a strict subset of the other.
Not really, both can work without the other so they are parallel/transversal.
It does not work in general. The approach you present very specifically relies on the ability to snapshot and restore the JVM state (especially the managed heap). This is hardly common on stock JVMs, and even that is not enough for extensions that start threads or open files or sockets. Hopefully, those are rare :-)
It does as soon as you enable extensions to dump/load their state, which is 2 methods and not a duplication of the full extension API; it has been proven multiple times. It even got implemented. Technically, if it does not work then Lite can't work either; you can demonstrate that if you want.
how do you serialize a client proxy to JSON?
Depends on the part you need, but it can be as simple as a `List<String>` or a bytecode dump/reuse.
The build compatible extension API makes that the only option.
Technically it is 1-1, just an API you maybe prefer, but it does not enable any use case.
We certainly do need a new API for enabling other architectures of CDI implementations.
Means you want an implementation to surface in the spec; if so, please keep it implementation specific and do not surface it in the spec. I have a ton of wishes coming from implementations, but I don't think you would want most of them because they are API and library specific in the end.
So please, once again, let's make the build package optional, enhance the extension API to offer a build time API, and enhance the extension API to optimize parts of it (bulk load instead of a per-type event, for example; it literally takes the complexity from O(n) to O(1) and would benefit all extension use cases).
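A sketch of that last suggestion: the first observer below is today's per-type Portable Extension event; the bulk variant is hypothetical (it does not exist in CDI) and is shown commented out only to illustrate the proposed optimization:

```java
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.inject.spi.Extension;
import jakarta.enterprise.inject.spi.ProcessAnnotatedType;

public class DiscoveryExtension implements Extension {

    // Today: the container fires one event per discovered type, so an extension observing
    // it is invoked O(n) times for n types.
    <T> void onType(@Observes ProcessAnnotatedType<T> event) {
        // inspect event.getAnnotatedType() ...
    }

    // Hypothetical bulk variant: a single event carrying all discovered types, letting the
    // extension iterate (or index) them once.
    // void onTypes(@Observes ProcessAnnotatedTypes events) {
    //     events.getAnnotatedTypes().forEach(type -> { /* ... */ });
    // }
}
```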
There are so many wrong claims in this thread that I'm not even able to catch up. For example this:
The real reason why Full includes the entirety of Lite is coherence. Applications, libraries and extensions written against Lite should work without a change in Full. We had long and fierce discussions about the possibility of Lite-only extensions. That would just create 2 separate CDI worlds that couldn't be bridged.
Truth is:
Except they are not parallel. One is a strict subset of the other.
Not really, both can work without the other so they are parallel/transversal.
They could be parallel. They would be if we made the BCExtensions API Lite-only, which we did not.
We certainly do need a new API for enabling other architectures of CDI implementations.
Means you want an implementation to surface in the spec
Quite the opposite. I want the spec to not prevent certain implementation architectures.
Quite the opposite. I want the spec to not prevent certain implementation architectures.
It does, but I don't want the opposite either, i.e. the spec letting all impl details surface, which is what the Lite API is.
The only thing that's actually new in CDI 4.0 is the build compatible extension API. Are you suggesting to make that optional? I'm game -- I don't know why people care so much about extensions, we could have saved so much time simply by ignoring that part and insisting that Lite has no extension API whatsoever.
People care about extensions because it is what makes CDI so strong and not just a shortcuts for setters.
Well exactly! There you have it. Lite needs an extension API.
But creating a new framework to program Extensions while trashing the existing variant is maybe not a clever idea, because Jakarta EE is ALL about investment security in existing code!
Basically CDI Lite is NOT CDI but a completely new specification with some very small overlap (mostly JSR-330 with a very little bit of JSR-299, basically).
Your other suggestion to require extension authors to correctly serialize their state is a recipe for disaster frankly (how do you serialize a client proxy to JSON?)
Wow, do you really think this is a problem? Good news for you: no, it's not.
We've solved this when we wrote CDI 1.0 already, for the JSF use case and clustering. The mechanism is `BeanManager#getPassivationCapableBean()` in conjunction with `jakarta.enterprise.inject.spi.PassivationCapable`.
graalvm is more or less the same case, only difference is graalvm enforces you to become build time for part of the process whereas other cases (java) do not require it.
@rmannibucau Not really, graalvm native image is a very specific use case because (A) the target runtime is not JVM and (B) the graalvm native image tool takes care of state serialization/deserialization, i.e. makes a snapshot of the JVM state when building the native image.
Your other suggestion to require extension authors to correctly serialize their state is a recipe for disaster frankly (how do you serialize a client proxy to JSON?)
Wow, you really think this is really a problem? Good news for you: no, it's not.
We've solved this when we wrote CDI 1.0 already, for the JSF use case and clustering. The mechanism is `BeanManager#getPassivationCapableBean()` in conjunction with `jakarta.enterprise.inject.spi.PassivationCapable`.
In other words, it's not a generic solution at all.
@mkouba graalvm native-image is 100% about converting bytecode to a native format (to caricature); everything done around it (Quarkus, Arthur, the native-image agent, ...) is about precomputing the runtime at build time (or not) and just serving it as-is at runtime (AOT), and this applies to any process, including a plain Java one. This process is 100% doable with the extension API as soon as you add the dump/load API as mentioned, or you assume extensions are stateless as you mentioned, but we don't need any new discovery/enhancement/... API in any case; we just need a write and reload API. Not creating a new API lets the ecosystem stay consistent and unique; the build API created a new framework and ecosystem that is inconsistent with the existing one as soon as you are not Red Hat or IBM (no offence, it is more that you own 100% of the stack and write it yourself), which means there is no more diversity in the ecosystem, so no more abstraction in practice, so no more need for a spec at Eclipse Jakarta if we keep it like that. I don't think that was the intent, but it is the direct implication of the current spec. Since it is not usable by users and quite easy to fix, let's just do it before it is too late.
In other words, it's not a generic solution at all.
You were talking about client proxies. For those, the additional rules of "6.6.3 Passivation capable dependencies" apply, especially "all beans with normal scope are passivation capable dependencies". Those requirements can be met regardless of whether the proxy got created at runtime or at build time.
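A short sketch of the mechanism referenced in this exchange (`BeanManager#getPassivationCapableBean` plus `PassivationCapable`; the helper methods are illustrative, not from the spec):

```java
import jakarta.enterprise.inject.spi.Bean;
import jakarta.enterprise.inject.spi.BeanManager;
import jakarta.enterprise.inject.spi.PassivationCapable;

public class BeanIdRoundTrip {

    // Store only the bean's String id, never the proxy or the Bean instance itself.
    static String captureId(Bean<?> bean) {
        return ((PassivationCapable) bean).getId();
    }

    // Later (another JVM, after a restore, after a build step, ...), resolve the Bean again by id.
    static Bean<?> restore(BeanManager beanManager, String id) {
        return beanManager.getPassivationCapableBean(id);
    }
}
```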
Java EE/Jakarta EE in general has historically favored profiles over optional features. The reason for this is more deterministic compatibility and portability across certified implementations. If the project decides to go the route of optional features, I suggest getting feedback from the platform project and the specification committee. Profiles also generally require some platform level alignment, so seeking input before really finalizing decisions would be advisable.
@m-reza-rahman I'd say profiles are quite different and not that well adapted here; they enable flipping features but, in the EE ecosystem, should remain a pile/stack. Here it is literally building two concurrent frameworks in parallel which diverge/fork, so there is no real hope JAX-RS can embrace both at some point without implementing both, which means the 20+ specs relying on CDI will need to integrate twice with CDI.
Java EE/Jakarta EE in general has historically favored profiles over optional features.
Did it really? Afair profiles only got introduced in JavaEE 6 (2010). Optional features otoh have been around as early as the first servlet specification. Another good example is XA support in JDO and later JPA. That was 2002-ish?
I don't see that we would go the route of optional as that has been explicitly called out as no longer desired by the spec committee.
I don't see that we would go the route of optional as that has been explicitly called out as no longer desired by the spec committee.
But it got broken in fresh specs just a few weeks later... even in the CDI specification. CDI Lite removes a lot of features from CDI ONLY if you are running in CDI Lite mode, e.g. `bean-discovery-mode` ALL, which is only available in EE servers. What else than "optional" is that? Can you explain this @starksm64?
@starksm64 it is fine, the ticket can be about "drop" instead of making it optional, but as of today the spec is 1. inconsistent and 2. unusable due to that, so it must be fixed, and we are sure there is no adopter as of today because of 2, so let's act :).
I suggest that a more forward looking pursuit is exploring how to supersede and drop the older portable extensions API. The ability to author extensions is a key value proposition to CDI. It would be good to see if there can be one API to do this instead of two.
I suggest that a more forward looking pursuit is exploring how to supersede and drop the older portable extensions API. The ability to author extensions are a key value proposition to CDI. It would be good to see if there can be one API to do this instead of two.
This is already possible via the `BuildCompatibleExtension` API - it is usable in both CDI Lite and Full (you can try that with Weld), and there is even an annotation to allow special handling if both variants of an extension exist.
The existing `jakarta.enterprise.inject.spi.Extension` approach is retained because removing it would be very invasive and breaking to start with. Also, it won't hurt to have them side by side to make sure BCE covers the full scope of functionality that's needed for Lite.
Last but not least, classic extensions might still choose (or need? Just guessing here...) to support extra functionality that's only specific to Full.
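If I read the API right, the annotation referred to above is `@SkipIfPortableExtensionPresent`; a minimal usage sketch (the extension names are placeholders):

```java
import jakarta.enterprise.inject.build.compatible.spi.BuildCompatibleExtension;
import jakarta.enterprise.inject.build.compatible.spi.SkipIfPortableExtensionPresent;
import jakarta.enterprise.inject.spi.Extension;

// Placeholder portable extension, only here to keep the sketch self-contained.
class MyLegacyExtension implements Extension {
}

// If MyLegacyExtension is present as a portable extension (CDI Full), this build compatible
// variant is skipped, so the same logic does not run twice.
@SkipIfPortableExtensionPresent(MyLegacyExtension.class)
public class MyDualModeExtension implements BuildCompatibleExtension {
    // build-time oriented phases (@Discovery, @Enhancement, ...) would go here
}
```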
Does it make sense to document the general positioning of the two APIs further, with a view towards possibly deprecating the older API at a future point?
Does it make sense to document the general positioning of the two APIs further, with a view towards possibly deprecating the older API at a future point?
I don't think the view is worth capturing as we cannot be sure when/if that happens and given that it would force existing extensions to rewrite, I don't see that happening any time soon. To me personally, it makes more sense to see the BCE API mature and with that we can eventually come to marking the original one as deprecated if that's something that's requested. Otherwise there is no issue for both to coexist and for any runtime impl to meet such requirement (Ladislav even had an impl-independent draft of how to make BCE execute at runtime via PE if you're interested; you'd need to browse the mailing list for that tho).
In theory, you can either be rigorous and write everything via BCE because you are going to migrate your app every other Monday to a different platform that may use either Lite or Full. Alternatively, you can choose the model that fits you based on whether you are running in an environment supporting Lite or Full. The compatibility is there but which one you choose to pick is up to you and up to what's easier to manage. In practice I doubt there are many applications that would commonly swap between, say, build time approach and runtime approach - or if they were, they'd have bigger hurdles to overcome in the first place.
In other words, I don't see any urgent need to consolidate the two models so long as there is a one way portability (from Lite to Full) between them.
Just a minor point of clarification - the extensions API is still mostly geared towards plug-in developers instead of business application developers, correct? So basically it’s a matter of a plug-in developer choosing to target something like Payara only, something like Quarkus only or ideally both?
In the longer term, I would like to see us settle in a place where a plug-in writer can use just one API to target all environments, possibly introducing conditional handling within the same plug-in if truly required. Is that the current design intent of BCE?
Just a minor point of clarification - the extensions API is still mostly geared towards plug-in developers instead of business application developers, correct?
That is my understanding, yes. Application developers can use the API as well, but it is certainly geared towards framework integrators.
So basically it’s a matter of a plug-in developer choosing to target something like Payara only, something like Quarkus only or ideally both?
The intent is that you can either target Full only, or you can target both. The option to target "Quarkus only" is not present.
In the longer term, I would like to see us settle in a place where a plug-in writer can use just one API to target all environments, possibly introducing conditional handling within the same plug-in if truly required. Is that the current design intent of BCE?
Exactly.
Hi Reza!
I suggest that a more forward looking pursuit is exploring how to supersede and drop the older portable extensions API. The ability to author extensions is a key value proposition to CDI. It would be good to see if there can be one API to do this instead of two.
There are a few things to consider:
1st: from a functionality pov: You can do all the things of the BuildCompatibleExtension also with a classic CDI Extension. But not the other way around. BuildCompatibleExtensions are much more restricted. For example it is not possible to configure the behaviour and apply Interceptors/Alternatives etc. based on configuration on different servers. That's important for staging concepts. There is also another big problem in big modular projects: if you only have build time modification, then you'd need to crack open dependencies, modify them and rebundle them. Lots of fun really - NOT :(
2nd: from a container pov: It is possible to implement all the BuildCompatibleExtension features via a standard CDI Extension. And if someone wants to use build time handling then this is also perfectly possible right now. Helidon, Arthur, Meecrowave, etc. ALL have run on GraalVM quite nicely for many years! All of those - and also Weld! - then use proprietary ways to enrich the classes with information, e.g. generating proxies and such upfront. But they are ALL proprietary anyway, even when using BuildCompatibleExtensions.
3rd: That means if you use the BuildCompatibleExtension system to enrich your project at build time, then it will ONLY run on that very targeted container! If you build for Quarkus, you will ONLY be able to run it on Quarkus, but not on WebLogic, TomEE, Glassfish or whatever other server. You are bound to the very vendor you compiled it for. That's imo not very EE-ish, isn't it? That doesn't make that big of a difference for simple apps. But it might be a problem if you have 3rd party libs or reusable parts in bigger companies.
4th: no, CDI Extensions are not only for system integrators. I've seen tons of CDI Extensions written by and for customers. It might not be random Joe who uses it, but it's really used a lot, well established, and it would be a pity to lose it.
I suggest that a more forward looking pursuit is exploring how to supersede and drop the older portable extensions API
We have a choice:
Do you really want to go the new-spec route? If so, please create a new spec and don't break CDI; also consider that it means breaking all the other specs, so you just break Jakarta with that. Last: you do NOT enable any use case for end users, since everything is already possible without the new extension API, as shown.
1st: from a functionality pov: You can do all the things of the BuildCompatibleExtension also with a classic CDI Extension. But not the other way around. BuildCompatibleExtensions are much more restricted. For example it is not possible to configure the behaviour and apply Interceptors/Alternatives etc based on configuration on different servers.
There are restrictions due to the very nature of build-time compatibility (e.g. you can't read runtime configuration directly in the extension, although there are ways to engineer around that), and there are holes that can be filled (e.g. the API does not support transforming beans or injection points yet, but nothing prevents adding that in the future).
There is also another big problem in big modular projects: if you only have build time modification, then you'd need to crack open dependenies, modify them and rebundle them. Lot's of fun really - NOT :(
There's exactly nothing in the specification that requires modifying and rebundling dependencies. Zero. Zip. Zilch. Nada. I'm not familiar enough with ODI to speak about that, but ArC doesn't do that and doesn't need to do that.
2nd: from a container pov: It is possible to implement all the BuildCompatibleExtension feature via a standard CDI Extension.
Intentionally. So that it's trivial for existing CDI implementations to add support for the API.
And if someone wants to use build time handling then this is also perfectly possible right now. Helidon, Arthur, Meecrowave, etc ALL run on GraalVM quite nicely since many years!
Don't conflate build time support with GraalVM Native Image support. They are not the same thing.
If we wanted to compile Weld to native image, we would have done that. I'll help myself with an analogy: if Weld is the Guice of CDI, we wanted to build Dagger. CDI 3.0 and before didn't allow that.
3rd: That means if you use the BuildCompatibleExtension system to enrich your project at build time, then it will ONLY run on that very targetted container!
Incorrect.
If you build for Quarkus, you will ONLY be able to run it on Quarkus, but not on WebLogic, TomEE, Glassfish or whatever other server.
Incorrect.
You are bound to the very vendor you compiled it for.
Incorrect.
That's imo not very EE-ish, isn't? That doesn't make that big of a difference for simple apps. But it might be a problem if you have 3rd party libs or reusable parts in bigger companies.
Fortunately enough, reusable libraries remain reusable.
This page https://jakarta.ee/connect/ includes a link to the Jakarta EE Specifications Calendar. I hereby invite anyone and everyone to join any of the CDI calls and I'll be happy to explain how ArC works, including demos and details, as well as guided code walkthrough if anyone is interested.
I also hereby refuse to continue engaging in discussions that are based on unverified assumptions presented as facts. Thanks for understanding.
This page https://jakarta.ee/connect/ includes a link to the Jakarta EE Specifications Calendar. I hereby invite anyone and everyone to join any of the CDI calls and I'll be happy to explain how ArC works, including demos and details, as well as guided code walkthrough if anyone is interested.
I also hereby refuse to continue engaging in discussions that are based on unverified assumptions presented as facts. Thanks for understanding.
+100 to both of these points. I am also present at virtually all of those meetings (and have been present for the duration of CDI 4 development) and will gladly provide similar insight.
There's exactly nothing in the specification that requires modifying and rebundling dependencies. Zero. Zip. Zilch. Nada. I'm not familiar enough with ODI to speak about that, but ArC doesn't do that and doesn't need to do that.
Right, but real life ;). In practice build time has enough constraints that it rarely works. Quarkus is a great proof of what Mark stated, and even if EE can try to enable that, it does not give any guarantee nor validation (which is part of the design of EE), so this part is just empty and therefore Mark's statement is very accurate from a user standpoint. Implementing the validations is possible but requires heavy processes which are not that friendly for modern development (do you want to go back to WAS/WebLogic times?).
Intentionally. So that it's trivial for existing CDI implementations to add support for the API.
The opposite, actually ;). Once again, both APIs are concurrent, so you can always try to build one on top of the other, but as of today the runtime API is a superset of the build API, so you can implement the build API with the runtime API but not the opposite, and in the end you get back to this particular ticket: the new, unsupported, build-related API is not needed at all, as you mention.
Don't conflate build time support with GraalVM Native Image support. They are not the same thing.
I don't think that was the point; GraalVM just implies build time, so it is an example of the statement.
3rd: That means if you use the BuildCompatibleExtension system to enrich your project at build time, then it will ONLY run on that very targetted container! Incorrect.
I guess you are both as right as wrong: it is undefined and does not exist in the spec, so it may or may not be the case as of today (literally, build time extensions are not a spec thing as of today; they are mentioned but not spec-ed in a way that makes them implementable).
If you build for Quarkus, you will ONLY be able to run it on Quarkus, but not on WebLogic, TomEE, Glassfish or whatever other server. Incorrect.
It is, since Quarkus does not support CDI and the generated code is ArC specific, so it is not reusable. However, the code can (rarely) run on other servers if it does not use some specific features (inject-less qualifiers, for example) or generated code (which can break other things like bean resolution).
I hereby invite anyone and everyone to join any of the CDI calls
Please use a user-friendly medium == GitHub for any discussions; otherwise, what is done privately does not exist for the community, generally speaking (not specific to Jakarta, more a general way to run OSS projects). The lists have the drawback of requiring registration and mail config for not much gain, and the calls are hard to make for several people either because of timezone or personal constraints, so GitHub stays the best way to unify async communication as of today IMHO.
Also note that Mark is right too: if you want to leverage the build support of Quarkus you need to include in your app the jar `target/quarkus-app/quarkus/generated-bytecode.jar` (think of building a lib, for example, and wanting to provide the build "boost", since that's the point) - otherwise you just get CDI code without any build support and need to redo the build discovery and optimization from scratch, which at some point (when the app is big enough) kind of implies you don't always test that - just as we don't always run `native-image` because it is too slow - so it stays a very specific solution.
Now, assuming you do that, how is this code portable:
```java
import io.quarkus.arc.Arc;
import io.quarkus.arc.ArcContainer;
import io.quarkus.arc.ClientProxy;
import io.quarkus.arc.InjectableBean;
import io.quarkus.arc.InjectableContext;
import io.quarkus.arc.impl.ClientProxies;

public class GreetingResource_ClientProxy extends GreetingResource implements ClientProxy {
...
```
So either you use the build time API as a pure `native-image` solution, and then you end up in the vendor specific solution which is not needed at spec level, as demonstrated multiple times over the last 2 years, or you get an API which does not reach its goal at all.
In both cases the API failed.
Add the other reasons from this ticket and we can safely drop the package.
There's exactly nothing in the specification that requires modifying and rebundling dependencies. Zero. Zip. Zilch. Nada. I'm not familiar enough with ODI to speak about that, but ArC doesn't do that and doesn't need to do that.
Now let's assume you have an LGPL-licensed dependency which has some package-scoped classes which need to be intercepted. Let's also assume that we are talking about Java 17 with the Java Module System in place. How is that solved in your environment?
CDI lite has a lot of pitfalls in the way it is designed and consumers/end users can't rely on any part of this specification subpart so let's make it optional for now for any implementation.
Pitfalls to solve before thinking about making it a profile or anything concrete in the implementation:
- The related package is `build`, but "build" means nothing for Jakarta EE, which is 99% about "runtime" in terms of API and features (the only exception is JPA, which references an annotation processor but intentionally excludes the build part in its wording since there is no Ant/Gradle/Maven/javac coverage - not a goal of the spec), so as of today it is a dead package you can only use by relying on a vendor, so there is no point having it under the Jakarta EE umbrella.
- Core is based on Lite even though technically there is no real reason Core inherits the build package (it is even the opposite which would make sense IMHO, i.e. enabling a standard application to move part of its logic to build time, but ultimately these are two transversal concerns with a light overlap, like contexts).
- The Lite API is not offline-build friendly since it requires a specific runtime phase, so it must happen after compilation, which also means it does not need a build API but can use a runtime API by design (so something failed in the design).
- Most of the API duplicates the `Extension` API, and technically you have been able to run an `Extension` at build time for years; the only limitation would be an `Extension` with runtime impacts, so it can be as efficient to just mark extensions with `@BuildFriendly` - which can even be vendor specific since non-loadable annotations are ignored by the JVM.
- There is no performance gain either, after the JVM JITs your code or when using GraalVM (the only notable difference can be excluding a few classes, but that is also doable with `Extension`).

So before this API is adopted by people, let's just do a fix release saying it is optional and make it a specific profile for 4.1.0 (or whatever the next CDI version is).