This could be required for the first release, but given that we only have propagation support for RESTEasy, that MP-Concurrency is not released yet (we could fork it), and that it would require a merge in RESTEasy anyway, it might be really hard to get this done in time for the first release. We can just hope nobody uses the reactive API in the first release ;)
Ping @cescoffier for his opinion as Grand Reactive Guru.
I would say: not required for the first release, but soon after that (March-April). Do you think it would be a possible target?
About forking MP-Concurrency, I'm mostly doing that for reactive messaging.
Yeah, that's my thought as well, I agree.
But that means we start looking at it ;)
About RESTEasy, we can look at the converters at the same time (thinking about using them in reactive messaging, BTW).
@manovotn I've pushed an initial impl at https://github.com/FroMage/quarkus/tree/smallrye-context-propagation, but we will need a special module for ArC context propagation, and possibly a new one for JTA too, because ATM it depends on Weld.
There are two things not working ATM:

Do you think you could take a look, please?
The JTA bits are test-only; we can maybe move the tests to the `/tests/` module and cheat it that way? Otherwise it uses simple CDI API things like `CDI.current()`, which are supported by ArC as well.
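For illustration, this is the kind of portable lookup in question; a minimal sketch, assuming a `TransactionManager` bean as the looked-up type:

```java
import javax.enterprise.inject.spi.CDI;
import javax.transaction.TransactionManager;

public class TmLookup {

    // Programmatic lookup through the plain CDI API -- no Weld-specific
    // classes -- so the same code runs on both Weld and ArC.
    static TransactionManager currentTransactionManager() {
        return CDI.current().select(TransactionManager.class).get();
    }
}
```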
Other than that, I will take over the ArC context propagation bit, starting from your branch. Should find some time for it on Tue/Wed! :-)
Oh, I did not know that ArC supported `CDI.current()`; then you're right, we could move the Weld deps to the test scope, no?
It has test scope even now - https://github.com/smallrye/smallrye-context-propagation/blob/master/jta/pom.xml#L41
What I meant was moving those tests to the `smallrye-context-propagation/tests` module, since then you are left with only the CDI dependency and should be able to reuse the module. At least I think so.
@FroMage I implemented the Arc context propagation here (two commits) - https://github.com/manovotn/quarkus/tree/contextPropagationArc
It's on top of your branch and passes the `ContextUnitTest` you added for it. I've also updated the test with a small correction and added another test case for the cleared CDI context.
What else do we need before we make a PR out of it?
OK, let me pull them into my branch, rebase, and see where we stand.
Well, what do we do without the CDI extension? Is it key? Can we rewrite it for ArC?
I am not 100% sure, but I think it should be possible. I know you can add beans via extensions, but we also need to read injection points, which I am not sure is possible (@mkouba do you think this little extension can be ArC-ed?). I wouldn't consider it a blocker for the scope of integration in this issue, though.
Hehe, little extension? Well, you can inspect all injection points and register custom beans. So I believe it should be possible.
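For readers following along, a rough sketch of the two hooks being discussed, assuming the final `org.eclipse.microprofile.context` package name; the real SmallRye-CP extension does more than this (per-injection-point configuration, name registration):

```java
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.AfterBeanDiscovery;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessInjectionPoint;

import org.eclipse.microprofile.context.ManagedExecutor;

public class ExecutorExtension implements Extension {

    private final Set<String> injectionPointNames = new HashSet<>();

    // Fired once per injection point in the deployment; this observer only
    // sees the ManagedExecutor ones. The real extension would gather the
    // configuration annotations here.
    void onInjectionPoint(@Observes ProcessInjectionPoint<?, ManagedExecutor> pip) {
        injectionPointNames.add(pip.getInjectionPoint().getMember().getName());
    }

    // Register a custom bean that produces the executor instances.
    void afterBeanDiscovery(@Observes AfterBeanDiscovery abd) {
        abd.addBean()
           .types(ManagedExecutor.class)
           .produceWith(instance -> ManagedExecutor.builder().build());
    }
}
```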
@manovotn so it looks like I have to do at least three things:

* merge [04d3d8a](https://github.com/quarkusio/quarkus/commit/04d3d8acf33c74444a1408badef17e22b4118a49) to extend the request context life for ArC in case of async requests (not sure why it works ATM)
* change our transactional support, this one is much tougher because it relies on CDI interceptors, which don't really have a notion of async
* integrate the Quarkus thread pool into the SmallRye CP managed executor
And it turns out that https://github.com/stuartwdouglas/quarkus-old/commit/04d3d8acf33c74444a1408badef17e22b4118a49 does not work, because `exchange` is null in `DeploymentManagerImpl`: `}).call(null, null);`

Huh, it's called twice with a null `exchange`, but those are not real requests. I can just bail out of those calls.
@manovotn or @mkouba how can I pass a servlet context from `UndertowDeploymentTemplate` to `TransactionalInterceptorBase`? It looks like I need to stuff it in `InvocationContext`, but I'm really not sure how.
> @manovotn so it looks like I have to do at least three things:
>
> * merge [04d3d8a](https://github.com/quarkusio/quarkus/commit/04d3d8acf33c74444a1408badef17e22b4118a49) to extend the request context life for ArC in case of async requests (not sure why it works ATM)
> * change our transactional support, this one is much tougher because it relies on CDI interceptors, which don't really have a notion of async
So what we currently have in SR for transactions isn't good enough? I thought Quarkus uses Narayana as well, and we had some example with async there (the fullstack test, I think?).
> * integrate the Quarkus thread pool into the SmallRye CP managed executor
So we want to leverage one big pool from Quarkus and build all `ManagedExecutor` instances from that? If so, then it's going to be quite difficult to manage several ME instances and their limitations on top of this one pool. I thought we were more in favor of the `ThreadContext` approach here.
> @manovotn or @mkouba how can I pass a servlet context from `UndertowDeploymentTemplate` to `TransactionalInterceptorBase`? It looks like I need to stuff it in `InvocationContext`, but I'm really not sure how.
If you mean `javax.interceptor.InvocationContext`, then you can use the data there (`getContextData()`) to pass some key/value pairs throughout the interceptor invocation chain.
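For illustration, a minimal sketch of that mechanism; the interceptor bindings are omitted for brevity, and the `"servlet.context"` key is made up:

```java
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

// An interceptor early in the chain stashes a value...
class StashingInterceptor {

    @AroundInvoke
    Object stash(InvocationContext ctx) throws Exception {
        ctx.getContextData().put("servlet.context", new Object() /* the servlet context */);
        return ctx.proceed();
    }
}

// ...and any interceptor later in the same invocation chain can read it back.
class ReadingInterceptor {

    @AroundInvoke
    Object read(InvocationContext ctx) throws Exception {
        Object servletContext = ctx.getContextData().get("servlet.context");
        // use servletContext, then continue the chain
        return ctx.proceed();
    }
}
```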
> So what we currently have in SR for transactions isn't good enough? I thought Quarkus uses Narayana as well, and we had some example with async there (the fullstack test, I think?).
Well, propagation is one thing, but we also have to make sure the server part that commits/rolls back transactions runs at the end of the async request, and not earlier. So that's outside the scope of the thread provider, but it still has to be done.
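A hedged sketch of the shape that could take with the Servlet async API; `javax.servlet.AsyncListener` is real, but the transaction calls here are placeholders, and (as noted further down) exception mapping complicates the error case:

```java
import java.io.IOException;

import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;

class TransactionEndingListener implements AsyncListener {

    @Override
    public void onComplete(AsyncEvent event) throws IOException {
        txCommit(); // the async request finished normally
    }

    @Override
    public void onError(AsyncEvent event) throws IOException {
        txRollback(); // the container saw an unhandled failure
    }

    @Override
    public void onTimeout(AsyncEvent event) throws IOException {
        txRollback();
    }

    @Override
    public void onStartAsync(AsyncEvent event) throws IOException {
        // if async processing is restarted, stay registered for the new cycle
        event.getAsyncContext().addListener(this);
    }

    private void txCommit() { /* placeholder for the real Narayana commit */ }

    private void txRollback() { /* placeholder for the real Narayana rollback */ }
}
```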
> So we want to leverage one big pool from Quarkus and build all `ManagedExecutor` instances from that? If so, then it's going to be quite difficult to manage several ME instances and their limitations on top of this one pool. I thought we were more in favor of the `ThreadContext` approach here.
Why would it be harder? They just use another executor service, provided by Quarkus. That makes it safer to use.
> If you mean `javax.interceptor.InvocationContext`, then you can use the data there (`getContextData()`) to pass some key/value pairs throughout the interceptor invocation chain.
Yeah, but I want to pass the value from the `UndertowDeploymentTemplate`, and I've no access to the interceptor context there, only the request context.
Ah, I succeeded with `ServletRequestContext.current()`, but it means `quarkus-narayana-jta` has to depend on `quarkus-undertow`.
> it means `quarkus-narayana-jta` has to depend on `quarkus-undertow`
-1 :-(
Well, either that, or we make it an optional dependency, or we make an abstraction that can tell us if there's a request that turned async and how to get notified when it's done, OR WORSE we add support for async CDI interceptors ;)
Unless you have another idea?
@manovotn so about the managed executors, what I want to be able to do is to make sure that users don't go around creating executor services and thread pools when Quarkus has a thread pool for that.
For context, for those not familiar with MP-CP: you can build `ManagedExecutor` instances which allow you to get `CompletionStage` instances that have context propagation and run on specific (non-JDK-default) thread executors when you call any `CS.*Async()` method. ATM, SmallRye-CP will always create a new `FixedThreadPoolExecutor` for each `ManagedExecutor`, but in Quarkus I think we want to use the Quarkus thread pool as much as possible.
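To make that concrete, a minimal usage sketch (assuming the final `org.eclipse.microprofile.context` package name):

```java
import java.util.concurrent.CompletableFuture;

import org.eclipse.microprofile.context.ManagedExecutor;

public class ManagedExecutorExample {

    public static void main(String[] args) {
        ManagedExecutor executor = ManagedExecutor.builder()
                .maxAsync(4)    // per-executor concurrency cap
                .maxQueued(100) // per-executor queue cap
                .build();       // in SmallRye-CP this currently creates its own pool

        // Every *Async() stage runs on the executor's pool, with the
        // captured contexts restored around the user code.
        CompletableFuture<String> cf = executor
                .supplyAsync(() -> "hello")
                .thenApplyAsync(String::toUpperCase);

        System.out.println(cf.join());
        executor.shutdown();
    }
}
```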
Now, I can do it two ways:

* Extend SmallRye-CP so that if a certain (new) service provider is found, we delegate all managed executor building to it, and Quarkus would return managed executors that use the same Quarkus thread pool and cannot be shut down. But then it becomes impossible to create new thread pools via this mechanism (which I think is a good thing).
* Just declare CDI producers in Quarkus that produce a managed executor that uses the Quarkus thread pool, which would be injected via the normal `@Inject ManagedExecutor` (especially if we don't have the CDI extension, but I wonder how it would work with the extension which ATM does the building/name-registering), but users would still be able to build managed executors using the API, which would not use the Quarkus thread pool. (See the producer sketch after this list.)

WDYT? And I guess I probably need to ask @vietj @dmlloyd and @stuartwdouglas if they have any input about this question.
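A hedged sketch of what the second option could look like; the injected `quarkusPool` bean and the `QuarkusManagedExecutor` wrapper are hypothetical names, not existing Quarkus API:

```java
import java.util.concurrent.ExecutorService;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Inject;

import org.eclipse.microprofile.context.ManagedExecutor;

@ApplicationScoped
public class QuarkusExecutorProducer {

    @Inject
    ExecutorService quarkusPool; // hypothetical: assume the core pool is exposed as a bean

    // Backs plain @Inject ManagedExecutor with the shared Quarkus pool.
    // Note the limitation discussed above: executors built directly via the
    // ManagedExecutor API would still bypass this producer.
    @Produces
    @ApplicationScoped
    ManagedExecutor managedExecutor() {
        return new QuarkusManagedExecutor(quarkusPool); // hypothetical wrapper
    }
}
```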
There could be good reasons to allow users to make separate thread pools. Solution 2 seems better in that regard, though I don't know if we want to support doing it programmatically (as opposed to having, e.g., named thread pools configured in the configuration the same way the built-in pool is configured). Either way, though, we would want all managed executors to use our thread pool implementation class if possible.
@FroMage for the extension from SR CP, I talked to Martin and this is not currently possible. I created https://github.com/quarkusio/quarkus/issues/2243 to allow ArC to do that. Once that's in place, it should be pretty easy to achieve.
As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different `ManagedExecutor` instances which all realistically run on the same underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.
> * Extend SmallRye-CP so that if a certain (new) service provider is found, we delegate all managed executor building to it, and Quarkus would return managed executors that use the same Quarkus thread pool and cannot be shut down. But then it becomes impossible to create new thread pools via this mechanism (which I think is a good thing).
Not being able to shut down the executor is against the spec, BTW. I also don't understand what you mean by "impossible to create new thread pools via this mechanism"; if you use the builder pattern to create a new one, what happens? You are supposed to be able to have several different executors, each with varying settings such as maxAsync and maxQueued.
> * Just declare CDI producers in Quarkus that produce a managed executor that uses the Quarkus thread pool, which would be injected via the normal `@Inject ManagedExecutor` (especially if we don't have the CDI extension, but I wonder how it would work with the extension which ATM does the building/name-registering), but users would still be able to build managed executors using the API, which would not use the Quarkus thread pool.
-1, this solves next to nothing and you are basically re-doing what the extension does. It would also block us from bringing in that extension (it would clash in many cases). Once we bring the extension in, it can build executors from the Quarkus thread pool just like the builder will (however that will work).
So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)
Would that be in addition to the ability to define separate underlying pools?
> So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)
Yeah, and the IBM implementors told us that's what they're doing. Which makes a lot of sense IMO.
> As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different `ManagedExecutor` instances which all realistically run on the same underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.
I guess I'll find a "view" executor service somewhere on Stack Overflow ;) It's bound to exist. Or we make a single injectable `ManagedExecutor` that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create `ManagedExecutor`s will get new thread pools.
> Not being able to shut down the executor is against the spec, BTW.
Pretty sure it's not, for pre-defined container-managed executors. IBM said they were doing exactly this and it makes a lot of sense.
> So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)
>
> Yeah, and the IBM implementors told us that's what they're doing. Which makes a lot of sense IMO.
So maybe this is a dumb question, and if so I apologize, but: what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?
> I guess I'll find a "view" executor service somewhere on Stack Overflow ;) It's bound to exist.
I think I already have some code in `jboss-threads` that might be reusable for this purpose. If not, though, it's not really difficult to implement.
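Since the "view" idea comes up repeatedly below, here is a minimal sketch of the concept: a facade over a shared pool that enforces its own concurrency cap and can be "shut down" without touching the pool. The class and its queueing policy are illustrative assumptions, not the eventual jboss-threads code:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;
import java.util.concurrent.RejectedExecutionException;

class ViewExecutor implements Executor {

    private final Executor sharedPool;
    private final int maxConcurrency;
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private int running;
    private boolean shutdown;

    ViewExecutor(Executor sharedPool, int maxConcurrency) {
        this.sharedPool = sharedPool;
        this.maxConcurrency = maxConcurrency;
    }

    @Override
    public synchronized void execute(Runnable task) {
        if (shutdown) {
            throw new RejectedExecutionException("view is shut down");
        }
        if (running < maxConcurrency) {
            running++; // take a slot and submit straight to the shared pool
            sharedPool.execute(() -> run(task));
        } else {
            pending.add(task); // over this view's cap: hold until a slot frees
        }
    }

    private void run(Runnable task) {
        try {
            task.run();
        } finally {
            Runnable next;
            synchronized (this) {
                next = pending.poll();
                if (next == null) {
                    running--; // free the slot
                }
            }
            if (next != null) {
                sharedPool.execute(() -> run(next)); // hand the slot to a queued task
            }
        }
    }

    // Shutting down only affects this view; the shared pool stays alive.
    synchronized void shutdown() {
        shutdown = true;
    }
}
```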
> what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?
I'll go ahead and claim that you can't shut down a container-managed executor. It should either silently pretend to be shut down and do nothing, or throw.
> what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?
>
> I'll go ahead and claim that you can't shut down a container-managed executor. It should either silently pretend to be shut down and do nothing, or throw.
What I meant is that you need to pretend you did shut it down, basically keep a boolean around and forbid any further use once the user called shutdown. There are some TCKs for this, I think. Obviously, you are not going to kill the whole Quarkus thread pool on a whim ;-)
> As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different `ManagedExecutor` instances which all realistically run on the same underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.
>
> I guess I'll find a "view" executor service somewhere on Stack Overflow ;) It's bound to exist. Or we make a single injectable `ManagedExecutor` that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create `ManagedExecutor`s will get new thread pools.
Making just one defeats the purpose; you might as well then use `ThreadContext` and delegate to the Quarkus thread pool directly. Having several differently configured executors has its use cases. The view layer is IMO the best approach, so if it ain't that difficult, all the better!
Frankly, I don't think min/max make any sense if the underlying thread pool is shared and container-managed. I can't think of any use case for that. What I can totally accept is that there may be a need for several `ManagedExecutor`s if Quarkus gains multiple thread pools. Vert.x has a worker pool and an IO pool, for example.
> you might as well then use `ThreadContext` and delegate to the Quarkus thread pool directly
Well, if you really want to call the `*Async()` methods (and frankly, in tests, I do a lot, though I would never do that in production code), then it's much easier to use the `ManagedExecutor` API, because you don't need to pass it around at every method.
> What I meant is that you need to pretend you did shut it down, basically keep a boolean around and forbid any further use once the user called shutdown. There are some TCKs for this, I think. Obviously, you are not going to kill the whole Quarkus thread pool on a whim ;-)
Well, I mean, if we do support "view" executors with their own tasks, limits, etc., then it makes sense to be able to shut down the "view", including things like returning submitted-but-not-run tasks from `shutdownNow()`, etc. I think it's probably going to be somewhat more complex than just a boolean, though.
> Or we make a single injectable `ManagedExecutor` that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create `ManagedExecutor`s will get new thread pools.
>
> Making just one defeats the purpose; you might as well then use `ThreadContext` and delegate to the Quarkus thread pool directly. Having several differently configured executors has its use cases. The view layer is IMO the best approach, so if it ain't that difficult, all the better!
I think both should be supported. A view layer works well for many cases, but separate pools have advantages as well, so I think it's smart to be able to support both.
> I think I already have some code in `jboss-threads` that might be reusable for this purpose. If not, though, it's not really difficult to implement.
Where, @dmlloyd? I can't immediately locate it in https://github.com/jbossas/jboss-threads/tree/master/src/main/java/org/jboss/threads
> Ah, I succeeded with `ServletRequestContext.current()`, but it means `quarkus-narayana-jta` has to depend on `quarkus-undertow`.
So the funny thing is that the servlet async listener gets notified of completion/error, but its notion of error is different from the notion of a user error: if RESTEasy maps the exception to a `Response`, then it will not throw to the container, and so the servlet async listener sees completion but no error. That's not acceptable in our case, because here it would mean the transaction interceptor does not see the exceptions.
> I think I already have some code in `jboss-threads` that might be reusable for this purpose. If not, though, it's not really difficult to implement.
>
> Where, @dmlloyd? I can't immediately locate it in https://github.com/jbossas/jboss-threads/tree/master/src/main/java/org/jboss/threads
I was thinking of https://github.com/jbossas/jboss-threads/blob/2.x/src/main/java/org/jboss/threads/OrderedExecutor.java, which was deleted in the current master. But upon rereading it, it looks like it doesn't do what I remember it doing (it's focused on sequential execution rather than being an actual "view").
OK thanks, pity.
If you guys want me to, I can look into implementing a view executor. I have a pretty clear idea of what is involved, at any rate.
I'd love if you could, because you would have to review my bug-ridden code and fix it anyway and it would take longer ;)
Okay :)
Should I just add it to `jboss-threads`, or is there a place you'd prefer it to be located?
Dunno, should I use it or copy it? Is `jboss-threads` already a dependency of `quarkus-core`?
Right now, RESTEasy relies on context propagation with reactivecontexts, which supports only RESTEasy. MP-Context-Propagation should supersede this, but has not been released yet.
There's an implementation in SmallRye-Context-Propagation which already supports CDI context propagation (via Weld).
We should look into supporting ArC and transaction (and security, when it lands) context propagation, especially since we are going to push reactive programming, so people will notice that they can't do CDI, transactions, etc. inside their reactive pipelines.
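To illustrate the problem, a hypothetical resource where the pipeline loses its context; `MyRequestScopedBean` and its `decorate` method are made up:

```java
import java.util.concurrent.CompletableFuture;

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@RequestScoped
class MyRequestScopedBean {
    String decorate(String s) {
        return ">> " + s;
    }
}

@Path("/example")
public class ReactiveResource {

    @Inject
    MyRequestScopedBean bean;

    @GET
    public CompletableFuture<String> get() {
        return CompletableFuture
                .supplyAsync(() -> "data")         // runs on a ForkJoinPool thread...
                .thenApply(s -> bean.decorate(s)); // ...where the request context is not
                                                   // active: ContextNotActiveException
    }
}
```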