petercheon opened this issue 1 year ago
See the connected issue in Sleuth: https://github.com/spring-cloud/spring-cloud-sleuth/issues/2297. To me this seems like a new feature: instrumenting the Fork/Join API (or finding a way to inject an executor/completion service).
The classes from the tracing domain make the suggestion a bit hard to understand. Please consider providing an example that uses only classes from the JDK and the context-propagation library so that it is more comprehensible.
I'm skeptical we can do anything for `ForkJoinPool.commonPool()`, which is implicitly used by the Stream API, and there is no option to configure a different implementation. I suppose manual `ContextSnapshot` capturing and restoration is what's available at the moment.
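A minimal sketch of what manual capture/restore could look like, assuming the Micrometer `context-propagation` library is on the classpath (the class and method names below are from `io.micrometer.context`; `process` is a hypothetical placeholder):

```java
// Sketch only: assumes io.micrometer:context-propagation is on the classpath.
import io.micrometer.context.ContextSnapshot;

import java.util.List;

class ManualPropagation {

    void example(List<String> items) {
        // Capture all registered thread-local values on the calling thread.
        ContextSnapshot snapshot = ContextSnapshot.captureAll();

        items.parallelStream().forEach(item -> {
            // Restore the captured values on whichever worker thread runs this lambda,
            // and clear them again when the scope closes.
            try (ContextSnapshot.Scope scope = snapshot.setThreadLocals()) {
                process(item);
            }
        });
    }

    void process(String item) { /* traced work */ }
}
```

The capture happens once before the stream fans out; the restore has to happen inside the lambda, because that is the code that actually runs on the common-pool workers.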
If you have some suggestions, please update the issue.
If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.
Any news on that? Is it possible to do that manually at least?
Have you tried using `ContextExecutorService.wrap(ForkJoinPool.commonPool())`?
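For reference, a sketch of that suggestion, assuming the Micrometer `context-propagation` library (the wrapped service captures the submitter's thread-locals and restores them on the worker thread for each submitted task):

```java
// Sketch only: assumes io.micrometer:context-propagation is on the classpath.
import io.micrometer.context.ContextExecutorService;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;

class WrappedCommonPool {

    void example() {
        // Wrap the common pool; tasks submitted through this wrapper run with
        // the submitting thread's context (trace id, MDC, ...) restored.
        ExecutorService traced = ContextExecutorService.wrap(ForkJoinPool.commonPool());

        traced.submit(() -> {
            // runs on a common-pool worker with the caller's context in place
        });
    }
}
```

Note that the return type is `ExecutorService`, not `ForkJoinPool`, which matters for the Stream API, as discussed below.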
Not really. Should I do it every time before calling `parallelStream()`, or just once on app startup? Thanks for the support.
The problem is that this will return the `ExecutorService` interface that simply wraps the common pool. I don't think you'll be happy with the result, because you want to use the `ForkJoinPool` API :grimacing:
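One reason the concrete `ForkJoinPool` type matters: a parallel stream executes its tasks in the `ForkJoinPool` of the thread that invokes the terminal operation, so there is no way to hand the Stream API a plain `ExecutorService` wrapper. A pure-JDK illustration of that mechanism (class and pool size are arbitrary):

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class CustomPoolDemo {

    public static void main(String[] args) throws Exception {
        // Invoking the terminal operation from inside a custom ForkJoinPool makes
        // the parallel stream run its tasks in that pool instead of commonPool().
        ForkJoinPool pool = new ForkJoinPool(2);
        try {
            int sum = pool.submit(() ->
                    List.of(1, 2, 3, 4).parallelStream()
                            .mapToInt(Integer::intValue)
                            .sum()
            ).get();
            System.out.println(sum); // 10
        } finally {
            pool.shutdown();
        }
    }
}
```

This is why the `ExecutorService` wrapper returned by `ContextExecutorService.wrap(...)` can't be plugged into a parallel stream: the stream machinery only cooperates with an actual `ForkJoinPool`.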
I would like to know how to propagate the TraceId within the Java Stream API in Spring Cloud Sleuth.
In the code below, the trace_id is not propagated, and each client ends up having an individual TraceId.
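(The original snippet from the issue is not preserved here. A minimal illustration of the problem, with hypothetical class and method names, assuming SLF4J logging with Sleuth's MDC integration:)

```java
// Illustration only: the original snippet from the issue is not preserved.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;

class ClientService {

    private static final Logger log = LoggerFactory.getLogger(ClientService.class);

    void callAll(List<String> clients) {
        // The lambdas run on common-pool worker threads that have no current
        // span, so each log line is missing the caller's trace_id.
        clients.parallelStream().forEach(client ->
                log.info("calling {}", client));
    }
}
```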
However, if we modify the above code as shown below, the TraceId is propagated.
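(That snippet is also not preserved. A sketch of the `SpanInScope` variant, assuming the Sleuth 3.x `org.springframework.cloud.sleuth.Tracer` API and a hypothetical `ClientService`:)

```java
// Sketch only: assumes Spring Cloud Sleuth 3.x on the classpath.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.sleuth.Span;
import org.springframework.cloud.sleuth.Tracer;

import java.util.List;

class ClientService {

    private static final Logger log = LoggerFactory.getLogger(ClientService.class);

    private final Tracer tracer;

    ClientService(Tracer tracer) {
        this.tracer = tracer;
    }

    void callAll(List<String> clients) {
        // Capture the caller's current span before the stream fans out.
        Span span = tracer.currentSpan();
        clients.parallelStream().forEach(client -> {
            // Re-open the span on the worker thread so the MDC (trace_id) is populated.
            try (Tracer.SpanInScope ws = tracer.withSpan(span)) {
                log.info("calling {}", client);
            }
        });
    }
}
```

The drawback the next comment raises is visible here: every parallel-stream call site has to be edited to open the scope inside the lambda.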
Unfortunately, applying SpanInScope to all existing Stream API code as mentioned above has its limitations. Is there a way to inject SpanInScope with minimal modifications to the existing Stream API code?
Thank you.