The project's preference is to avoid implementations that depend on proprietary technologies like Google Cloud. Instead, we developed the gRPC-based storage plugin framework that can be used to implement other types of backends.
However, the storage plugin currently only supports the actual storage API. You could still use it to implement support for Pub/Sub on the producer side (i.e. the collector), but the ingester is more tightly coupled with Kafka today. Ideally, we would have a similar abstraction in the ingester that would allow implementing different data streaming sources via hashicorp/go-plugin.
Just to brainstorm: what if we just provide a minimal set of tools for people to build their own "collector-like" components? The current ingester could be refactored to make use of this new framework.
Technically, we already have that, since this is how Uber builds internal binaries. However, at Uber we're willing to keep up with breaking changes in the component APIs. To support such a mode officially, we'd need to be much more restrictive with API changes, which would hamper velocity. This is why I prefer the plugin approach: it provides a stable API surface that is much narrower than the internal APIs of the components.
FWIW, Zipkin used to support this build mode, which they called "custom servers", and they phased it out due to the support burden.
The collector technically does this already. With a gRPC storage backend, one can build a sink and write spans anywhere via the interface defined at https://github.com/jaegertracing/jaeger/blob/master/plugin/storage/grpc/shared/interface.go#L45.
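For illustration, here is a minimal sketch of such a sink-only plugin. It assumes the StoragePlugin shape from that file (SpanReader/SpanWriter/DependencyReader) and the grpc.Serve entry point; exact signatures vary across Jaeger versions (older ones omit the context argument on WriteSpan), so treat this as a shape, not a drop-in implementation.

```go
package main

import (
	"context"

	"github.com/jaegertracing/jaeger/model"
	"github.com/jaegertracing/jaeger/plugin/storage/grpc"
	"github.com/jaegertracing/jaeger/plugin/storage/grpc/shared"
	"github.com/jaegertracing/jaeger/storage/dependencystore"
	"github.com/jaegertracing/jaeger/storage/spanstore"
)

// sinkPlugin is write-only: the embedded nil interfaces satisfy the read
// side of shared.StoragePlugin, which a collector-only sink never calls
// (invoking them would panic, acceptable for a sketch).
type sinkPlugin struct {
	spanstore.Reader
	dependencystore.Reader
}

func (s *sinkPlugin) SpanReader() spanstore.Reader             { return s }
func (s *sinkPlugin) SpanWriter() spanstore.Writer             { return s }
func (s *sinkPlugin) DependencyReader() dependencystore.Reader { return s }

// WriteSpan is where the actual sink logic would go: publish to a queue,
// write to a file, etc.
func (s *sinkPlugin) WriteSpan(ctx context.Context, span *model.Span) error {
	_ = ctx
	_ = span // forward the span to the destination of your choice
	return nil
}

func main() {
	grpc.Serve(&shared.PluginServices{Store: &sinkPlugin{}})
}
```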
The problem I see with how it is done currently is that when one implements a storage plugin, one has to implement the reader, the writer, and the dependency reader. I think splitting these three things into separate plugins would make it possible to easily build a collector-only plugin.
Additionally, gRPC gives us an option to enable soft multi-tenancy.
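As a tiny illustration of what that could look like, the plugin could pick the tenant out of the incoming gRPC metadata. The "x-tenant" header name here is made up for this sketch, not an existing Jaeger convention:

```go
package tenancy

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// tenantFromContext extracts the tenant ID that a client attached as a
// gRPC metadata header, falling back to "default" when it is absent.
func tenantFromContext(ctx context.Context) string {
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		if v := md.Get("x-tenant"); len(v) > 0 {
			return v[0]
		}
	}
	return "default"
}
```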
Edit: Adding plugin versioning and a plugin registry, similar to how Terraform providers are done, is not much work. I'd be happy to help with something like this.
Interesting, I didn't know that Zipkin had that in the past. You are probably right about the support burden, but I see this framework as something very restricted: we'd only provide one method, to register a SpanProcessor, and perhaps some plumbing similar to what we have in our main.go files. It's perhaps worth the experiment?
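Something like the following, where RunCollector and its wiring are entirely hypothetical, and SpanProcessor only mirrors the general shape of the collector's internal processor interface rather than its exact signature:

```go
package main

import "github.com/jaegertracing/jaeger/model"

// SpanProcessor would be the framework's single extension point.
type SpanProcessor interface {
	ProcessSpans(spans []*model.Span) ([]bool, error)
}

// RunCollector stands in for the plumbing that lives in the collector's
// main.go today: flags, receivers, health checks, graceful shutdown.
func RunCollector(p SpanProcessor) {
	_ = p // hypothetical: wire receivers to p and block until shutdown
}

// pubsubProcessor is a trivial user-provided implementation that would
// forward each span batch to a streaming backend.
type pubsubProcessor struct{}

func (pubsubProcessor) ProcessSpans(spans []*model.Span) ([]bool, error) {
	oks := make([]bool, len(spans))
	for i := range spans {
		oks[i] = true // publish spans[i] here
	}
	return oks, nil
}

func main() {
	RunCollector(pubsubProcessor{})
}
```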
> The collector technically does this already. With a gRPC storage backend, one can build a sink and write spans anywhere via the interface defined at https://github.com/jaegertracing/jaeger/blob/master/plugin/storage/grpc/shared/interface.go#L45.
Yeah, I was trying to avoid requiring folks to dig into Jaeger's source code to understand how things fit together. But I like your idea about splitting the concerns into separate plugins.
@jpkrohling is it okay to reach out via email?
Gitter is preferable, especially the open community channel :)
I am closing this as there are no plans to support GCP Pub/Sub in Jaeger.
The future Jaeger version will be based on OpenTelemetry, which has a pluggable architecture, so custom storage could be implemented in https://github.com/open-telemetry/opentelemetry-collector-contrib as receivers and exporters.
Requirement - what kind of business use case are you trying to solve?
Implement tracing with Jaeger using Pub/Sub instead of Kafka on Google Cloud Platform.
Problem - what in Jaeger blocks you from solving the requirement?
The Collector, Ingester, and Spark modules need to be modified in order to produce and consume messages from Pub/Sub instead of Kafka when using the "streaming" mode.
Proposal - what do you suggest to solve the problem or improve the existing situation?
We would like to extend the Collector, Ingester, and Spark modules to use Pub/Sub. We are considering creating a new plugin/storage pointing to Pub/Sub, re-implementing the read and write interfaces for these modules.
Any open questions to address
We have identified the following changes that need to be made in order to implement Pub/Sub streaming with Jaeger:
Create a new plugin for Pub/Sub under: https://github.com/jaegertracing/jaeger/tree/master/plugin/storage/pubsub
Kafka can be used as a reference implementation.
On the Collector side:
Implement a writer for Pub/Sub, similar to what Jaeger uses for Kafka (see the writer sketch after this list): https://github.com/jaegertracing/jaeger/blob/master/plugin/storage/kafka/writer.go
On the Ingester side, the consumer is currently written against Kafka:
https://github.com/jaegertracing/jaeger/blob/master/cmd/ingester/app/consumer/consumer.go
We would have to rewrite it to work with Pub/Sub (see the consumer sketch after this list):
https://cloud.google.com/pubsub/docs/subscriber
Modify main.go for the consumer application.
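To make the Collector-side step concrete, here is a hedged sketch of what a plugin/storage/pubsub writer could look like, mirroring the Kafka writer linked above: it implements the spanstore.Writer interface and publishes each span as a protobuf-encoded Pub/Sub message. The project/topic wiring is illustrative, and the WriteSpan signature differs between Jaeger versions (the context argument is newer).

```go
package pubsub

import (
	"context"

	"cloud.google.com/go/pubsub"
	"github.com/gogo/protobuf/proto"
	"github.com/jaegertracing/jaeger/model"
)

// SpanWriter publishes spans to a Pub/Sub topic instead of a Kafka topic.
type SpanWriter struct {
	topic *pubsub.Topic
}

// NewSpanWriter connects to the given project and topic; both IDs are
// placeholders to be supplied via configuration.
func NewSpanWriter(ctx context.Context, projectID, topicID string) (*SpanWriter, error) {
	client, err := pubsub.NewClient(ctx, projectID)
	if err != nil {
		return nil, err
	}
	return &SpanWriter{topic: client.Topic(topicID)}, nil
}

// WriteSpan satisfies spanstore.Writer, the same interface the Kafka
// writer implements.
func (w *SpanWriter) WriteSpan(ctx context.Context, span *model.Span) error {
	data, err := proto.Marshal(span)
	if err != nil {
		return err
	}
	// Publish is asynchronous; blocking on Get waits for the server ack,
	// roughly the sync-producer behavior of the Kafka path.
	res := w.topic.Publish(ctx, &pubsub.Message{Data: data})
	_, err = res.Get(ctx)
	return err
}
```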
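And for the Ingester-side step, a matching sketch of a consumer built on Pub/Sub's streaming pull (per the subscriber docs linked above). Receive takes over the flow control and redelivery that the Kafka consumer currently manages by hand; the spanstore.Writer it hands spans to would be whatever storage backend the ingester is configured with.

```go
package pubsub

import (
	"context"

	"cloud.google.com/go/pubsub"
	"github.com/gogo/protobuf/proto"
	"github.com/jaegertracing/jaeger/model"
	"github.com/jaegertracing/jaeger/storage/spanstore"
)

// Consumer pulls span messages from a subscription and writes them to storage.
type Consumer struct {
	sub    *pubsub.Subscription
	writer spanstore.Writer
}

// Start blocks, receiving messages until ctx is cancelled.
func (c *Consumer) Start(ctx context.Context) error {
	return c.sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
		var span model.Span
		if err := proto.Unmarshal(msg.Data, &span); err != nil {
			msg.Nack() // undecodable message: let Pub/Sub redeliver or dead-letter
			return
		}
		if err := c.writer.WriteSpan(ctx, &span); err != nil {
			msg.Nack()
			return
		}
		msg.Ack()
	})
}
```

Note that Pub/Sub delivery is at-least-once, so duplicate spans are possible downstream, the same trade-off the Kafka path already has.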
Please confirm if these are the right steps or if we are missing something. We have other unknowns around Spark dependencies (how do they handle reading and storage?) and the Jaeger Operator (how to adapt it to work with Pub/Sub: a new option, creating a Pub/Sub instance?).
Thank you very much.