vectordotdev / vector

A high-performance observability data pipeline.
https://vector.dev
Mozilla Public License 2.0

New `grpc` source #573

Open derekperkins opened 5 years ago

derekperkins commented 5 years ago

We currently use the Stackdriver Go client compiled into our applications because I hate the extra stdout/stderr parsing overhead. I already have a structured log in hand, so why would I serialize it to JSON, print it to stdout, and have it parsed back in, with a significant chance of parsing errors? I'd prefer to be able to ship the structured log directly to Vector and have it routed to the appropriate sinks.

derekperkins commented 5 years ago

As some prior art, here is what the Stackdriver proto looks like. https://github.com/googleapis/googleapis/blob/master/google/logging/v2/logging.proto

LucioFranco commented 5 years ago

We already have protobuf definitions for our internal event type: https://github.com/timberio/vector/blob/master/proto/event.proto. Looks like we could easily expose a gRPC service using https://github.com/tower-rs/tower-grpc that uses our internal protobuf structure.

derekperkins commented 5 years ago

That would be great!

binarylogic commented 5 years ago

This is basically complete, but it raised a number of design questions that we need to answer before we proceed to merge this. See https://github.com/timberio/vector/pull/617#pullrequestreview-264327501.

Once we obtain consensus on this we can reopen that PR, adjust, and then merge.

fcantournet commented 4 years ago

This would be very interesting as a sink for Envoy access logs too, replacing the file-based approach (which entails dealing with log rotation, mounting a hostPath in the container, etc.).

binarylogic commented 4 years ago

I've been thinking about this more, especially as we start to ramp up our marketing efforts around application monitoring and observability. I'm going to respond to the questions raised in that review below.

What does the full user experience here look like? Will we be generating and distributing clients that speak this protocol or is that something users should deal with themselves?

No; to start, clients will deal with this themselves. We have discussed repurposing the Timber libraries as Vector in-app extensions, but that is not a near-term change.

Do we really want to push the details of our internal data representation up into users' applications? Given the multitude of existing formats and protocols for shipping this kind of data around, do we really want to commit to designing, maintaining, and evolving our own?

This is where I think we deviate from the initial implementation. I do not think we should expose our exact internal data model; instead, we should expose a raw event protobuf (the highest form of our data model). The act of deriving metrics, traces, exceptions, etc. from events should be done within Vector itself.

This fits nicely into some of our upcoming marketing efforts around application monitoring and observability. Ideally, applications would emit raw events that describe their internal state, and that's it. Deriving specific observability data types from this data should happen external to the application. If an application emits other data types, it can use the source designed for that data type. For example, if the application exposes metrics through a Prometheus endpoint, Vector can scrape it with the prometheus source, as in the sketch below.
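
To make that concrete, here's a sketch of what the split could look like in Vector's TOML config. Everything here is hypothetical: component names, addresses, and the event field are placeholders, the proposed grpc source doesn't exist yet (the `vector` source stands in for it), and exact option names vary across Vector versions.

```toml
# Hypothetical sketch: the app emits raw events; Vector derives metrics.

# Metrics the app already exposes are scraped directly.
[sources.app_metrics]
type = "prometheus_scrape"        # named `prometheus` in older Vector versions
endpoints = ["http://localhost:9090/metrics"]

# Raw events arrive from the app; the `vector` source stands in for the
# grpc source proposed in this issue.
[sources.app_events]
type = "vector"
address = "0.0.0.0:9000"

# Deriving metrics from raw events happens inside Vector, e.g. with the
# existing log_to_metric transform.
[transforms.request_counts]
type = "log_to_metric"
inputs = ["app_events"]

  [[transforms.request_counts.metrics]]
  type = "counter"
  field = "request_path"          # hypothetical event field
  name = "requests_total"
```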

Should we really have this protocol and the protobuf/tcp "vector" protocol for which we've already built sources and sinks?

Yes. The vector sources/sinks are meant to be opaque to the user. They are not designed for external clients; this source is.

For example, could we implement an SSF source instead? Maybe something from OpenMetrics or OpenTelemetry? Or is this how we should have built the vector source/sink in the first place and this should replace it?

I actually think these are too opinionated for this specific source. Raw events are the true normalized form of all of these data types. If we did decide to support SSF or OpenTelemetry, we should do so explicitly through dedicated sources or decoding transforms.

Final thoughts

The reason I've been thinking about this is that I find it superior to the traditional app -> disk (file) -> agent design. It allows the application to talk directly to Vector. Using the disk as a buffer becomes a design decision the user makes in how they configure Vector's buffers (sketched below).
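
As a sketch of that buffering decision (component names and the address are placeholders; check the buffer docs for current option names and limits), durability becomes a per-sink buffer setting rather than a file the application writes:

```toml
# Hypothetical sketch: the app sends events straight to Vector, and
# durability is opted into on the sink's buffer instead of the app's disk.
[sinks.downstream]
type = "vector"
inputs = ["app_events"]
address = "downstream.example.com:6000"

  [sinks.downstream.buffer]
  type = "disk"            # buffer events on disk before this sink
  max_size = 268435488     # bytes
  when_full = "block"      # or "drop_newest" to shed load instead
```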


@lukesteensen let me know what you think.

lukesteensen commented 4 years ago

I do not think we should expose our exact internal data model, we should, instead, expose a raw event protobuf (the highest form of our data model). The act of deriving metrics, traces, exceptions, etc from events should be done within Vector itself.

This makes sense to me. I'm curious what kind of schema you have in mind for these events. I assume it would be a pretty loose key/value type of structure with various types of values available. Would we support higher level things like metrics, or just simple scalar values?

Yes. The vector sources/sinks are meant to be opaque to the user. They are not designed for any other external clients, this source is.

I wonder if it would make sense to rename them in that case (e.g. native, internal, forward).

This allows the application to talk directly to Vector.

This is always the sticky bit in my mind. Direct, "smart" logging to some probably-network endpoint puts a pretty large burden on the logging library to handle things like load shedding, never blocking, safe buffering, and so on. Going to stdout or local disk is pretty well understood and supported in all languages, so I'd just want to be careful that whatever we recommend is similarly unlikely to cause an application outage for users.

binarylogic commented 4 years ago

Going to stdout or local disk is pretty well understood and supported

Agree, I've just been seeing a lot of advice to avoid the disk which got me thinking about direct app -> vector communication, especially since Vector has configurable disk buffers. I still agree stdout is the best method. It's simple, decoupled, 12-factor, etc.

LucioFranco commented 4 years ago

Stackdriver seems to do some funky `Any` magic in their format, which we could adapt so we don't have to expose our proto internally, but then I'm not sure how worthwhile a grpc sink would be. https://github.com/googleapis/googleapis/blob/master/google/logging/v2/log_entry.proto#L72

ckdarby commented 2 years ago

@binarylogic Did having the grpc input fizzle out?

jszwedko commented 2 years ago

@ckdarby The vector source is actually a gRPC source now. We haven't publicly documented the interface, but it should be pretty stable since we need to maintain compatibility between each minor Vector version (though it may break compatibility across 2+ versions). It's possible we'll just document it and provide additional stability guarantees. You can see the proto definition here: https://github.com/vectordotdev/vector/blob/master/proto/vector.proto (which imports https://github.com/vectordotdev/vector/blob/master/lib/vector-core/proto/event.proto).

ckdarby commented 2 years ago

@jszwedko Does that also mean vector sink is a gRPC sink as well 🤯?

jszwedko commented 2 years ago

@jszwedko Does that also mean vector sink is a gRPC sink as well 🤯?

It is indeed 😄 This is with version = "2" configured on the component. version = "1" is a raw TCP-based protocol.
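
For reference, a minimal sketch of the two sides (addresses and component names are placeholders):

```toml
# Receiving instance: the vector source listens for gRPC traffic.
[sources.from_agents]
type = "vector"
address = "0.0.0.0:6000"
version = "2"               # gRPC; version = "1" is the raw TCP protocol

# Sending instance: the vector sink forwards over gRPC.
[sinks.to_aggregator]
type = "vector"
inputs = ["my_pipeline"]    # placeholder
address = "http://aggregator.example.com:6000"
version = "2"
```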

jszwedko commented 2 years ago

Repurposing this to be an issue for documenting the gRPC interface supported by the vector source and sink, which I believe covers the use-case here.