logstash-plugins / logstash-codec-protobuf

Codec plugin for parsing Protobuf messages
Apache License 2.0

Troubles when trying to connect OpenTelemetry Collector OTLP/HTTP protobuf to Logstash #70

Open aubm opened 1 year ago

aubm commented 1 year ago

Logstash information:

Please include the following information:

  1. Logstash version (e.g. bin/logstash --version)
  2. Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker)
  3. How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes)
  4. How was the Logstash Plugin installed

I'm using Logstash version 8.5.1. I run a custom Docker image in which I installed the plugin and copied the Ruby classes required to decode the protobuf request payload.

Here is the content of the Dockerfile.

FROM docker.elastic.co/logstash/logstash:8.5.1

RUN /usr/share/logstash/bin/logstash-plugin install logstash-codec-protobuf
COPY opentelemetry /opt/protobuf/opentelemetry

The Ruby files are generated using the tooling in this repository: https://github.com/open-telemetry/opentelemetry-proto. A copy of them can be found here: https://github.com/aubm/logstash-protobuf-codec-plugin-opentelemetry-collector-bug-attached-files/tree/master/dockerfiles/logstash/opentelemetry/proto

I run Logstash on Minikube using Docker as the container runtime.

Description of the problem including expected versus actual behavior:

I'm trying to send logs collected by the OpenTelemetry Collector to Logstash. I configured the OpenTelemetry Collector to send logs over HTTP using Protobuf encoding.

Here is the logstash input configuration.

input {
  http {
    port => 10000
    codec => protobuf {
      class_name => "opentelemetry.proto.collector.logs.v1.ExportLogsServiceRequest"
      class_file => '/opt/protobuf/opentelemetry/proto/collector/logs/v1/logs_service_pb.rb'
      protobuf_root_directory => "/opt/protobuf"
      protobuf_version => 3
    }
  }
}

I have a Kubernetes service exposing port 10000 of the Logstash container, and the OpenTelemetry exporter is configured as follows.

exporters:
  otlphttp:
    logs_endpoint: http://logstash-pipelines-inputs.default.svc.cluster.local:10000
    compression: none

I haven't found an easy way to debug the content of the request sent by the OpenTelemetry Collector yet, but I'm pretty confident that the Ruby class I've configured is the right one. I base that on having explored the Go code in the exporter. For what it's worth, here is where the request payload is encoded: the NewExportRequestFromLogs function returns an ExportRequest, which is a wrapper around otlpcollectorlog.ExportLogsServiceRequest, and the MarshalProto function eventually delegates to *otlpcollectorlog.ExportLogsServiceRequest.Marshal().
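
Since there's no easy way to see what the collector actually sends, one option is to point the exporter at a small stand-in server that dumps the raw body to disk. A minimal Python sketch (standard library only; the port and the captured_body.bin file name are arbitrary choices, not anything from the plugin):

# Stand in for Logstash on port 10000 and dump each raw POST body to a
# file so the bytes can be inspected offline.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        with open("captured_body.bin", "wb") as f:  # arbitrary dump file
            f.write(body)
        print(f"captured {len(body)} bytes, Content-Type: {self.headers.get('Content-Type')}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 10000), DumpHandler).serve_forever()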

All the Kubernetes manifests can be found here under the config folder: https://github.com/aubm/logstash-protobuf-codec-plugin-opentelemetry-collector-bug-attached-files/tree/master

When I run the whole thing, here is what I see in the Logstash console.

[2023-09-26T20:59:34,393][WARN ][logstash.codecs.protobuf ][main][0ab6a3768106bebf116bb6c30480f3ab4507dcce11f495901fff4cef37068b58] Couldn't decode protobuf: #<Google::Protobuf::ParseError: While parsing a protocol message, the input ended unexpectedly in the middle of a field.  This could mean either that the input has been truncated or that an embedded message misreported its own length.>
[... the same WARN line repeats roughly every 200 ms ...]

These error messages suggest that the wrong Ruby class is being used to decode the request payload. However, as I explained, I'm pretty confident that I've configured the right one.
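
A way to check this independently of Logstash is to decode a captured request body with the official Python bindings. A sketch, assuming pip install opentelemetry-proto and a captured_body.bin dump such as the one produced by the stand-in server above:

# Decode the captured body to confirm ExportLogsServiceRequest is the
# right message type for what the collector sends.
from opentelemetry.proto.collector.logs.v1 import logs_service_pb2

with open("captured_body.bin", "rb") as f:
    data = f.read()

req = logs_service_pb2.ExportLogsServiceRequest()
req.ParseFromString(data)  # raises DecodeError if the type or bytes are wrong
print(req)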

Looking through the other issues on this repository, I found other mentions of the plugin not working properly with the http input, namely this one: https://github.com/logstash-plugins/logstash-codec-protobuf/issues/34. This made me wonder whether there is indeed a bug somewhere.

Steps to reproduce:

Assuming you have Minikube and Docker installed on your machine, here are the steps to reproduce.

  1. Clone the repository that contains all the files to reproduce
git clone https://github.com/aubm/logstash-protobuf-codec-plugin-opentelemetry-collector-bug-attached-files.git
  2. Build the Docker images and load them on Minikube
docker build -t mock-http-output:v0 dockerfiles/mock-http-output
docker build -t logstash:v0 dockerfiles/logstash

minikube image load mock-http-output:v0
minikube image load logstash:v0
  3. Run the containers on Minikube
kubectl config use-context minikube
kubectl apply -f config/logstash
kubectl apply -f config/config.yaml
  4. Wait a few seconds for Logstash to be ready and inspect the logs
kubectl logs -f logstash-0
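
To take the collector out of the equation entirely, a known-good request can also be POSTed straight at the http input. A sketch, assuming pip install opentelemetry-proto requests and a port-forward to the Logstash service (the localhost URL is an assumption; adjust it to your setup):

# Build a minimal ExportLogsServiceRequest and POST it directly to the
# Logstash http input, bypassing the OpenTelemetry Collector.
import requests
from opentelemetry.proto.collector.logs.v1 import logs_service_pb2

req = logs_service_pb2.ExportLogsServiceRequest()
record = req.resource_logs.add().scope_logs.add().log_records.add()
record.body.string_value = "hello from python"

resp = requests.post(
    "http://localhost:10000",  # e.g. via kubectl port-forward
    data=req.SerializeToString(),
    headers={"Content-Type": "application/x-protobuf"},
)
print(resp.status_code, resp.text)

If this request decodes cleanly while the collector's does not, the problem is upstream of the codec; if it fails the same way, the input or codec is mangling the body.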
m4rkw commented 11 months ago

I'm seeing the same issue. I've tried both tcp and http inputs to Logstash and cannot get it to work no matter what I do.

I'm using the Logstash 8.6.1 public Docker image with the OpenSearch plugin and release 1.3.0 of this protobuf plugin.

Logstash config is:

input {
  tcp {
    id => "tcp_protobuf_tls_input"
    port => 8081
    codec => protobuf {
      class_name => "opentelemetry.proto.logs.v1.LogsData"
      include_path => ["/etc/logstash/opentelemetry/logs_pb.rb"]
      protobuf_root_directory => "/etc/logstash/opentelemetry"
      protobuf_version => 3
    }
    ssl_enable => true
    ssl_cert => "/path/to/cert.crt"
    ssl_key => "/path/to/cert.key"
    ssl_key_passphrase => "blah"
    ssl_verify => false
  }
}
$ protoc --version
libprotoc 3.12.4

otel-collector-config.yaml

receivers:
  filelog:
    include:
      - /tmp/in.log
exporters:
  logging:
    verbosity: detailed
  otlp:
    endpoint: logstashhostname:8801
    compression: none
    tls:
      insecure: false
      ca_file: /path/to/ca.crt
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: []
      exporters: [logging, otlp]

Result:

$ echo test >> /tmp/in.log

[2023-10-26T12:33:29,009][WARN ][l.c.protobuf                 ] Couldn't decode protobuf: #<Google::Protobuf::ParseError: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.
[2023-10-26T12:33:29,009][WARN ][l.c.protobuf                 ] Couldn't decode protobuf: #<Google::Protobuf::ParseError: Protocol message contained an invalid tag (zero).

Also tried using http input, in that case I get:

Couldn't decode protobuf: #<Google::Protobuf::ParseError: Protocol message tag had invalid wire type.

If I configure the OTEL collector to write to Kafka and then read the data back from Kafka, I'm able to decode it using the OTEL protobuf model. I added some code to the Logstash codec plugin to write the message it's trying to decode to a file, then sent the same message to both Kafka and Logstash.

The message read back from Kafka was:

b'\n8\n\x00\x124\n\x00\x120*\x06\n\x04test2\x19\n\rlog.file.name\x12\x08\n\x06in.logJ\x00R\x00YsB\x82@\x1a\x9f\x91\x17

whereas the message dumped from the logstash plugin was:

b'\n8\n\x00\x124\n\x00\x120*\x06\n\x04test2\x19\n\rlog.file.name\x12\x08\n\x06in.logJ\x00R\x00YsB\xef\xbf\xbd@\x1a\xef\xbf\xbd\xef\xbf\xbd\x17\n

The first blob decodes successfully with the protobuf model in Python; the second does not, and produces the same error message that I get from Logstash when it fails to decode the message. So it seems the payload may be getting mangled somehow by the input plugin.
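
For what it's worth, the bytes that differ between the two dumps are exactly the UTF-8 replacement character U+FFFD (EF BF BD), which is what raw binary turns into when forced through a lenient UTF-8 decode. A quick Python check on the affected bytes (\x82, \x9f, \x91 from the Kafka dump):

# Each of these bytes is invalid as a UTF-8 start byte; decoding
# leniently and re-encoding turns every one into EF BF BD.
good = b"\x82\x9f\x91"
mangled = good.decode("utf-8", errors="replace").encode("utf-8")
print(mangled)  # b'\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd'

This matches the Logstash dump byte for byte, which supports the theory that the input is decoding the binary payload as UTF-8 text before handing it to the codec.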