Closed smallc2009 closed 3 weeks ago
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself.
Thanks, I will check it ASAP.
8.10
The elasticsearch exporter does not support 8.x versions for now.
@JaredTan95 I am looking for issues as a first-time contributor to OTel. Could I take this up and try to add support for ES 8.x? If yes, could you please help me get started. Any pointers will be helpful. Thanks.
@JaredTan95 Could you clarify which versions are supported? It would be good to document this if possible.
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers
. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
/label waiting-for-author
Not being able to send to Elasticsearch 8.x means this exporter is unusable for many users. I'll happily test it if needed but I don't have time to prepare a proper PR, sorry.
I do not experience any issues sending to Elastic Cloud 8.x using the elasticsearch exporter.
Setup:
ocb-config-main.yaml:

```yaml
dist:
  module: github.com/open-telemetry/opentelemetry-collector # the module name for the new distribution, following Go mod conventions. Optional, but recommended.
  name: collector # the binary name. Optional.
  description: "Custom OpenTelemetry Collector distribution" # a long name for the application. Optional.
  otelcol_version: "0.96.0" # the OpenTelemetry Collector version to use as base for the distribution. Optional.
  output_path: ./build_main/ # the path to write the output (sources and binary). Optional.
  version: "1.0.0" # the version for your custom OpenTelemetry Collector. Optional.
  # go: "/usr/bin/go" # which Go binary to use to compile the generated sources. Optional.
  # debug_compilation: false # enabling this causes the builder to keep the debug symbols in the resulting binary. Optional.

exporters:
  - gomod: "github.com/open-telemetry/opentelemetry-collector-contrib/exporter/elasticsearchexporter v0.96.0" # the Go module for the component. Required.

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.96.0
```
Command to build the collector:

```shell
ocb --config=ocb-config-main.yaml --name="my-otelcol"
```
otelcol-main.yaml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
      http:
        endpoint: localhost:4318

exporters:
  elasticsearch:
    endpoints: [ "https://**redacted**.cloud.es.io" ]
    logs_index: foo
    api_key: **redacted**
    retry:
      enabled: true
      max_requests: 10000

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [elasticsearch]
```
Command to run the collector:

```shell
./build_main/my-otelcol --config otelcol-main.yaml
```
In another terminal, run the command to send sample logs:

```shell
telemetrygen logs --otlp-endpoint=localhost:4317 --otlp-insecure --logs 100
```
In Kibana, there are 100 logs indexed:
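If documents don't show up in the index, one way to see what the pipeline actually emits is to add the debug exporter alongside elasticsearch. (Assumption: the debug exporter's gomod entry would also have to be added to the ocb config above for this to build; it is not included there.)

```yaml
exporters:
  elasticsearch:
    # ... same elasticsearch settings as above ...
  debug:
    verbosity: detailed # print every exported log record to stdout

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: []
      exporters: [elasticsearch, debug]
```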
I'm getting errors that it keeps dialing 10.46.48.34:18422 and timing out:

```
2023-12-07T12:42:56.582Z error elasticsearchexporter@v0.90.1/elasticsearch_bulk.go:150 Bulk indexer error: flush: dial tcp 10.46.48.237:18726: i/o timeout {"kind": "exporter", "data_type": "logs", "name": "elasticsearch/log"}
```
The error in this issue appears to be a connectivity issue rather than a bug in the elasticsearch exporter.
The elasticsearch exporter does not support 8.x versions for now.
@JaredTan95 do you mind clarifying what is not supported?
In our tests (to bridge AWS CloudWatch logs to self-hosted Elastic v8) we had to build the collector with go-elasticsearch v8 (basically waiting for this PR to be merged). PS: sorry, I don't have the actual error returned, but I remember it was related to not being able to send documents to Elastic from go-elasticsearch v7 against our v8 API endpoint.
It would be helpful to get the actual error log in your case since I am not able to reproduce the issue.
On a separate note: since go-elasticsearch v7 supposedly works against both v7 and v8 servers, and upgrading to go-elasticsearch v8 would break support for v7 (see issue), I don't see https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/30262 getting merged soon. However, if you could show some errors from your use case, that could help make a case for adding support specifically for v8 (via a feature flag, maybe?).
Error was context deadline exceeded (exporter 0.94), solved by compiling with the go module for v8.
Thanks, the screenshot is very helpful.
Here's my hypothesis: as you can see from the stack trace, there's a timeout_sender.go:49 frame. There's a default 5s timeout for request export, while the exporter code passes a default 90s timeout into the go-elasticsearch bulk indexer. This means that, by default, if Elasticsearch takes more than 5s to respond, the context reaches its deadline before the go-elasticsearch HTTP client gives up, hence the "context deadline exceeded" error. It is a combination of bad hardcoded defaults and a slow Elasticsearch.
As to why upgrading to go-elasticsearch v8 solves your issue, it can be either some changes within go-elasticsearch v8, or how go-elasticsearch is used. Do you mind sharing the exact code changes you've made to upgrade to go-elasticsearch v8 from v7, as well as the collector configuration (redact sensitive information please)? It must be more than just a go mod replace since there is code that references v7 explicitly.
It could also be that certain errors are retried in v7 but not v8 such that the retries in v7 use >5s and cause context deadline to be exceeded before bulk indexer can finish flushing.
I doubt our Elasticsearch API takes more than 5s, but I lack the data from when the tests were conducted. I'll ask my colleague to chip in and provide more context as soon as possible.
This issue has been closed as inactive because it has been stale for 120 days with no activity.
Component(s)
exporter/elasticsearch
What happened?
Description
My environment is hosted on an EKS 1.26.0 cluster, and Elasticsearch is 8.10 on Elastic Cloud. I'm using the elasticsearch exporter to send logs to Elastic Cloud. I'm getting errors that it keeps dialing 10.46.48.34:18422 and timing out. I don't know where this IP comes from.
Below is the error message.
Steps to Reproduce
Expected Result
elasticsearchexport can send logs to the elastic cloud.
Actual Result
Collector version
0.89
Environment information
Environment
EKS: 1.26
OpenTelemetry Collector configuration
Log output
Additional context
No response