Open kadhamecha-conga opened 1 year ago
This issue was marked stale. It will be closed in 30 days without additional activity.
@kadhamecha-conga It seems that the problem is related to your GRPC endpoint where the collector is exported.
@serkan-ozal grpc endpoint is
```yaml
otlp/1:
  endpoint: signals-grpc.demo.congacloud.app:443
  tls:
    insecure: true
```
can you please help ?
@kadhamecha-conga
You are using port 443 but disabling secure communication with the `insecure: true` parameter. Can you try removing the `insecure` parameter?
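For reference, the corrected exporter block might look like this (a sketch based on the endpoint shown above; with no `tls` override, the gRPC exporter negotiates TLS, which matches a 443 endpoint):

```yaml
otlp/1:
  endpoint: signals-grpc.demo.congacloud.app:443
  # `tls: insecure: true` removed so the exporter uses TLS on port 443
```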
hi team,
we are facing an issue where traces are dropped during Lambda execution.
Error from CloudWatch:
```json
{
  "level": "error",
  "ts": 1692968962.3226757,
  "caller": "exporterhelper/queued_retry.go:296",
  "msg": "Exporting failed. Dropping data. Try enabling sending_queue to survive temporary failures.",
  "kind": "exporter",
  "data_type": "traces",
  "name": "otlp",
  "dropped_items": 4,
  "stacktrace": "go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).send\n\tgo.opentelemetry.io/collector@v0.68.0/exporter/exporterhelper/queued_retry.go:296\ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesExporter.func2\n\tgo.opentelemetry.io/collector@v0.68.0/exporter/exporterhelper/traces.go:116\ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces\n\tgo.opentelemetry.io/collector/consumer@v0.68.0/traces.go:36\ngo.opentelemetry.io/collector/service/internal/fanoutconsumer.(*tracesConsumer).ConsumeTraces\n\tgo.opentelemetry.io/collector@v0.68.0/service/internal/fanoutconsumer/traces.go:77\ngo.opentelemetry.io/collector/receiver/otlpreceiver/internal/trace.(*Receiver).Export\n\tgo.opentelemetry.io/collector/receiver/otlpreceiver@v0.68.0/internal/trace/otlp.go:54\ngo.opentelemetry.io/collector/pdata/ptrace/ptraceotlp.rawTracesServer.Export\n\tgo.opentelemetry.io/collector/pdata@v1.0.0-rc2/ptrace/ptraceotlp/grpc.go:72\ngo.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler.func1\n\tgo.opentelemetry.io/collector/pdata@v1.0.0-rc2/internal/data/protogen/collector/trace/v1/trace_service.pb.go:310\ngo.opentelemetry.io/collector/config/configgrpc.enhanceWithClientInformation.func1\n\tgo.opentelemetry.io/collector@v0.68.0/config/configgrpc/configgrpc.go:410\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1.1\n\tgoogle.golang.org/grpc@v1.51.0/server.go:1162\ngo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1\n\tgo.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc@v0.37.0/interceptor.go:349\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1.1\n\tgoogle.golang.org/grpc@v1.51.0/server.go:1165\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1\n\tgoogle.golang.org/grpc@v1.51.0/server.go:1167\ngo.opentelemetry.io/collector/pdata/internal/data/protogen/collector/trace/v1._TraceService_Export_Handler\n\tgo.opentelemetry.io/collector/pdata@v1.0.0-rc2/internal/data/protogen/collector/trace/v1/trace_service.pb.go:312\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\tgoogle.golang.org/grpc@v1.51.0/server.go:1340\ngoogle.golang.org/grpc.(*Server).handleStream\n\tgoogle.golang.org/grpc@v1.51.0/server.go:1713\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\tgoogle.golang.org/grpc@v1.51.0/server.go:965"
}
```
Due to this dropping, we see gaps in our traces.
Code details: package `OpenTelemetry.Instrumentation.AWSLambda`, version `1.1.0-beta.2`.
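For context, a minimal tracer setup with this package might look like the sketch below (assumes the `AddAWSLambdaConfigurations` extension from `OpenTelemetry.Instrumentation.AWSLambda` plus the `OpenTelemetry.Exporter.OpenTelemetryProtocol` package; the endpoint defaults to `localhost:4317`, where the collector Lambda extension listens):

```csharp
// Sketch: TracerProvider wiring for a .NET Lambda function.
// Assumes OpenTelemetry, OpenTelemetry.Exporter.OpenTelemetryProtocol,
// and OpenTelemetry.Instrumentation.AWSLambda NuGet packages are installed.
using OpenTelemetry;
using OpenTelemetry.Trace;

public static class Telemetry
{
    public static readonly TracerProvider TracerProvider =
        Sdk.CreateTracerProviderBuilder()
            .AddAWSLambdaConfigurations()  // Lambda-aware resource detection and span creation
            .AddOtlpExporter()             // gRPC OTLP, defaults to http://localhost:4317
            .Build();
}
```

The Lambda handler itself would then be wrapped so each invocation produces a span that flows through this provider to the in-process collector.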
OTel config:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: "grpc endpoint"
    retry_on_failure:
      initial_interval: 1s
      max_interval: 5s
    sending_queue:
      queue_size: 2000
    timeout: 5s

# enables output for traces to xray
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [logging, otlp]
```
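Since the error message says "Try enabling sending_queue to survive temporary failures", one thing to try is explicitly enabling both the retry and queue blocks on the OTLP exporter. A sketch (assuming collector v0.68.0, as shown in the stack trace; the endpoint placeholder is from the config above):

```yaml
exporters:
  otlp:
    endpoint: "grpc endpoint"
    retry_on_failure:
      enabled: true
      initial_interval: 1s
      max_interval: 5s
    sending_queue:
      enabled: true
      queue_size: 2000
    timeout: 5s
```

Note that in a Lambda environment the queue only buffers data for the lifetime of the execution environment, so export failures near the end of an invocation can still drop spans.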
Let me know if you need more info.
Thanks.