open-telemetry / opentelemetry-go

OpenTelemetry Go API and SDK
https://opentelemetry.io/docs/languages/go
Apache License 2.0
5.19k stars 1.04k forks

Use sync.Pools in otlplog transforms #5196

Open MrAlias opened 5 months ago

MrAlias commented 5 months ago

Could we use sync.Pools to reduce the amount of heap allocation? E.g. ResourceLogs could return a function that puts the pooled maps and slices back into their pools.

If you think that it is possible and it is a good idea then this should be tracked as a separate issue.

_Originally posted by @pellared in https://github.com/open-telemetry/opentelemetry-go/pull/5191#discussion_r1560549956_
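The "return a function" idea above can be sketched as follows. This is a minimal illustration, not the actual otlplog transform code: `attrsPool`, `getAttrs`, and the map type are hypothetical names chosen for the example.

```go
package main

import (
	"fmt"
	"sync"
)

// attrsPool holds maps that would otherwise be heap-allocated on every
// transform call. New is invoked only when the pool is empty.
var attrsPool = sync.Pool{
	New: func() any { return make(map[string]int, 16) },
}

// getAttrs returns a pooled map together with a release function that
// clears the map and puts it back, mirroring the suggestion that
// ResourceLogs could hand callers a "return to pool" function.
func getAttrs() (map[string]int, func()) {
	m := attrsPool.Get().(map[string]int)
	return m, func() {
		clear(m) // reset before reuse (Go 1.21+); buckets are retained
		attrsPool.Put(m)
	}
}

func main() {
	m, release := getAttrs()
	m["severity"] = 9
	fmt.Println(len(m))
	release()
}
```

Clearing the map before `Put` keeps its allocated buckets, so a later `Get` reuses the capacity without reallocating.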

hiroyaonoe commented 5 months ago

I want to work on this! Should I update ScopeLogs as well? https://github.com/open-telemetry/opentelemetry-go/blob/fe3de7059e19a0e88c7e8b342ed345e50df94aa3/exporters/otlp/otlplog/otlploghttp/internal/transform/log.go#L56

MrAlias commented 5 months ago

> I want to work on this! Should I update ScopeLogs as well?
>
> https://github.com/open-telemetry/opentelemetry-go/blob/fe3de7059e19a0e88c7e8b342ed345e50df94aa3/exporters/otlp/otlplog/otlploghttp/internal/transform/log.go#L56

I would start by only pooling the maps.
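Pooling only the maps might look like the sketch below. It assumes the transform groups records by a scope key, roughly as ScopeLogs does; `record`, `groupByScope`, and the field names are placeholders for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// record stands in for a log record with a scope identifier.
type record struct{ scope, body string }

// groupPool pools only the grouping map; the value slices are still
// allocated per call, which keeps the change small and safe.
var groupPool = sync.Pool{
	New: func() any { return make(map[string][]string, 8) },
}

// groupByScope builds the scope -> bodies index using a pooled map.
func groupByScope(recs []record) map[string][]string {
	g := groupPool.Get().(map[string][]string)
	for _, r := range recs {
		g[r.scope] = append(g[r.scope], r.body)
	}
	return g
}

// putGroup clears the map (keeping its buckets) and pools it again.
// Callers must not retain g after calling putGroup.
func putGroup(g map[string][]string) {
	clear(g)
	groupPool.Put(g)
}

func main() {
	g := groupByScope([]record{{"a", "x"}, {"a", "y"}, {"b", "z"}})
	fmt.Println(len(g), len(g["a"]))
	putGroup(g)
}
```

Starting with the maps avoids the trickier question of pooling slices, whose backing arrays can be retained by the serialized output.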

MrAlias commented 2 months ago

This needs to be applied to the template so that it covers both the HTTP and gRPC exporters: https://github.com/open-telemetry/opentelemetry-go/tree/main/internal/shared/otlp/otlplog/transform

kenanfarukcakir commented 1 month ago

Hi @MrAlias, I am working on an application that collects logs from a Kubernetes node, and we plan to export these logs to the OpenTelemetry Collector using go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp. When the log frequency increases, I've observed a dramatic increase in my application's CPU usage.

I captured a CPU profile, and it seems related to this issue.

[CPU profile screenshots, 2024-08-15]

It looks like excessive allocation is keeping the GC under pressure. I'd like to work on this and would appreciate any direction.
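One way to quantify the allocation difference before and after pooling is `testing.Benchmark` with `ReportAllocs`, which needs no `_test.go` file. This sketch compares a hypothetical per-batch map allocation against a pooled variant; both functions are stand-ins, not the real transform hot path.

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

var pool = sync.Pool{New: func() any { return make(map[string]int, 64) }}

// fresh models the current behavior: a new map per batch.
func fresh() {
	m := make(map[string]int, 64)
	m["k"] = 1
	_ = m
}

// pooled models the proposed behavior: reuse a cleared map from a pool.
func pooled() {
	m := pool.Get().(map[string]int)
	m["k"] = 1
	clear(m)
	pool.Put(m)
}

func main() {
	f := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			fresh()
		}
	})
	p := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			pooled()
		}
	})
	fmt.Println("fresh allocs/op:", f.AllocsPerOp())
	fmt.Println("pooled allocs/op:", p.AllocsPerOp())
}
```

On typical runs the pooled path reports fewer allocations per operation, which is exactly the GC pressure the profile above points at.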