googleapis / google-cloud-go

Google Cloud Client Libraries for Go.
https://cloud.google.com/go/docs/reference
Apache License 2.0

spanner: traces created for spanner session ping in opentelemetry sample code #10404

Open nareshz opened 4 months ago

nareshz commented 4 months ago

I was trying to run a Go sample after setting up an OpenTelemetry configuration and found traces with a google.spanner.v1.Spanner/ExecuteSql root span that did not come from my sample code. It turns out these are generated by the Spanner session ping method, which verifies whether a session is still alive. Ideally these traces shouldn't be visible in Trace explorer, and if the HealthCheckInterval defined in ClientConfig is very low, the session ping can generate a large number of them.

It seems that the ping method uses the OpenCensus trace library to start a span with the NeverSample option, while the recommendation is to use https://pkg.go.dev/cloud.google.com/go/internal/trace instead:

```go
// Start parent span that doesn't record.
_, span := octrace.StartSpan(ctx, "cloud.google.com/go/spanner.ping", octrace.WithSampler(octrace.NeverSample()))
defer span.End()
```
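
For reference, a minimal sketch of what routing the ping through the recommended helper could look like, assuming the StartSpan/EndSpan pattern that package exposes (the `ping` function here is illustrative, not the actual session code):

```go
import (
	"context"

	itrace "cloud.google.com/go/internal/trace"
)

// Illustrative sketch only: the shared helper honors the OpenCensus vs.
// OpenTelemetry selection, unlike the direct OpenCensus call above.
// Whether the ping should emit a span at all is the open question here.
func ping(ctx context.Context) (err error) {
	ctx = itrace.StartSpan(ctx, "cloud.google.com/go/spanner.ping")
	defer func() { itrace.EndSpan(ctx, err) }()
	// ... issue the keep-alive request ...
	return err
}
```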

Code

Reproducer

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"cloud.google.com/go/spanner"
	mexporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric"
	texporter "github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/trace"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.17.0"
)

// Player is a struct that represents a player in a game.
type Player struct {
	FirstName string `spanner:"first_name"`
	LastName  string `spanner:"last_name"`
	UUID      string `spanner:"uuid"`
	Email     string `spanner:"email"`
}

func newPlayers(ctx context.Context, client *spanner.Client, players ...*Player) error {
	var ml []*spanner.Mutation
	for _, player := range players {
		m, err := spanner.InsertStruct("Players", player)
		if err != nil {
			return err
		}
		ml = append(ml, m)
	}
	_, err := client.Apply(ctx, ml)
	return err
}

func newResource() (*resource.Resource, error) {
	return resource.Merge(resource.Default(),
		resource.NewWithAttributes(semconv.SchemaURL,
			semconv.ServiceName("my-service"),
			semconv.ServiceVersion("0.1.0"),
		))
}

func main() {
	res, _ := newResource()

	const (
		ProjectID, InstanceID, DatabaseID = "proj", "inst", "db"
		Database                          = "projects/" + ProjectID + "/instances/" + InstanceID + "/databases/" + DatabaseID
	)

	// Set environment variable GOOGLE_API_GO_EXPERIMENTAL_TELEMETRY_PLATFORM_TRACING to "opentelemetry"
	os.Setenv("GOOGLE_API_GO_EXPERIMENTAL_TELEMETRY_PLATFORM_TRACING", "opentelemetry")

	// Setup trace exporter
	traceExporter, err := texporter.New(
		texporter.WithProjectID(ProjectID))
	if err != nil {
		log.Fatalf("unable to set up tracing: %v", err)
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithResource(res),
		sdktrace.WithBatcher(traceExporter),
		sdktrace.WithSampler(sdktrace.AlwaysSample()),
	)
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Register the TraceContext propagator for propagating trace context.
	otel.SetTextMapPropagator(propagation.TraceContext{})

	// Setup metrics exporter.
	metricExporter, err := mexporter.New(
		mexporter.WithProjectID(ProjectID))
	if err != nil {
		log.Fatalf("unable to set up metrics: %v", err)
	}
	mp := metric.NewMeterProvider(
		metric.WithResource(res),
		metric.WithReader(metric.NewPeriodicReader(metricExporter)),
	)

	ctx := context.Background()
	client, _ := spanner.NewClientWithConfig(ctx, Database, spanner.ClientConfig{
		SessionPoolConfig: spanner.SessionPoolConfig{
			MinOpened:           1,
			MaxOpened:           4,
			HealthCheckInterval: 3 * time.Second,
		},
		// Set meter provider locally
		OpenTelemetryMeterProvider: mp,
	})
	defer client.Close()

	playerCount := 1
	for {
		// Sleep for 3 seconds.
		time.Sleep(3000 * time.Millisecond)

		tracer := tp.Tracer("example.com/trace")
		ctx, span := tracer.Start(ctx, "create-player-go")
		players := []*Player{
			{FirstName: "Kakashi", LastName: "Sensei", Email: "kakshi.sensei@example.org", UUID: "f1578551-eb4b-4ecd-aee2-9f97c37e164e"},
		}
		up := fmt.Sprintf("%d-%d.", playerCount, time.Now().Unix())
		for _, player := range players {
			player.Email = up + player.Email
		}
		if err := newPlayers(ctx, client, players...); err != nil {
			log.Printf("Creating newPlayers err: %v", err)
		}
		playerCount++
		span.End()
	}
}
```
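
As a stopgap until the library is fixed, the AlwaysSample call in the reproducer above can be swapped for a filtering sampler that drops the ping's root spans by name. A minimal sketch (dropSpannerPing is a hypothetical helper; the span name is the one observed in Trace explorer):

```go
import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

// dropSpannerPing drops root ExecuteSql spans (the session ping surfaces as a
// root google.spanner.v1.Spanner/ExecuteSql span) and delegates everything
// else to the wrapped sampler.
type dropSpannerPing struct {
	next sdktrace.Sampler
}

func (s dropSpannerPing) ShouldSample(p sdktrace.SamplingParameters) sdktrace.SamplingResult {
	parent := trace.SpanContextFromContext(p.ParentContext)
	if !parent.IsValid() && p.Name == "google.spanner.v1.Spanner/ExecuteSql" {
		return sdktrace.SamplingResult{Decision: sdktrace.Drop}
	}
	return s.next.ShouldSample(p)
}

func (s dropSpannerPing) Description() string { return "dropSpannerPing" }
```

This would be wired in via sdktrace.WithSampler(dropSpannerPing{next: sdktrace.AlwaysSample()}) in place of the AlwaysSample call above. Note it is blunt: it also drops any other ExecuteSql span that happens to start a trace.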

Expected behavior

Traces of the Spanner session ping shouldn't be visible in Trace explorer.

Actual behavior

Traces of the Spanner session ping are visible in Trace explorer.

Screenshots

[screenshot: Spanner session ping traces shown in Trace explorer]

harshachinta commented 4 months ago

Will work on this next week.

govargo commented 1 week ago

I'm facing the same issue: noisy trace spans from the Spanner client ping.

After I enabled end-to-end tracing via the Go client's EnableEndToEndTracing option, I observed many noisy trace spans.
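
I enable it like this (a minimal sketch; the database name is a placeholder):

```go
ctx := context.Background()
// EnableEndToEndTracing is the client option named above; it lets Spanner
// backend spans join the client-side trace.
client, err := spanner.NewClientWithConfig(ctx, "projects/p/instances/i/databases/d",
	spanner.ClientConfig{EnableEndToEndTracing: true})
if err != nil {
	log.Fatalf("spanner client: %v", err)
}
defer client.Close()
```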

[Screenshot 2024-10-23 14:22:46: noisy trace spans]

Currently I use the OpenTelemetry Collector with tail sampling enabled to filter out the noisy ping traces; however, tail sampling doesn't work with EnableEndToEndTracing. It looks like the trace is being forced to appear.

I hope the tracing for each ping will be fixed. Thank you.

harshachinta commented 1 week ago

@govargo Will take a look next week