SigNoz / signoz

SigNoz is an open-source observability platform native to OpenTelemetry with logs, traces and metrics in a single application. An open-source alternative to DataDog, NewRelic, etc. 🔥 🖥 👉 Open source Application Performance Monitoring (APM) & Observability tool
https://signoz.io

CORS issue when trying to hit /v1/traces endpoint #3835

Open sunnykantkr opened 11 months ago

sunnykantkr commented 11 months ago

Bug description

I am getting a CORS issue when trying to hit the /v1/traces endpoint from an Angular 16 application.

I followed the instructions on this page and this page. I am getting the error: Access to resource at 'http://10.30.71.234:3301/v1/traces' from origin 'http://localhost:20200' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. I have enabled CORS through a wildcard entry in the OTel collector config under the receivers section.

image

Expected Behavior

The /v1/traces endpoint should be called successfully and the telemetry information should be visible on the SigNoz dashboard in the Traces section.

How to reproduce

  1. Install SigNoz
  2. Create an Angular 16 app
  3. Enable CORS in the OTel Receiver
  4. Instrument the Angular app with OpenTelemetry (see the sketch after this list)
  5. Update app.module.ts file
  6. Set the IP of the machine where SigNoz is installed.
  7. Update the same in app.module.ts
  8. Perform some API call
  9. In the browser dev tools, under the Network tab, observe the /v1/traces request failing with a CORS error
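
For steps 4–7, here is a minimal sketch of what the browser-side setup could look like with the raw OpenTelemetry web SDK. The packages are real OpenTelemetry JS packages, but the file name tracing.ts, the exact version/API and the collector address are assumptions for illustration, not the reporter's actual code (a later comment in this thread uses a ready-made Angular interceptor module instead). Note it points at the collector's OTLP HTTP port 4318, not the UI port 3301.

// tracing.ts (hypothetical) – minimal OpenTelemetry web setup for an Angular app
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { ZoneContextManager } from '@opentelemetry/context-zone';
import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { XMLHttpRequestInstrumentation } from '@opentelemetry/instrumentation-xml-http-request';

const provider = new WebTracerProvider();

// Send spans over OTLP/HTTP to the collector endpoint that has CORS enabled.
// The IP is the reporter's example machine; 4318 is the collector's OTLP HTTP receiver port.
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: 'http://10.30.71.234:4318/v1/traces' })
  )
);

// Zone.js-based context manager so async Angular work stays in the right trace context.
provider.register({ contextManager: new ZoneContextManager() });

// Auto-instrument XHR so each API call made by the app produces spans sent to /v1/traces.
registerInstrumentations({
  instrumentations: [new XMLHttpRequestInstrumentation()],
});

Importing this file once before the app bootstraps (for example from main.ts or app.module.ts) should be enough for POSTs to /v1/traces to start appearing in the Network tab.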

Version information

My OTel config has only this:

image

Thank you for your bug report – we love squashing them!

welcome[bot] commented 11 months ago

Thanks for opening this issue. A team member should give feedback soon. In the meantime, feel free to check out the contributing guidelines.

Gabsii commented 10 months ago

Any update on this? I get the same error with the following configuration:

# deploy/docker/clickhouse-setup/otel-collector-config.yaml
receivers:
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
        # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins:
            - "*"
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector

processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777

exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/?database=signoz_traces
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}

  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, tcplog/docker]
      processors: [batch]
      exporters: [clickhouselogsexporter]

mgiammarco commented 10 months ago

We have the same problem: no matter what we put under cors:, we always get the error.

samkit-jain commented 9 months ago

Any update on this? I am facing the same issue too. Tried with *, http://localhost:3000, localhost:3000, localhost*, ...

ankitnayan commented 9 months ago

We should look into this. Seems like many are facing this issue. @prashant-shahi @YounixM please take a look

samkit-jain commented 9 months ago

Update: I did some digging and was able to resolve the issue by allowing all headers, like this:

receivers:
  otlp:
    protocols:
      http:
        cors:
          allowed_origins:
            - https://foo.bar.com
            - http://localhost*
          allowed_headers:
            - '*'
        endpoint: 0.0.0.0:4318

I was using auth on my SigNoz 4318 endpoint and was passing the Authorization header in the collector request. I think that since that header was missing from the default allowed-headers list, the request was failing in the browser.
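
As an illustration of why the header list matters: a browser-side exporter that attaches an Authorization header makes the browser send a CORS preflight listing it in Access-Control-Request-Headers, so the collector must include it (or '*') in cors.allowed_headers. A hedged sketch; the URL matches the config above, but the credential value is a placeholder:

// Hypothetical exporter config – the token value is a placeholder
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const exporter = new OTLPTraceExporter({
  url: 'https://foo.bar.com/v1/traces',              // authenticated, CORS-enabled OTLP HTTP endpoint
  headers: { Authorization: 'Basic <credentials>' }, // custom header -> browser sends a preflight,
                                                     // so cors.allowed_headers must cover it
});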

makeavish commented 9 months ago

@mgiammarco @Gabsii Can you please try and let us know if it worked for you?

rlaveycal commented 9 months ago

I'm using ingress-nginx and had to set the CORS allowed headers to DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,Traceparent,Tracestate for its CORS handling to work (adding the Trace* headers to the default list).

Gowtham029 commented 9 months ago

I fixed it by adding secure: false in proxy.conf.json.

app.module.ts

OpenTelemetryInterceptorModule.forRoot({
  commonConfig: {
    console: true, // Display trace on console (only in DEV env)
    production: false, // Send traces with BatchSpanProcessor (true) or SimpleSpanProcessor (false)
    serviceName: 'Angular Sample App', // Service name sent in the trace
    probabilitySampler: '1',
  },
  otelcolConfig: {
    url: 'http://127.0.0.1:4318/v1/traces', // URL of the OpenTelemetry collector
  },
})

proxy.conf.json


{
  "/signoz-api/**": {
    "target": "http://localhost:4318",
    "secure": false
  }
}

krakz999 commented 7 months ago

Make sure you are hitting /v1/traces on the CORS-enabled port of SigNoz (port 4318, not 3301).
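
In terms of the exporter URL from the original report, that difference looks roughly like this (a sketch reusing the reporter's example IP; 3301 is the SigNoz UI port, 4318 the collector's OTLP HTTP receiver from the config above):

import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Wrong: 3301 is the SigNoz UI port and does not serve OTLP or the CORS headers configured above.
// const exporter = new OTLPTraceExporter({ url: 'http://10.30.71.234:3301/v1/traces' });

// Right: 4318 is the collector's OTLP HTTP receiver, where cors: is configured.
const exporter = new OTLPTraceExporter({ url: 'http://10.30.71.234:4318/v1/traces' });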