grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Loki Reports to stats.grafana.org despite `analytics.reporting_enabled: false` #10648

Open · tomswartz07 opened this issue 1 year ago

tomswartz07 commented 1 year ago

Describe the bug
Loki attempts to report usage analytics to stats.grafana.org despite the reporting_enabled value being set to false.

Additionally, if analytics.usage_stats_url is set to an empty string (""), Loki does not honor that value either and still reports to stats.grafana.org.

To Reproduce
Steps to reproduce the behavior:

  1. Start Loki (SHA a63be545e811, Docker image latest tag).
  2. Ensure that stats.grafana.org is blocked by PiHole/DNS service.
  3. Wait several hours.
  4. Observe that reporter.go emits hundreds of errors indicating that the connection is refused (rightfully so, in my case)
level=info ts=2023-09-19T16:01:39.855333051Z caller=reporter.go:303 msg="failed to send usage report" retries=0 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-09-19T16:01:40.859555584Z caller=reporter.go:303 msg="failed to send usage report" retries=1 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-09-19T16:01:43.863365631Z caller=reporter.go:303 msg="failed to send usage report" retries=2 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-09-19T16:01:50.906529945Z caller=reporter.go:303 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
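
(The 0.0.0.0 in those errors is the null address a Pi-hole returns for blocked domains.) For anyone who wants to reproduce the blocked-endpoint condition without a Pi-hole, here is a minimal Compose sketch - hypothetical, not my actual deployment - that pins the stats hostname to an unroutable address and produces the same "connection refused" errors:

# Hypothetical docker-compose fragment; service name and image tag are illustrative.
services:
  loki:
    image: grafana/loki:latest
    extra_hosts:
      - "stats.grafana.org:0.0.0.0"   # resolve the stats host to 0.0.0.0, as a Pi-hole block does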

The line indicated in the error, reporter.go:303, appears to refer to a retry backoff that shouldn't be triggered in the first place if reporting is disabled.

In the span of 30 minutes, this generated 228 instances of this error and 380 failed DNS lookups. [screenshot]

Expected behavior
Loki should not attempt to report stats if analytics.reporting_enabled is set to false.

Environment:

Screenshots, Promtail config, or terminal output

Amended Loki config file: this file is mounted at /etc/loki/local-config.yml as a read-only file within the Docker container.

auth_enabled: false
analytics:
  reporting_enabled: false
  usage_stats_url: ""

server:
  http_listen_address: IP_ADDRESS
  http_listen_port: 9080
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://IP_ADDRESS:9093

Deployed in Docker, with the following ENV Vars:

            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "TZ=America/New_York"
            ],

I can confirm that I have both restarted and created a new container after applying the expected settings, but Loki does not appear to honor these parameters.
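
For completeness, here is roughly how one can double-check which configuration the container actually loaded - a sketch, not my exact Compose file; the -print-config-stderr flag and the GET /config endpoint are documented Loki features:

# Hypothetical Compose fragment: dump the effective config at startup and
# confirm that analytics.reporting_enabled really is false.
services:
  loki:
    image: grafana/loki:latest
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yml:ro
    command:
      - -config.file=/etc/loki/local-config.yml
      - -print-config-stderr        # prints the merged configuration to stderr on startup
    # The running instance also serves the effective config at GET /config on the HTTP port.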

To be completely transparent: I've stopped using Loki until this issue is resolved. It represents a breach of trust, and considering that the purpose of this tool is to handle sensitive log files, I trust you understand the implications.

rwjack commented 1 year ago

Did you also disable stats reporting for the Grafana container?

I disabled both and the requests stopped.

tomswartz07 commented 12 months ago

Hi @rwjack - I can confirm that both are disabled. Further, I only observe the stats.grafana.org DNS lookups when the Loki container is running.

I appreciate the insight!

butschi84 commented 12 months ago

I have the same problem as described (loki-distributed Helm chart 0.75, Loki version 2.9.1).

structuredConfig

analytics:
  reporting_enabled: false

Despite the setting, I see in the compactor logs (for example) that it still tries to upload usage statistics (which fails because of our network setup). Interestingly, when I use extraArgs instead, the problem does not occur:

example

compactor:
  extraArgs:
    - -reporting.enabled=0

tomswartz07 commented 12 months ago

Hi @butschi84 thanks for the info. I'm trying the container with the config you've suggested.

From my experience, it takes a few hours until the DNS lookups for stats.grafana.org begin, so it somewhat tracks with the compactor settings (as the compactor won't start until after several hours of log collection).

I'll report back with my findings. 🤞🏻

tomswartz07 commented 12 months ago

Still no luck: with the compactor settings also defining reporting as 'off', Loki still appears to disregard the option after about 5 hours.

compactor:
  extraArgs:
    - -reporting.enabled=0

[screenshot: 2023-11-07_14-32]

You can see the notable jump in blocked DNS requests for these attempts at reporting. Pretty sneaky that there's an element of delay to this issue.

[screenshot: 2023-11-07_14-33]

That said, it does seem to be associated with the compactor, given the proximity of the messages in the logs (truncated slightly here for clarity and privacy):

level=info ts=2023-11-07T18:25:02.639753389Z caller=table_manager.go:271 index-store=boltdb-shipper-2020-10-24 msg="query readiness setup completed" duration=2.54µs distinct_users_len=0 distinct_users=
level=info ts=2023-11-07T18:25:02.639768489Z caller=table_manager.go:244 index-store=boltdb-shipper-2020-10-24 msg="cleaning tables cache"
level=info ts=2023-11-07T18:25:02.639777569Z caller=table_manager.go:247 index-store=boltdb-shipper-2020-10-24 msg="cleaning up expired table index_19668"
level=info ts=2023-11-07T18:25:02.639786939Z caller=table_manager.go:247 index-store=boltdb-shipper-2020-10-24 msg="cleaning up expired table index_19667"
level=info ts=2023-11-07T18:25:02.639799879Z caller=table_manager.go:247 index-store=boltdb-shipper-2020-10-24 msg="cleaning up expired table index_19666"
level=info ts=2023-11-07T18:25:02.639822159Z caller=table_manager.go:171 index-store=boltdb-shipper-2020-10-24 msg="handing over indexes to shipper"
level=info ts=2023-11-07T18:25:02.639884129Z caller=table.go:318 msg="handing over indexes to shipper index_19668"
level=info ts=2023-11-07T18:25:02.639902429Z caller=table.go:334 msg="finished handing over table index_19668"
level=info ts=2023-11-07T18:25:02.647387341Z caller=checkpoint.go:611 msg="starting checkpoint"
level=info ts=2023-11-07T18:25:02.670913855Z caller=checkpoint.go:336 msg="attempting checkpoint for" dir=/loki/wal/checkpoint.000047
ts=2023-11-07T18:25:07.648736838Z caller=spanlogger.go:86 level=info msg="building table names cache"
ts=2023-11-07T18:25:07.648939548Z caller=spanlogger.go:86 level=info msg="table names cache built" duration=122.55µs
level=info ts=2023-11-07T18:25:07.648971508Z caller=compactor.go:683 msg="compacting table" table-name=index_19668
ts=2023-11-07T18:25:07.649119718Z caller=spanlogger.go:86 level=info msg="building table cache"
ts=2023-11-07T18:25:07.649183468Z caller=spanlogger.go:86 level=info msg="table cache built" duration=52.58µs
level=info ts=2023-11-07T18:25:07.649197298Z caller=table.go:132 table-name=index_19668 msg="listed files" count=2
level=info ts=2023-11-07T18:25:07.649223238Z caller=table_compactor.go:325 table-name=index_19668 msg="using compactor-1699380307.gz as seed file"
level=info ts=2023-11-07T18:25:07.651047156Z caller=util.go:94 table-name=index_19668 file-name=compactor-1699380307.gz size="176 kB" msg="downloaded file" total_time=1.808508ms
level=info ts=2023-11-07T18:25:07.651460416Z caller=util.go:94 table-name=index_19668 file-name=grafana-loki1-1699367102638199182-1699380000.gz size="33 kB" msg="downloaded file" total_time=241.56µs
level=info ts=2023-11-07T18:25:07.799107123Z caller=index_set.go:269 table-name=index_19668 msg="removing source db files from storage" count=2
level=info ts=2023-11-07T18:25:07.799387462Z caller=compactor.go:688 msg="finished compacting table" table-name=index_19668
level=info ts=2023-11-07T18:25:31.385835535Z caller=flush.go:167 msg="flushing stream" 
level=info ts=2023-11-07T18:25:42.720993931Z caller=reporter.go:305 msg="failed to send usage report" retries=0 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-11-07T18:25:44.475193008Z caller=reporter.go:305 msg="failed to send usage report" retries=1 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-11-07T18:25:48.176588888Z caller=reporter.go:305 msg="failed to send usage report" retries=2 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2023-11-07T18:25:52.913777967Z caller=reporter.go:305 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"

Following this block of logs, the err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused" message is emitted roughly 3 times per second, until I stop the container/service.

butschi84 commented 12 months ago

@tomswartz07 I just checked: since I made the change with extraArgs yesterday, I no longer see any "failed to send usage report" messages in my Loki logs. I'll keep an eye on it, though - there might be a very long delay before it starts sending. Right now I have it defined in extraArgs (for each and every Loki component) as well as in the structuredConfig.
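
For reference, a hedged sketch of what that combination looks like in loki-distributed Helm values - exact key paths depend on the chart version, and the component list here is illustrative:

# Belt and suspenders: the analytics block via structuredConfig plus the CLI flag per component.
loki:
  structuredConfig:
    analytics:
      reporting_enabled: false

compactor:
  extraArgs:
    - -reporting.enabled=0
ingester:
  extraArgs:
    - -reporting.enabled=0
querier:
  extraArgs:
    - -reporting.enabled=0
# ...repeated for the remaining components (distributor, queryFrontend, and so on)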

tomswartz07 commented 9 months ago

I can confirm that this issue is still present in Loki v2.9.4, which was released yesterday.

loki, version 2.9.4 (branch: HEAD, revision: f599ebc535)                         
  build user:       root@5d9969758d88                                            
  build date:       2024-01-24T16:02:36Z                                         
  go version:       go1.21.3                                                     
  platform:         linux/amd64                                                  
  tags:             netgo         

The unwanted HTTP requests to the usage-reporting endpoint appear ~4 hours after the container starts collecting logs.

level=info ts=2024-01-25T18:47:19.068622506Z caller=checkpoint.go:336 msg="attempting checkpoint for" dir=/loki/wal/checkpoint.000056
level=info ts=2024-01-25T18:47:19.283342241Z caller=index_set.go:107 msg="finished uploading table index_19747"
level=info ts=2024-01-25T18:47:19.283386911Z caller=index_set.go:185 msg="cleaning up unwanted indexes from table index_19747"
level=info ts=2024-01-25T18:47:23.593267535Z caller=reporter.go:305 msg="failed to send usage report" retries=4 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-01-25T18:47:23.593328365Z caller=reporter.go:281 msg="failed to report usage" err="5 errors: Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"

tomswartz07 commented 9 months ago

For reference, my config remains largely unchanged since opening the issue, but I'm copying it here for continuity:

auth_enabled: false
analytics:
  reporting_enabled: false
  usage_stats_url: ""

compactor:
  extraArgs:
    - -reporting.enabled=0

server:
  http_listen_address: IP_ADDRESS
  http_listen_port: 9080
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://IP_ADDRESS:9093

tomswartz07 commented 8 months ago

I can confirm the issue is also still present in Loki v2.9.5, which was released earlier today.

Same pattern: the unwanted HTTP requests to the usage-reporting endpoint appear ~4 hours after the container starts collecting logs.

level=info ts=2024-02-29T17:53:57.615943375Z caller=reporter.go:305 msg="failed to send usage report" retries=4 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-02-29T17:53:57.616011525Z caller=reporter.go:281 msg="failed to report usage" err="5 errors: Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-02-29T17:54:34.760006318Z caller=reporter.go:305 msg="failed to send usage report" retries=0 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-02-29T17:54:36.700532748Z caller=reporter.go:305 msg="failed to send usage report" retries=1 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-02-29T17:54:40.018499391Z caller=reporter.go:305 msg="failed to send usage report" retries=2 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-02-29T17:54:47.672090161Z caller=reporter.go:305 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"

tomswartz07 commented 7 months ago

I can again confirm that the issue remains present in the latest release of Loki:

/ $ loki --version
loki, version release-2.8.x-aa89d81 (branch: release-2.8.x, revision: aa89d81)
  build user:       root@buildkitsandbox
  build date:       2024-03-22T09:19:36Z
  go version:       go1.20.12
  platform:         linux/amd64

Same pattern as before: the 'usage report' attempts begin ~4 hours after the container starts and appear to be tied to the compactor functions.

I'll be more than happy to run with additional debug logging or whatever else is needed for further diagnosis.

tomswartz07 commented 6 months ago

I can, again, confirm that the issue remains unsolved in the Docker version of Loki v3.0.0:

docker exec -it grafana-loki1 /bin/sh
/ $ loki --version
loki, version 3.0.0 (branch: HEAD, revision: b4f7181c7a)
  build user:       root@58af37682501
  build date:       2024-04-08T19:20:58Z
  go version:       go1.21.9
  platform:         linux/amd64
  tags:             netgo
level=info ts=2024-04-09T16:04:52.618430342Z caller=table_manager.go:273 index-store=tsdb-2020-10-24 msg="query readiness setup completed" duration=2.13µs distinct_users_len=0 distinct_users=
level=info ts=2024-04-09T16:04:52.618444392Z caller=table_manager.go:246 index-store=tsdb-2020-10-24 msg="cleaning tables cache"
level=info ts=2024-04-09T16:04:52.618453532Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19818"
level=info ts=2024-04-09T16:04:52.618463532Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19817"
level=info ts=2024-04-09T16:04:52.618473042Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19816"
level=info ts=2024-04-09T16:04:52.618482132Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19815"
level=info ts=2024-04-09T16:04:52.618491152Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19822"
level=info ts=2024-04-09T16:04:52.618502592Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19821"
level=info ts=2024-04-09T16:04:52.618517162Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19820"
level=info ts=2024-04-09T16:04:52.618529312Z caller=table_manager.go:249 index-store=tsdb-2020-10-24 msg="cleaning up expired table index_19819"
ts=2024-04-09T16:04:57.699651899Z caller=spanlogger.go:109 level=info msg="building table names cache"
ts=2024-04-09T16:04:57.699834079Z caller=spanlogger.go:109 level=info msg="table names cache built" duration=112.69µs
level=info ts=2024-04-09T16:04:57.699865379Z caller=compactor.go:765 msg="compacting table" table-name=index_19822
ts=2024-04-09T16:04:57.700038049Z caller=spanlogger.go:109 level=info msg="building table cache"
ts=2024-04-09T16:04:57.700124579Z caller=spanlogger.go:109 level=info msg="table cache built" duration=75.25µs
level=info ts=2024-04-09T16:04:57.700138939Z caller=table.go:132 table-name=index_19822 msg="listed files" count=1
level=info ts=2024-04-09T16:04:57.700452179Z caller=util.go:92 table-name=index_19822 file-name=1712677552-grafana-loki1-1712664292559880125.tsdb.gz size="2.9 kB" msg="downloaded file" total_time=219.99µs
level=info ts=2024-04-09T16:04:57.701488298Z caller=util.go:92 table-name=index_19822 user-id=fake user-id=fake file-name=1712678097872158175-compactor-1712663565121-1712677519344-6542849b.tsdb.gz size="66 kB" msg="downloaded file" total_time=798.219µs
level=info ts=2024-04-09T16:04:57.999335504Z caller=index_set.go:269 table-name=index_19822 user-id=fake user-id=fake msg="removing source db files from storage" count=1
level=info ts=2024-04-09T16:04:57.999625044Z caller=index_set.go:269 table-name=index_19822 msg="removing source db files from storage" count=1
level=info ts=2024-04-09T16:04:57.999937094Z caller=compactor.go:770 msg="finished compacting table" table-name=index_19822
level=info ts=2024-04-09T16:05:09.558141125Z caller=flush.go:176 component=ingester msg="flushing stream" user=fake fp=1a2a89cac9a7381b immediate=false num_chunks=1 labels="{filename=\"/var/log/auth.log\", hostname=\"debian\", job=\"varlogs\", service_name=\"varlogs\"}"
level=info ts=2024-04-09T16:05:09.558182365Z caller=flush.go:176 component=ingester msg="flushing stream" user=fake fp=7a2f7f4a73221e58 immediate=false num_chunks=1 labels="{filename=\"/var/log/syslog\", hostname=\"debian\", job=\"varlogs\", service_name=\"varlogs\"}"
level=info ts=2024-04-09T16:05:32.700319207Z caller=reporter.go:305 msg="failed to send usage report" retries=0 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:05:33.788182162Z caller=reporter.go:305 msg="failed to send usage report" retries=1 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:05:37.023147041Z caller=flush.go:176 component=ingester msg="flushing stream" user=fake fp=053d585d833adb4f immediate=false num_chunks=1 labels="{filename=\"/var/log/systemd/node_exporter.service.log\", hostname=\"ds1821\", job=\"synology\", service_name=\"synology\"}"
level=info ts=2024-04-09T16:05:37.664668953Z caller=reporter.go:305 msg="failed to send usage report" retries=2 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:05:44.622285546Z caller=reporter.go:305 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:05:52.55987202Z caller=table_manager.go:136 index-store=tsdb-2020-10-24 msg="uploading tables"
level=info ts=2024-04-09T16:05:52.55991949Z caller=index_set.go:86 msg="uploading table index_19822"
level=info ts=2024-04-09T16:05:52.55993386Z caller=index_set.go:107 msg="finished uploading table index_19822"
level=info ts=2024-04-09T16:05:52.55994916Z caller=index_set.go:185 msg="cleaning up unwanted indexes from table index_19822"
level=info ts=2024-04-09T16:05:55.774425017Z caller=reporter.go:305 msg="failed to send usage report" retries=4 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:05:55.774488707Z caller=reporter.go:281 msg="failed to report usage" err="5 errors: Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused; Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:06:32.684925317Z caller=reporter.go:305 msg="failed to send usage report" retries=0 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:06:34.351524481Z caller=reporter.go:305 msg="failed to send usage report" retries=1 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:06:38.151997433Z caller=reporter.go:305 msg="failed to send usage report" retries=2 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:06:44.853920474Z caller=reporter.go:305 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": dial tcp 0.0.0.0:443: connect: connection refused"
level=info ts=2024-04-09T16:06:52.560613984Z caller=table_manager.go:136 index-store=tsdb-2020-10-24 msg="uploading tables"

Again, this issue only appears after several hours. In this particular case, Loki had been running for almost exactly 4 hours before the reporting was attempted.

level=info ts=2024-04-09T12:04:52.548845645Z caller=main.go:120 msg="Starting Loki" version="(version=3.0.0, branch=HEAD, revision=b4f7181c7a)"

jasonodonnell commented 6 months ago

@tomswartz07 I think this is a problem with the path where you mounted the config file. You mentioned the config was mounted at /etc/loki/local-config.yml, but looking at the container, it expects the file extension to be .yaml.
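
If that is the cause, a hedged way to rule it out (assuming a Compose-style deployment; the default path comes from the image's startup command) is to mount the file at the expected .yaml path, or to keep the .yml path and point Loki at it explicitly:

# Hypothetical Compose fragment: remove the .yml/.yaml ambiguity.
services:
  loki:
    image: grafana/loki:3.0.0
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml:ro   # match the default path the image expects
    # ...or keep the original mount and pass the path explicitly:
    # command: ["-config.file=/etc/loki/local-config.yml"]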

ErikEngerd commented 6 months ago

Is there a solution for this? As soon as it is fixed, I can upgrade Loki to that version. For the time being, my network policy is blocking the analytics (which is how I found out about this functionality in the first place). Also, to be 'nice', I think analytics reporting should be opt-in rather than opt-out. Perhaps put the configuration for that at the top of the Helm chart so that it gets noticed. Things like this can be a deal breaker if you want to deploy Loki in a commercial setting.

There are really a lot of connection attempts. Cilium is logging this (output of hubble observe -f):

Apr 29 21:02:58.353: monitoring/loki-0:43166 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:01.769: monitoring/loki-0:50128 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:02.801: monitoring/loki-0:50128 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:04.817: monitoring/loki-0:50128 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:08.858: monitoring/loki-0:56612 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:09.872: monitoring/loki-0:56612 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:11.888: monitoring/loki-0:56612 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
Apr 29 21:03:20.402: monitoring/loki-0:47174 (ID:36727) <> 34.96.126.106:443 (ID:16777626) Policy denied DROPPED (TCP Flags: SYN)
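
For context, a rough sketch (not my actual policy) of the kind of egress allow-list that produces drops like these - only in-cluster traffic and DNS are permitted, so the reporter's calls to stats.grafana.org are denied; the pod label is assumed:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-loki-egress
  namespace: monitoring
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: loki   # adjust to match the Loki pods
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}      # allow egress to in-cluster endpoints only
    - ports:
        - protocol: UDP
          port: 53                   # keep DNS lookups working
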
thokich commented 5 months ago

I have the same problem, and it is not nice that Loki sends information over the internet to stats.grafana.org. I have blocked it in my firewall, but software that sends data to a stats server by default is not acceptable. This function should be disabled by default; the user must give an OK before the software sends statistics to a server. This is the log line when the firewall blocks the request:

level=info ts=2024-05-31T23:45:09.34433078Z caller=reporter.go:305 msg="failed to send usage report" retries=3 err="Post \"https://stats.grafana.org/loki-usage-report\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"

To disable it, I set the following in loki-config.yaml (as the sample config notes: "If you would like to disable reporting, uncomment the following lines"):

analytics:
  reporting_enabled: false

mohammed-uddin commented 5 months ago

usage / stats reporting should be an opt-in feature :)

SeaweedbrainCY commented 1 day ago

The issue seems to be fixed with version 3.1.2 (branch: HEAD, revision: 41a2ee77e8) (docker image 41dda0164596).

Adding

analytics:
  reporting_enabled: false
  usage_stats_url: ""

in the config file correctly stops the analytics requests.

Still, I strongly advocate that this feature should not be enabled by default.