bitmagnet-io / bitmagnet

A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.
https://bitmagnet.io/
MIT License

High CPU usage, some query infinitely fails #289

Closed. k4rli closed this issue 1 month ago.

k4rli commented 3 months ago

Describe the bug

When I start the container, some DB operation fails endlessly and CPU and RAM usage rapidly climb to 100%, so I cannot use Bitmagnet at all anymore. This first started happening about three weeks ago and I haven't used it since. I tried the latest image and it still happens.

To Reproduce

Run the latest Docker image against a database that has been in use since the earliest versions of Bitmagnet.

Logs

Some words in queries have been substituted with an asterisk.

```log
bitmagnet-postgres | 2024-06-30 18:49:33.785 UTC [636] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'syren <-> de <-> mer <-> sharing <-> my <-> wife <-> r <-> n <-> centurion <-> * <-> * <-> * <-> *'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'xxx' AND ("content"."release_date" >= '2022-01-01 00:00:00' AND "content"."release_date" < '2023-01-01 00:00:00') AND content.tsv @@ '* <-> de <-> mer <-> * <-> my <-> * <-> r <-> n <-> * <-> * <-> * <-> * <-> *'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:33.785 UTC [636] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:33.785 UTC [636] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:33.788 UTC [639] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:33.788 UTC [639] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, '* <-> 20 <-> 12 <-> 02 <-> hime <-> marie <-> december'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = '*' AND ("content"."release_date" >= '2020-01-01 00:00:00' AND "content"."release_date" < '2021-01-01 00:00:00') AND content.tsv @@ '* <-> 20 <-> 12 <-> 02 <-> hime <-> marie <-> december'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:33.795 UTC [642] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:33.795 UTC [642] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, '* <-> * <-> deep <-> space <-> * <-> stagione <-> 5'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2002-01-01 00:00:00' AND "content"."release_date" < '2003-01-01 00:00:00') AND content.tsv @@ 'star <-> trek <-> deep <-> space <-> nine <-> stagione <-> 5'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:33.806 UTC [645] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:33.806 UTC [645] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'futbol <-> cheom <-> obzor <-> matchei <-> 8 <-> i <-> tur <-> 2 <-> i <-> den_sq_ <-> 16 <-> 10'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'tv_show' AND ("content"."release_date" >= '2023-01-01 00:00:00' AND "content"."release_date" < '2024-01-01 00:00:00') AND content.tsv @@ 'futbol <-> cheom <-> obzor <-> matchei <-> 8 <-> i <-> tur <-> 2 <-> i <-> den_sq_ <-> 16 <-> 10'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:33.807 UTC [645] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:33.807 UTC [645] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.179 UTC [653] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.179 UTC [653] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, '2021 <-> 8 <-> 23 <-> Huan <-> Qi <-> Tan <-> Hua <-> Di <-> Yi <-> Chang <-> Shou <-> Fei <-> Fang <-> 288 <-> Jin <-> Bi <-> Jing <-> Pin <-> Bao <-> Ma <-> Yan <-> Jing <-> Nu <-> Yan <-> Jiu <-> Sheng <-> Sao <-> De <-> Bu <-> Yao <-> Bu <-> Yao <-> Gong <-> Wu <-> Yuan <-> Fu <-> Qi <-> Fan <-> Chang <-> Xiu'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'tv_show' AND ("content"."release_date" >= '2021-01-01 00:00:00' AND "content"."release_date" < '2022-01-01 00:00:00') AND content.tsv @@ '2021 <-> 8 <-> 23 <-> Huan <-> Qi <-> Tan <-> Hua <-> Di <-> Yi <-> Chang <-> Shou <-> Fei <-> Fang <-> 288 <-> Jin <-> Bi <-> Jing <-> Pin <-> Bao <-> Ma <-> Yan <-> Jing <-> Nu <-> Yan <-> Jiu <-> Sheng <-> Sao <-> De <-> Bu <-> Yao <-> Bu <-> Yao <-> Gong <-> Wu <-> Yuan <-> Fu <-> Qi <-> Fan <-> Chang <-> Xiu'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.179 UTC [653] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:34.179 UTC [653] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.183 UTC [651] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.183 UTC [651] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'milfslikeitbig <-> nina <-> elle <-> have <-> your <-> cock <-> and <-> eat <-> it <-> too <-> 21 <-> 08'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'xxx' AND ("content"."release_date" >= '2018-01-01 00:00:00' AND "content"."release_date" < '2019-01-01 00:00:00') AND content.tsv @@ '* <-> * <-> * <-> have <-> your <-> * <-> and <-> eat <-> it <-> too <-> 21 <-> 08'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.183 UTC [651] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:34.183 UTC [651] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.218 UTC [657] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.218 UTC [657] STATEMENT: SELECT EXISTS(SELECT * FROM "content" WHERE "content"."type" = 'xxx' AND ("content"."release_date" >= '2012-01-01 00:00:00' AND "content"."release_date" < '2013-01-01 00:00:00') AND content.tsv @@ 'brazzers <-> big <-> tits <-> in <-> uniform <-> shyla <-> stylez <-> happy <-> fuck <-> day <-> september <-> 22'::tsquery)
bitmagnet-postgres | 2024-06-30 18:49:34.218 UTC [657] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:34.218 UTC [657] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.549 UTC [663] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.549 UTC [663] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'judith <-> szucs <-> mtv'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2003-01-01 00:00:00' AND "content"."release_date" < '2004-01-01 00:00:00') AND content.tsv @@ 'judith <-> szucs <-> mtv'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.577 UTC [668] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.577 UTC [668] STATEMENT: SELECT EXISTS(SELECT * FROM "content" WHERE "content"."type" = 'tv_show' AND ("content"."release_date" >= '2020-01-01 00:00:00' AND "content"."release_date" < '2021-01-01 00:00:00') AND content.tsv @@ 'yinyleon <-> aamteur <-> wife <-> is <-> tenderly <-> awakened <-> 07 <-> 13 <-> 20 <-> mp4'::tsquery)
bitmagnet-postgres | 2024-06-30 18:49:34.577 UTC [668] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:34.577 UTC [668] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.911 UTC [675] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.911 UTC [675] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'Hua <-> Sheng <-> Jiang <-> Xi <-> Ying <-> the <-> peanut <-> butter <-> falcon'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2019-01-01 00:00:00' AND "content"."release_date" < '2020-01-01 00:00:00') AND content.tsv @@ 'Hua <-> Sheng <-> Jiang <-> Xi <-> Ying <-> the <-> peanut <-> butter <-> falcon'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.917 UTC [678] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.917 UTC [678] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'sniper <-> g <-> r <-> i <-> t <-> global <-> response <-> and <-> intelligence <-> team'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2023-01-01 00:00:00' AND "content"."release_date" < '2024-01-01 00:00:00') AND content.tsv @@ 'sniper <-> g <-> r <-> i <-> t <-> global <-> response <-> and <-> intelligence <-> team'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.968 UTC [666] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.968 UTC [666] STATEMENT: SELECT EXISTS(SELECT * FROM "content" WHERE "content"."type" = 'xxx' AND ("content"."release_date" >= '2018-01-01 00:00:00' AND "content"."release_date" < '2019-01-01 00:00:00') AND content.tsv @@ '* <-> aka <-> * <-> * <-> ria <-> blonde <-> * <-> * <-> * <-> enjoys <-> * <-> in <-> * <-> * <-> 09 <-> 04'::tsquery)
bitmagnet-postgres | 2024-06-30 18:49:34.969 UTC [666] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:34.969 UTC [666] FATAL: connection to client lost
bitmagnet-postgres | 2024-06-30 18:49:34.970 UTC [696] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.970 UTC [696] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'sui <-> dhaaga'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2018-01-01 00:00:00' AND "content"."release_date" < '2019-01-01 00:00:00') AND content.tsv @@ 'sui <-> dhaaga'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:34.994 UTC [704] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.994 UTC [704] STATEMENT: SELECT *, ts_rank_cd(content.tsv, $1::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = $2 AND content.tsv @@ $3::tsquery ORDER BY "_order_0" DESC LIMIT $4
bitmagnet-postgres | 2024-06-30 18:49:34.999 UTC [702] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:34.999 UTC [702] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, '* <-> *'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'movie' AND ("content"."release_date" >= '2020-01-01 00:00:00' AND "content"."release_date" < '2021-01-01 00:00:00') AND content.tsv @@ '* <-> *'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:35.372 UTC [708] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:35.372 UTC [708] STATEMENT: SELECT *, ts_rank_cd(content.tsv, $1::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = $2 AND ("content"."release_date" >= $3 AND "content"."release_date" < $4) AND content.tsv @@ $5::tsquery ORDER BY "_order_0" DESC LIMIT $6
bitmagnet-postgres | 2024-06-30 18:49:35.393 UTC [711] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:35.393 UTC [711] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'top <-> gear <-> audio <-> e <-> video <-> sincronizzati <-> 12x08'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'tv_show' AND ("content"."release_date" >= '2008-01-01 00:00:00' AND "content"."release_date" < '2009-01-01 00:00:00') AND content.tsv @@ 'top <-> gear <-> audio <-> e <-> video <-> sincronizzati <-> 12x08'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:35.405 UTC [693] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:35.405 UTC [693] STATEMENT: SELECT EXISTS(SELECT * FROM "content" WHERE "content"."type" = 'tv_show' AND content.tsv @@ 'www <-> scenetime <-> com <-> * <-> *'::tsquery)
bitmagnet-postgres | 2024-06-30 18:49:35.428 UTC [717] ERROR: canceling statement due to user request
bitmagnet-postgres | 2024-06-30 18:49:35.428 UTC [717] STATEMENT: WITH "cte" AS MATERIALIZED (SELECT *, ts_rank_cd(content.tsv, 'ho <-> mu <-> re <-> su <-> Zhong <-> Xue <-> Sheng <-> 08 <-> 07 <-> 12'::tsquery) AS _order_0 FROM "content" WHERE "content"."type" = 'tv_show' AND ("content"."release_date" >= '2008-01-01 00:00:00' AND "content"."release_date" < '2009-01-01 00:00:00') AND content.tsv @@ 'ho <-> mu <-> re <-> su <-> Zhong <-> Xue <-> Sheng <-> 08 <-> 07 <-> 12'::tsquery LIMIT 50000),"cte_count" AS MATERIALIZED (SELECT COUNT(*) AS total_count FROM cte) SELECT * FROM "cte" WHERE (SELECT MAX(total_count) FROM cte_count) < 50000 ORDER BY "_order_0" DESC LIMIT $1
bitmagnet-postgres | 2024-06-30 18:49:35.429 UTC [717] LOG: could not send data to client: Broken pipe
bitmagnet-postgres | 2024-06-30 18:49:35.429 UTC [717] FATAL: connection to client lost
^Ccanceled
```
My docker-compose.yml

```
services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    container_name: bitmagnet
    ports:
      - "3333:3333"
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=bitmagnet-postgres
      - POSTGRES_PASSWORD=postgres
      - REDIS_ADDR=bitmagnet-redis:6379
      - TMDB_API_KEY=private
      - LOG_LEVEL=info
      - DHT_CRAWLER_SCALING_FACTOR=5
    command:
      - worker
      - run
      - --keys=http_server
      - --keys=queue_server
      # disable the next line to run without DHT crawler
      - --keys=dht_crawler
    networks:
      - homenet
      - bitmagnetnet
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=homenet"
      - "traefik.http.routers.bitmagnet.rule=Host(`private`)"
      - "traefik.http.routers.bitmagnet.entrypoints=https"
      - "traefik.http.services.bitmagnet.loadbalancer.server.port=3333"
      - "traefik.http.routers.bitmagnet.tls=true"
      - "traefik.http.routers.bitmagnet.tls.certresolver=letsencrypt"
      - "traefik.http.routers.bitmagnet.middlewares=basic-auth-global@file"
  postgres:
    image: postgres:16-alpine
    container_name: bitmagnet-postgres
    shm_size: 3g
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    ports:
      - "5433:5432"
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=private
      - POSTGRES_DB=bitmagnet
      - PGUSER=postgres
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
      start_period: 20s
      interval: 10s
    networks:
      - homenet
      - bitmagnetnet
    labels:
      - "traefik.enable=false"
  redis:
    image: redis:7-alpine
    container_name: bitmagnet-redis
    hostname: bitmagnet-redis
    volumes:
      - ./data/redis:/data
    restart: unless-stopped
    entrypoint:
      - redis-server
      - --save 60 1
    healthcheck:
      test:
        - CMD
        - redis-cli
        - ping
      start_period: 20s
      interval: 10s
    networks:
      - bitmagnetnet
    labels:
      - "traefik.enable=false"
networks:
  homenet:
    external: true
  bitmagnetnet:
    internal: true
```

Expected behavior

Works as before and doesn't blow up my PC.

Environment Information (Required)

rraymondgh commented 3 months ago

I had the same issues after 0.9.0; it's worth checking whether 0.9.3 still has them. I solved mine by adding some Postgres configuration options and slimming down the size of the database. I'll add the steps I took when I'm next at my laptop.

mgdigital commented 2 months ago

Hi,

> When I start the container, some DB operation is infinitely failing

This isn't the case: the operation has been cancelled, which isn't an error. A few people have pointed this out though, and there may be something we can do to make the logs less noisy.

> CPU usage + RAM go to 100% rapidly

Please see the release notes for 0.9.0: the queue is re-indexing torrents so they work with the new ordering feature, and this will use system resources while it runs. As features are added, we sometimes need to run a maintenance task like this to bring the database up to date. We try to keep these to a minimum, because when they are needed, many users will have a big database that needs updating.

> Cannot use Bitmagnet anymore at all

If you allow the queue to work down (see progress on the /metrics endpoint), usage should return to normal. If the app has become unusable, clearing the queue (by manually deleting from the queue_jobs table) should also return things to normal, though ordering won't work until this job has completed. It's also worth reviewing whether your system configuration and hardware are suitable for the size of database you're running. A few people had issues with 0.9.0, but most have reported them resolved in 0.9.3. While efficiency improvements can be made in the code, we can't test every environment it will run in. What hardware are you running Bitmagnet on, what type of disk is the database running on, and roughly how many torrents do you have indexed?
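To make the manual queue clear concrete, here is a minimal SQL sketch. The only thing taken from this thread is the queue_jobs table name; checking the row count first is just a sanity step, and the exact schema may differ between versions:

```sql
-- Sanity check: see how much work is still queued before deleting anything
SELECT COUNT(*) FROM queue_jobs;

-- Clear the queue; ordering won't work until the re-index job has run again
DELETE FROM queue_jobs;
```

With the compose file earlier in this thread (database `bitmagnet`, user `postgres`), these can be run via `docker exec -it bitmagnet-postgres psql -U postgres -d bitmagnet`.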

mgdigital commented 2 months ago

Anyone still having issues, please let me know if v0.9.4 is any better? https://github.com/bitmagnet-io/bitmagnet/releases/tag/v0.9.4

Again, please bear in mind that any unusual load will be due to queue jobs running; their health can be checked at the /metrics endpoint. There is work underway to expose some of these inner workings in the UI, to make it less opaque what is going on.

k4rli commented 2 months ago

Seems good for me with the 0.9.4 image. The container starts up and stays calm.

logs

```log
❯ docker compose pull
[+] Pulling 6/6
 ✔ redis Pulled 1.3s
 ✔ bitmagnet Pulled 3.2s
 ✔ ec99f8b99825 Already exists 0.0s
 ✔ d79629b50088 Pull complete 1.7s
 ✔ c85c06822aa0 Pull complete 1.9s
 ✔ postgres Pulled 1.3s
❯ docker compose up -d
[+] Running 4/4
 ✔ Network bitmagnet-docker_bitmagnetnet Created 0.1s
 ✔ Container bitmagnet-redis Healthy 6.3s
 ✔ Container bitmagnet-postgres Healthy 6.3s
 ✔ Container bitmagnet Started 6.4s
❯ docker compose logs -f
bitmagnet-postgres |
bitmagnet-postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
bitmagnet-postgres |
bitmagnet-postgres | 2024-07-05 13:17:53.910 UTC [1] LOG: starting PostgreSQL 16.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20240309) 13.2.1 20240309, 64-bit
bitmagnet-postgres | 2024-07-05 13:17:53.910 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
bitmagnet-postgres | 2024-07-05 13:17:53.910 UTC [1] LOG: listening on IPv6 address "::", port 5432
bitmagnet-postgres | 2024-07-05 13:17:53.923 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
bitmagnet-postgres | 2024-07-05 13:17:53.938 UTC [29] LOG: database system was shut down at 2024-06-30 18:50:39 UTC
bitmagnet-postgres | 2024-07-05 13:17:53.962 UTC [1] LOG: database system is ready to accept connections
bitmagnet | INFO migrator migrations/migrator.go:68 checking and applying migrations...
bitmagnet | INFO migrator migrations/logger.go:33 goose: no migrations to run. current version: 18
bitmagnet | INFO worker/worker.go:195 started worker {"key": "dht_crawler"}
bitmagnet | INFO worker/worker.go:195 started worker {"key": "queue_server"}
bitmagnet | INFO worker/worker.go:195 started worker {"key": "http_server"}
bitmagnet-redis | 1:C 05 Jul 2024 13:17:53.774 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
bitmagnet-redis | 1:C 05 Jul 2024 13:17:53.774 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
bitmagnet-redis | 1:C 05 Jul 2024 13:17:53.774 * Redis version=7.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
bitmagnet-redis | 1:C 05 Jul 2024 13:17:53.774 * Configuration loaded
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.774 * monotonic clock: POSIX clock_gettime
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.774 * Running mode=standalone, port=6379.
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.775 * Server initialized
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.775 * Loading RDB produced by version 7.2.5
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.775 * RDB age 412036 seconds
bitmagnet-redis | 1:M 05 Jul 2024 13:17:53.775 * RDB memory usage when created 587.01 Mb
bitmagnet-redis | 1:M 05 Jul 2024 13:17:55.080 * Done loading RDB, keys loaded: 115786, keys expired: 0.
bitmagnet-redis | 1:M 05 Jul 2024 13:17:55.080 * DB loaded from disk: 1.305 seconds
bitmagnet-redis | 1:M 05 Jul 2024 13:17:55.080 * Ready to accept connections tcp
bitmagnet-postgres | 2024-07-05 13:18:31.583 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:18:48.173 UTC [27] LOG: checkpoint complete: wrote 962 buffers (5.9%); 0 WAL file(s) added, 1 removed, 32 recycled; write=15.562 s, sync=0.612 s, total=16.591 s; sync files=102, longest=0.098 s, average=0.006 s; distance=530401 kB, estimate=530401 kB; lsn=4A3/FC9F29C0, redo lsn=4A3/DD0F7238
bitmagnet-postgres | 2024-07-05 13:18:48.261 UTC [27] LOG: checkpoints are occurring too frequently (17 seconds apart)
bitmagnet-postgres | 2024-07-05 13:18:48.261 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:18:48.261 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:19:05.729 UTC [27] LOG: checkpoint complete: wrote 714 buffers (4.4%); 0 WAL file(s) added, 2 removed, 31 recycled; write=16.257 s, sync=0.746 s, total=17.469 s; sync files=123, longest=0.110 s, average=0.007 s; distance=540099 kB, estimate=540099 kB; lsn=4A4/1E430528, redo lsn=4A3/FE068070
bitmagnet-postgres | 2024-07-05 13:19:05.729 UTC [27] LOG: checkpoints are occurring too frequently (17 seconds apart)
bitmagnet-postgres | 2024-07-05 13:19:05.729 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:19:05.729 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:19:15.401 UTC [27] LOG: checkpoint complete: wrote 906 buffers (5.5%); 0 WAL file(s) added, 1 removed, 32 recycled; write=8.749 s, sync=0.470 s, total=9.672 s; sync files=118, longest=0.076 s, average=0.004 s; distance=551531 kB, estimate=551531 kB; lsn=4A4/3E32E8D0, redo lsn=4A4/1FB02F00
bitmagnet-postgres | 2024-07-05 13:19:15.962 UTC [27] LOG: checkpoints are occurring too frequently (10 seconds apart)
bitmagnet-postgres | 2024-07-05 13:19:15.962 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:19:15.962 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:19:27.882 UTC [27] LOG: checkpoint complete: wrote 1469 buffers (9.0%); 0 WAL file(s) added, 2 removed, 31 recycled; write=10.060 s, sync=1.087 s, total=11.921 s; sync files=84, longest=0.226 s, average=0.013 s; distance=530037 kB, estimate=549382 kB; lsn=4A4/608DF118, redo lsn=4A4/400A04B8
bitmagnet-postgres | 2024-07-05 13:19:27.883 UTC [27] LOG: checkpoints are occurring too frequently (12 seconds apart)
bitmagnet-postgres | 2024-07-05 13:19:27.883 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:19:27.883 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:19:44.656 UTC [27] LOG: checkpoint complete: wrote 616 buffers (3.8%); 0 WAL file(s) added, 4 removed, 31 recycled; write=14.887 s, sync=1.047 s, total=16.774 s; sync files=81, longest=0.215 s, average=0.013 s; distance=577105 kB, estimate=577105 kB; lsn=4A4/83D1AE40, redo lsn=4A4/63434BC8
bitmagnet-postgres | 2024-07-05 13:19:44.657 UTC [27] LOG: checkpoints are occurring too frequently (17 seconds apart)
bitmagnet-postgres | 2024-07-05 13:19:44.657 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:19:44.657 UTC [27] LOG: checkpoint starting: wal
bitmagnet-postgres | 2024-07-05 13:19:58.144 UTC [27] LOG: checkpoint complete: wrote 734 buffers (4.5%); 0 WAL file(s) added, 4 removed, 32 recycled; write=11.676 s, sync=1.074 s, total=13.487 s; sync files=81, longest=0.264 s, average=0.014 s; distance=595304 kB, estimate=595304 kB; lsn=4A4/A653AFC0, redo lsn=4A4/8798EE50
bitmagnet-postgres | 2024-07-05 13:19:58.634 UTC [27] LOG: checkpoints are occurring too frequently (14 seconds apart)
bitmagnet-postgres | 2024-07-05 13:19:58.634 UTC [27] HINT: Consider increasing the configuration parameter "max_wal_size".
bitmagnet-postgres | 2024-07-05 13:19:58.634 UTC [27] LOG: checkpoint starting: wal
```
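The repeated "checkpoints are occurring too frequently" hints in the Postgres log above suggest the WAL churn from the re-indexing job is outpacing the default checkpoint budget. One way to follow the hint with a compose setup like the one in this thread is to pass the setting on the postgres command line; the 4GB value below is only an illustrative starting point, not a tested recommendation:

```
  postgres:
    image: postgres:16-alpine
    # ...existing options as above...
    command:
      - postgres
      - -c
      - max_wal_size=4GB
```

Alternatively, the same setting can be added to the postgresql.conf inside the mounted data directory.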
metrics

```log
# HELP bitmagnet_dht_crawler_persisted_total A counter of persisted database entities.
# TYPE bitmagnet_dht_crawler_persisted_total counter
bitmagnet_dht_crawler_persisted_total{entity="Torrent"} 232
bitmagnet_dht_crawler_persisted_total{entity="TorrentsTorrentSource"} 2300
# HELP bitmagnet_dht_ktable_hashes_added Total number of hashes added to routing table.
# TYPE bitmagnet_dht_ktable_hashes_added counter
bitmagnet_dht_ktable_hashes_added 534
# HELP bitmagnet_dht_ktable_hashes_count Number of hashes in routing table.
# TYPE bitmagnet_dht_ktable_hashes_count gauge
bitmagnet_dht_ktable_hashes_count 534
# HELP bitmagnet_dht_ktable_hashes_dropped Total number of hashes dropped from routing table.
# TYPE bitmagnet_dht_ktable_hashes_dropped counter
bitmagnet_dht_ktable_hashes_dropped 0
# HELP bitmagnet_dht_ktable_nodes_added Total number of nodes added to routing table.
# TYPE bitmagnet_dht_ktable_nodes_added counter
bitmagnet_dht_ktable_nodes_added 794
# HELP bitmagnet_dht_ktable_nodes_count Number of nodes in routing table.
# TYPE bitmagnet_dht_ktable_nodes_count gauge
bitmagnet_dht_ktable_nodes_count 341
# HELP bitmagnet_dht_ktable_nodes_dropped Total number of nodes dropped from routing table.
# TYPE bitmagnet_dht_ktable_nodes_dropped counter
bitmagnet_dht_ktable_nodes_dropped 453
# HELP bitmagnet_dht_responder_query_concurrency Number of concurrent DHT queries.
# TYPE bitmagnet_dht_responder_query_concurrency gauge
bitmagnet_dht_responder_query_concurrency{query="find_node"} 0
bitmagnet_dht_responder_query_concurrency{query="get_peers"} 0
bitmagnet_dht_responder_query_concurrency{query="ping"} 0
# HELP bitmagnet_dht_responder_query_duration_seconds A histogram of successful DHT query durations in seconds.
# TYPE bitmagnet_dht_responder_query_duration_seconds histogram
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.1"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.15000000000000002"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.22500000000000003"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.3375"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="0.5062500000000001"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="find_node",le="+Inf"} 6
bitmagnet_dht_responder_query_duration_seconds_sum{query="find_node"} 0.000176673
bitmagnet_dht_responder_query_duration_seconds_count{query="find_node"} 6
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.1"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.15000000000000002"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.22500000000000003"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.3375"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="0.5062500000000001"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="get_peers",le="+Inf"} 36
bitmagnet_dht_responder_query_duration_seconds_sum{query="get_peers"} 0.0014087019999999997
bitmagnet_dht_responder_query_duration_seconds_count{query="get_peers"} 36
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.1"} 5
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.15000000000000002"} 5
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.22500000000000003"} 5
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.3375"} 5
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="0.5062500000000001"} 5
bitmagnet_dht_responder_query_duration_seconds_bucket{query="ping",le="+Inf"} 5
bitmagnet_dht_responder_query_duration_seconds_sum{query="ping"} 2.9704000000000002e-05
bitmagnet_dht_responder_query_duration_seconds_count{query="ping"} 5
# HELP bitmagnet_dht_responder_query_success_total A counter of successful DHT queries.
# TYPE bitmagnet_dht_responder_query_success_total counter
bitmagnet_dht_responder_query_success_total{query="find_node"} 6
bitmagnet_dht_responder_query_success_total{query="get_peers"} 36
bitmagnet_dht_responder_query_success_total{query="ping"} 5
# HELP bitmagnet_dht_server_query_concurrency Number of concurrent DHT queries.
# TYPE bitmagnet_dht_server_query_concurrency gauge
bitmagnet_dht_server_query_concurrency{query="find_node"} 1
bitmagnet_dht_server_query_concurrency{query="get_peers"} 33
bitmagnet_dht_server_query_concurrency{query="ping"} 5
bitmagnet_dht_server_query_concurrency{query="sample_infohashes"} 25
# HELP bitmagnet_dht_server_query_duration_seconds A histogram of successful DHT query durations in seconds.
# TYPE bitmagnet_dht_server_query_duration_seconds histogram
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.1"} 412
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.15000000000000002"} 450
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.22500000000000003"} 482
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.3375"} 583
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="0.5062500000000001"} 598
bitmagnet_dht_server_query_duration_seconds_bucket{query="find_node",le="+Inf"} 601
bitmagnet_dht_server_query_duration_seconds_sum{query="find_node"} 69.93008688100004
bitmagnet_dht_server_query_duration_seconds_count{query="find_node"} 601
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.1"} 8296
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.15000000000000002"} 9897
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.22500000000000003"} 11451
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.3375"} 13351
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="0.5062500000000001"} 13791
bitmagnet_dht_server_query_duration_seconds_bucket{query="get_peers",le="+Inf"} 13897
bitmagnet_dht_server_query_duration_seconds_sum{query="get_peers"} 1685.2416921299957
bitmagnet_dht_server_query_duration_seconds_count{query="get_peers"} 13897
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.1"} 729
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.15000000000000002"} 851
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.22500000000000003"} 950
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.3375"} 1150
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="0.5062500000000001"} 1214
bitmagnet_dht_server_query_duration_seconds_bucket{query="ping",le="+Inf"} 1224
bitmagnet_dht_server_query_duration_seconds_sum{query="ping"} 164.1278875520001
bitmagnet_dht_server_query_duration_seconds_count{query="ping"} 1224
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.1"} 1318
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.15000000000000002"} 1543
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.22500000000000003"} 1729
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.3375"} 2176
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="0.5062500000000001"} 2290
bitmagnet_dht_server_query_duration_seconds_bucket{query="sample_infohashes",le="+Inf"} 2310
bitmagnet_dht_server_query_duration_seconds_sum{query="sample_infohashes"} 317.33007929199925
bitmagnet_dht_server_query_duration_seconds_count{query="sample_infohashes"} 2310
# HELP bitmagnet_dht_server_query_error_total A counter of failed DHT queries.
# TYPE bitmagnet_dht_server_query_error_total counter
bitmagnet_dht_server_query_error_total{query="find_node"} 70
bitmagnet_dht_server_query_error_total{query="get_peers"} 417
bitmagnet_dht_server_query_error_total{query="ping"} 223
bitmagnet_dht_server_query_error_total{query="sample_infohashes"} 1727
# HELP bitmagnet_dht_server_query_success_total A counter of successful DHT queries.
# TYPE bitmagnet_dht_server_query_success_total counter
bitmagnet_dht_server_query_success_total{query="find_node"} 601
bitmagnet_dht_server_query_success_total{query="get_peers"} 13897
bitmagnet_dht_server_query_success_total{query="ping"} 1224
bitmagnet_dht_server_query_success_total{query="sample_infohashes"} 2310
# HELP bitmagnet_meta_info_requester_concurrency Number of concurrent meta info requests.
# TYPE bitmagnet_meta_info_requester_concurrency gauge
bitmagnet_meta_info_requester_concurrency 92
# HELP bitmagnet_meta_info_requester_duration_seconds Duration of successful meta info requests in seconds.
# TYPE bitmagnet_meta_info_requester_duration_seconds histogram
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.005"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.01"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.025"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.05"} 0
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.1"} 4
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.25"} 36
bitmagnet_meta_info_requester_duration_seconds_bucket{le="0.5"} 84
bitmagnet_meta_info_requester_duration_seconds_bucket{le="1"} 138
bitmagnet_meta_info_requester_duration_seconds_bucket{le="2.5"} 227
bitmagnet_meta_info_requester_duration_seconds_bucket{le="5"} 270
bitmagnet_meta_info_requester_duration_seconds_bucket{le="10"} 275
bitmagnet_meta_info_requester_duration_seconds_bucket{le="+Inf"} 275
bitmagnet_meta_info_requester_duration_seconds_sum 381.2654152300004
bitmagnet_meta_info_requester_duration_seconds_count 275
# HELP bitmagnet_meta_info_requester_error_total Total number of failed meta info requests.
# TYPE bitmagnet_meta_info_requester_error_total counter
bitmagnet_meta_info_requester_error_total 6756
# HELP bitmagnet_meta_info_requester_success_total Total number of successful meta info requests.
# TYPE bitmagnet_meta_info_requester_success_total counter
bitmagnet_meta_info_requester_success_total 275
# HELP bitmagnet_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE bitmagnet_process_cpu_seconds_total counter
bitmagnet_process_cpu_seconds_total 32.28
# HELP bitmagnet_process_max_fds Maximum number of open file descriptors.
# TYPE bitmagnet_process_max_fds gauge bitmagnet_process_max_fds 1.048576e+06 # HELP bitmagnet_process_open_fds Number of open file descriptors. # TYPE bitmagnet_process_open_fds gauge bitmagnet_process_open_fds 107 # HELP bitmagnet_process_resident_memory_bytes Resident memory size in bytes. # TYPE bitmagnet_process_resident_memory_bytes gauge bitmagnet_process_resident_memory_bytes 1.57478912e+08 # HELP bitmagnet_process_start_time_seconds Start time of the process since unix epoch in seconds. # TYPE bitmagnet_process_start_time_seconds gauge bitmagnet_process_start_time_seconds 1.7201854787e+09 # HELP bitmagnet_process_virtual_memory_bytes Virtual memory size in bytes. # TYPE bitmagnet_process_virtual_memory_bytes gauge bitmagnet_process_virtual_memory_bytes 1.511141376e+09 # HELP bitmagnet_process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes. # TYPE bitmagnet_process_virtual_memory_max_bytes gauge bitmagnet_process_virtual_memory_max_bytes 1.8446744073709552e+19 # HELP bitmagnet_queue_jobs_total Number of tasks enqueued; broken down by queue and status. # TYPE bitmagnet_queue_jobs_total gauge bitmagnet_queue_jobs_total{queue="process_torrent",status="pending"} 134594 bitmagnet_queue_jobs_total{queue="process_torrent",status="processed"} 504 # HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 2.3695e-05 go_gc_duration_seconds{quantile="0.25"} 3.1209e-05 go_gc_duration_seconds{quantile="0.5"} 3.3362e-05 go_gc_duration_seconds{quantile="0.75"} 3.694e-05 go_gc_duration_seconds{quantile="1"} 9.0009e-05 go_gc_duration_seconds_sum 0.009992686 go_gc_duration_seconds_count 282 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 485 # HELP go_info Information about the Go environment. 
# TYPE go_info gauge go_info{version="go1.22.5"} 1 # HELP go_memstats_alloc_bytes Number of bytes allocated and still in use. # TYPE go_memstats_alloc_bytes gauge go_memstats_alloc_bytes 5.293368e+07 # HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed. # TYPE go_memstats_alloc_bytes_total counter go_memstats_alloc_bytes_total 1.2129796176e+10 # HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. # TYPE go_memstats_buck_hash_sys_bytes gauge go_memstats_buck_hash_sys_bytes 2.200703e+06 # HELP go_memstats_frees_total Total number of frees. # TYPE go_memstats_frees_total counter go_memstats_frees_total 1.69710642e+08 # HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. # TYPE go_memstats_gc_sys_bytes gauge go_memstats_gc_sys_bytes 5.543032e+06 # HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use. # TYPE go_memstats_heap_alloc_bytes gauge go_memstats_heap_alloc_bytes 5.293368e+07 # HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. # TYPE go_memstats_heap_idle_bytes gauge go_memstats_heap_idle_bytes 1.63422208e+08 # HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. # TYPE go_memstats_heap_inuse_bytes gauge go_memstats_heap_inuse_bytes 7.6341248e+07 # HELP go_memstats_heap_objects Number of allocated objects. # TYPE go_memstats_heap_objects gauge go_memstats_heap_objects 151903 # HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. # TYPE go_memstats_heap_released_bytes gauge go_memstats_heap_released_bytes 1.32636672e+08 # HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. # TYPE go_memstats_heap_sys_bytes gauge go_memstats_heap_sys_bytes 2.39763456e+08 # HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection. 
# TYPE go_memstats_last_gc_time_seconds gauge go_memstats_last_gc_time_seconds 1.7201856960816712e+09 # HELP go_memstats_lookups_total Total number of pointer lookups. # TYPE go_memstats_lookups_total counter go_memstats_lookups_total 0 # HELP go_memstats_mallocs_total Total number of mallocs. # TYPE go_memstats_mallocs_total counter go_memstats_mallocs_total 1.69862545e+08 # HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. # TYPE go_memstats_mcache_inuse_bytes gauge go_memstats_mcache_inuse_bytes 28800 # HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. # TYPE go_memstats_mcache_sys_bytes gauge go_memstats_mcache_sys_bytes 31200 # HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. # TYPE go_memstats_mspan_inuse_bytes gauge go_memstats_mspan_inuse_bytes 998880 # HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. # TYPE go_memstats_mspan_sys_bytes gauge go_memstats_mspan_sys_bytes 2.33376e+06 # HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. # TYPE go_memstats_next_gc_bytes gauge go_memstats_next_gc_bytes 1.07783872e+08 # HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. # TYPE go_memstats_other_sys_bytes gauge go_memstats_other_sys_bytes 5.634593e+06 # HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator. # TYPE go_memstats_stack_inuse_bytes gauge go_memstats_stack_inuse_bytes 1.1862016e+07 # HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. # TYPE go_memstats_stack_sys_bytes gauge go_memstats_stack_sys_bytes 1.1862016e+07 # HELP go_memstats_sys_bytes Number of bytes obtained from system. # TYPE go_memstats_sys_bytes gauge go_memstats_sys_bytes 2.6736876e+08 # HELP go_threads Number of OS threads created. # TYPE go_threads gauge go_threads 31 ```

Maybe it is running maintenance or migrations at the moment. I'll let it run for a while. For now, Postgres is still at 60% CPU, just without the continuous error logs.
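For anyone else trying to tell maintenance work apart from stuck queries: the standard Postgres `pg_stat_activity` view shows what each backend is currently executing and for how long. A minimal diagnostic sketch (run with `psql` against the bitmagnet database; the cancel/terminate calls are only illustrations and should be used with care):

```sql
-- List non-idle backends, longest-running first,
-- with a truncated preview of each query.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 120) AS query_preview
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- If a specific query is clearly stuck, it can be cancelled by pid;
-- pg_terminate_backend is the heavier option of last resort.
-- SELECT pg_cancel_backend(12345);    -- 12345 is a placeholder pid
-- SELECT pg_terminate_backend(12345);
```

If the same `WITH "cte" AS MATERIALIZED ...` search queries from the logs above keep reappearing here with long runtimes, that would point at the full-text search rather than migrations.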

k4rli commented 1 month ago

Seems good to me now since 0.9.4: CPU usage is 1-2% at idle with a 7900X and 15M crawled entries.