X-lab2017 / open-digger

Open source analysis tools
https://open-digger.cn
Apache License 2.0

[Bug] ClickHouse database memory limit exceeded. #1449

Closed · lhbvvvvv closed this 7 months ago

lhbvvvvv commented 7 months ago

Current Behavior

The following error occurs when I query the top 10 OpenRank repos after connecting to the ClickHouse database. The query appears to exceed the maximum memory limit configured on the server.

Error: Memory limit (for user) exceeded: would use 89.61 GiB (attempt to allocate chunk of 4744104 bytes), maximum: 89.60 GiB. OvercommitTracker decision: Query was selected to stop by OvercommitTracker.: (while reading column issue_created_at): (while reading from part /clickhouse/data/data/store/361/361b8fb8-f6c8-47b8-ae6a-a1ad133957ba/202006_47695_47898_3_160142/ from mark 742 with max_rows_to_read = 23995): While executing MergeTreeThread.
    at parseError (/home/node/notebook/node_modules/@clickhouse/client/dist/error/parse_error.js:32:16)
    at ClientRequest.onResponse (/home/node/notebook/node_modules/@clickhouse/client/dist/connection/adapter/base_http_adapter.js:130:51)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
Error: Memory limit (for user) exceeded: would use 89.61 GiB (attempt to allocate chunk of 4201097 bytes), maximum: 89.60 GiB. OvercommitTracker decision: Query was selected to stop by OvercommitTracker.: (while reading column repo_name): (while reading from part /clickhouse/data/data/store/361/361b8fb8-f6c8-47b8-ae6a-a1ad133957ba/202206_55280_146727_124_160142/ from mark 3950 with max_rows_to_read = 30473): While executing MergeTreeThread.
    at parseError (/home/node/notebook/node_modules/@clickhouse/client/dist/error/parse_error.js:32:16)
    at ClientRequest.onResponse (/home/node/notebook/node_modules/@clickhouse/client/dist/connection/adapter/base_http_adapter.js:130:51)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
Error: Memory limit (for user) exceeded: would use 89.62 GiB (attempt to allocate chunk of 5898240 bytes), maximum: 89.60 GiB. OvercommitTracker decision: Query was selected to stop by OvercommitTracker.: While executing AggregatingTransform.
    at parseError (/home/node/notebook/node_modules/@clickhouse/client/dist/error/parse_error.js:32:16)
    at ClientRequest.onResponse (/home/node/notebook/node_modules/@clickhouse/client/dist/connection/adapter/base_http_adapter.js:130:51)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
Error: Memory limit (for user) exceeded: would use 89.74 GiB (attempt to allocate chunk of 4194344 bytes), maximum: 89.60 GiB. OvercommitTracker decision: Query was selected to stop by OvercommitTracker.: While executing AggregatingTransform.
    at parseError (/home/node/notebook/node_modules/@clickhouse/client/dist/error/parse_error.js:32:16)
    at ClientRequest.onResponse (/home/node/notebook/node_modules/@clickhouse/client/dist/connection/adapter/base_http_adapter.js:130:51)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
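
For reference, the query was issued from a Node notebook through @clickhouse/client, roughly along the lines of the sketch below. The table name, event-type filter, and metric SQL are assumptions for illustration, not the exact query OpenDigger runs; only the repo_name and issue_created_at columns are taken from the error trace.

```typescript
import { createClient } from '@clickhouse/client';

// Connection details are placeholders; adjust them to your own ClickHouse instance.
// Older 0.x versions of @clickhouse/client take `host`; newer 1.x releases use `url`.
const client = createClient({
  host: 'http://localhost:8123',
  username: 'default',
  password: '',
});

async function main() {
  // An aggregation that touches every repo in the table, similar in shape to the
  // duration-style metrics mentioned in the maintainer's reply below: it has to read
  // repo_name (and, for duration metrics, issue_created_at) across all parts,
  // which is what drives the memory usage up.
  const resultSet = await client.query({
    query: `
      SELECT repo_name, count() AS issue_count
      FROM events                    -- hypothetical table name
      WHERE type = 'IssuesEvent'     -- hypothetical event-type filter
      GROUP BY repo_name
      ORDER BY issue_count DESC
      LIMIT 10
    `,
    format: 'JSONEachRow',
  });
  console.log(await resultSet.json());
  await client.close();
}

main().catch(console.error);
```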

Expected Behavior

No response

Any Additional Comments?

No response

open-digger-bot[bot] commented 7 months ago

This issue has not received a reply for 24 hours, please pay attention to this issue: @gymgym1212 @xiaoya-yaya @xgdyp

frank-zsy commented 7 months ago

This exception occurs only when calculating duration-related metrics for all repos in the table, which is a very time-consuming process. This is not really a bug: the memory for our ClickHouse instance is only 128 GB, so we should avoid these kinds of memory-consuming queries.
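
If such all-repo scans do need to run occasionally, one mitigation is to narrow the scanned range and let ClickHouse spill the aggregation to disk instead of holding it in memory, via the standard max_memory_usage and max_bytes_before_external_group_by settings. A minimal sketch, reusing the hypothetical table and column names from the sketch above; the thresholds are illustrative, not values tuned for this instance.

```typescript
import { createClient } from '@clickhouse/client';

// Placeholder connection config, as in the sketch above.
const client = createClient({ host: 'http://localhost:8123' });

async function main() {
  const resultSet = await client.query({
    query: `
      SELECT repo_name, count() AS issue_count
      FROM events                          -- hypothetical table name
      WHERE type = 'IssuesEvent'           -- hypothetical event-type filter
        AND created_at >= '2023-01-01'     -- hypothetical column; narrows the scan instead of reading all history
      GROUP BY repo_name
      ORDER BY issue_count DESC
      LIMIT 10
      SETTINGS
        max_memory_usage = 32000000000,                    -- cap this query at ~32 GB
        max_bytes_before_external_group_by = 16000000000   -- spill GROUP BY state to disk beyond ~16 GB
    `,
    format: 'JSONEachRow',
  });
  console.log(await resultSet.json());
  await client.close();
}

main().catch(console.error);
```

Note that the error above is the user-level limit enforced by the OvercommitTracker, so lowering a single query's footprint only helps if other concurrent queries are not already consuming most of the 128 GB.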