toncenter / ton-indexer


TON archive node indexer query timeout error #93

Open vataops opened 1 day ago

vataops commented 1 day ago

Postgres DB server spec

We are currently running two servers: one for a TON archive node and another dedicated to indexing. The index server hosts PostgreSQL and an API, and we use workers there to gather chain data from the archive node.

The issue we’re facing is that the data has grown to almost 1 TB, and queries to the /api/v3/masterchainInfo method take an extremely long time. With the timeout set to 50 seconds, it would sometimes only just manage to respond after about 40 seconds, and now it no longer responds within the timeout at all.

There doesn’t seem to be any connection overload. Even after removing Redis, the same error occurs.

Our DB server isn’t underpowered, so I’m not sure why the DB queries are almost stalling. Could you help us figure out what’s causing this issue?

We’re running it with Docker and have raised the resources available to PostgreSQL with the following configuration.

postgres -c max_connections=200 -c shared_buffers=40GB -c work_mem=2048MB -c max_wal_size=1GB -c maintenance_work_mem=10GB -c effective_cache_size=60GB -c wal_buffers=16MB -c tcp_keepalives_idle=60 -c tcp_keepalives_interval=60 -c tcp_keepalives_count=60
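
A quick sanity check, assuming direct psql access to the container, is to ask the running server which of these values it actually applied (this uses the standard pg_settings view, nothing ton-indexer specific):

-- Confirm the overridden settings the running server actually uses
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_connections', 'shared_buffers', 'work_mem',
               'maintenance_work_mem', 'effective_cache_size', 'max_wal_size');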
curl -X 'GET' \
  'http://{ip}:8081/api/v3/masterchainInfo?x_timeout=30000' \
  -H 'accept: application/json'
{"last":{"workchain":-1,"shard":"8000000000000000","seqno":34602059,"root_hash":"26T0gL1pU1lZNJ2y6uBPy+N+jV4gMCF4FmWIw8e2zMA=","file_hash":"zaztgvWfQrJLVA4PmkM5YSpZMwy95LHKHsWE2bsfNHU=","global_id":-239,"version":0,"after_merge":false,"before_split":false,"after_split":false,"want_merge":true,"want_split":false,"key_block":false,"vert_seqno_incr":false,"flags":1,"gen_utime":"1701767425","start_lt":"43044529000000","end_lt":"43044529000004","validator_list_hash_short":279988250,"gen_catchain_seqno":508027,"min_ref_mc_seqno":34602055,"prev_key_block_seqno":34591413,"vert_seqno":1,"master_ref_seqno":0,"rand_seed":"VvTz327T3cG8uzSnC0mWYFwN0uaqsFadwjhwikcJXHQ=","created_by":"g068PJk9csmaVnXWewLZ71WKWH8vJAoxqlrM9MreVGM=","tx_count":3,"masterchain_block_ref":{"workchain":-1,"shard":"8000000000000000","seqno":34602059},"prev_blocks":[{"workchain":-1,"shard":"8000000000000000","seqno":34602058}]},"first":{"workchain":-1,"shard":"8000000000000000","seqno":2,"root_hash":"4bzgnFItQjTVEMYL9c/VHshMJttG9gDIXCzsMQdjKSU=","file_hash":"2gOSTo8fuMWgA18snVD1RUtTfpU5LvCQWOOQ16Z7w5Y=","global_id":-239,"version":0,"after_merge":false,"before_split":false,"after_split":false,"want_merge":false,"want_split":false,"key_block":false,"vert_seqno_incr":false,"flags":0,"gen_utime":"1573822391","start_lt":"2000000","end_lt":"2000012","validator_list_hash_short":617944277,"gen_catchain_seqno":1,"min_ref_mc_seqno":2,"prev_key_block_seqno":0,"vert_seqno":0,"master_ref_seqno":0,"rand_seed":"/KR6fn9jrnzUyz5trC9buSH4iaWdCMpvYtzPaPR79mI=","created_by":"6MC3PyuC80JUx8L+cNE4TnXEZ1i3TeCKxYFV5ZP9eKw=","tx_count":11,"masterchain_block_ref":{"workchain":-1,"shard":"8000000000000000","seqno":2},"prev_blocks":[{"workchain":-1,"shard":"8000000000000000","seqno":1}]}}
hsmang commented 1 day ago

@vataops Hello!! I ran into the same situation. Personally, the first thing I did was add DB indexes. Would you like to give it a try? Of course, this probably isn't the full answer...

CREATE INDEX CONCURRENTLY mc_block_seq_index ON transactions (mc_block_seqno);
CREATE INDEX CONCURRENTLY messages_index_1 ON messages (msg_hash, direction);
CREATE INDEX CONCURRENTLY latest_account_states_index ON latest_account_states (account);
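
To confirm afterwards that the planner actually picks these up, one rough check is the standard index statistics view (the index names below are just the ones from the statements above):

-- Scan counts should start rising once the API queries hit the new indexes
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE indexrelname IN ('mc_block_seq_index',
                       'messages_index_1',
                       'latest_account_states_index');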

Nice to run into a fellow Korean here!

vataops commented 17 hours ago

@hsmang Oh, you're Korean too, haha. Sounds like you've been through the same struggle. Thank you!

hsmang commented 17 hours ago

@vataops Yes, haha. Adding the indexes should fix the speed pretty much right away! That said, I'm a bit cautious, since there may well be a reason the project didn't ship these indexes in the first place...