Closed sumya-stablelab closed 7 months ago
The indexer ran without any problems from checkpoint 0 to 8522889, then suddenly started throwing this error and looping over and over:
Apr 02 08:05:16 sui sui-indexer[1251320]: 2024-04-02T08:05:16.199308Z ERROR sui_indexer::store::pg_indexer_store: Error with persisting data into DB: DatabaseError(Unknown, "canceling statement due to statement timeout")
Apr 02 08:05:16 sui sui-indexer[1251320]: 2024-04-02T08:05:16.817278Z ERROR commit_checkpoints{first=8522890 last=8522890}: sui_indexer::store::pg_partition_manager: Error with persisting data into DB: DatabaseError(Unknown, "canceling statement due to statement timeout")
Apr 02 08:05:16 sui sui-indexer[1251320]: 2024-04-02T08:05:16.817400Z ERROR commit_checkpoints{first=8522890 last=8522890}: sui_indexer::handlers::committer: Failed to advance epoch with error: Indexer failed to commit changes to PostgresDB with error: `canceling statement due to statement timeout`
Apr 02 08:05:16 sui sui-indexer[1251320]: 2024-04-02T08:05:16.817416Z ERROR commit_checkpoints{first=8522890 last=8522890}: telemetry_subscribers: panicked at /root/sui/crates/sui-indexer/src/handlers/committer.rs:165:14:
Apr 02 08:05:16 sui sui-indexer[1251320]: Advancing epochs in DB should not fail.: PostgresWriteError("canceling statement due to statement timeout") panic.file="/root/sui/crates/sui-indexer/src/handlers/committer.rs" panic.line=165 panic.column=14
Apr 02 08:05:16 sui sui-indexer[1251320]: thread 'tokio-runtime-worker' panicked at /root/sui/crates/sui-indexer/src/handlers/committer.rs:165:14:
Apr 02 08:05:16 sui sui-indexer[1251320]: Advancing epochs in DB should not fail.: PostgresWriteError("canceling statement due to statement timeout")
Apr 02 08:05:16 sui sui-indexer[1251320]: note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
There is still 1.2T of free disk space, and the Postgres configuration is the default.
Hello, I have the same problem. Have you solved it?
Hi, this is due to the statement timeout. You can override the env vars to:
DB_CONNECTION_TIMEOUT 3600
DB_STATEMENT_TIMEOUT 3600
These defaults have also been updated to 3600 on the latest main, so updating the binary to the latest main should resolve this as well.
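As a minimal sketch of the override above, the two env vars can be exported before launching the indexer from the same shell. This assumes the values are plain numbers as shown in the comment; if sui-indexer runs under systemd (as the journal-style log lines suggest), set them via `Environment=` lines in the unit file instead.

```shell
#!/bin/sh
# Timeout overrides from the comment above (value 3600, per the maintainer).
# If the indexer is a systemd service, use these lines in the unit file instead:
#   Environment=DB_CONNECTION_TIMEOUT=3600
#   Environment=DB_STATEMENT_TIMEOUT=3600
export DB_CONNECTION_TIMEOUT=3600
export DB_STATEMENT_TIMEOUT=3600

# Confirm the values are set before starting the indexer from this shell.
echo "DB_CONNECTION_TIMEOUT=$DB_CONNECTION_TIMEOUT"
echo "DB_STATEMENT_TIMEOUT=$DB_STATEMENT_TIMEOUT"
```

After a systemd unit change, remember to run `systemctl daemon-reload` and restart the service so the new environment takes effect.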
Closing this issue now; feel free to re-open it if it's not resolved.