Ord 0.21.0 very slow to index #3999

Open gmart7t2 opened 3 days ago

gmart7t2 commented 3 days ago

I am building a pair of ord 0.21.0 indexes:

$ ps -wwwef | grep ord21 | grep -v grep
gm       3114880 3353832 92 Oct12 pts/12   2-07:01:50 /usr/local/bin/ord21 --height-limit 870001 --index-runes --index /home/gm/.local/share/ord/index-0.21-without.redb server --http-port 51021 --csp-origin https://without21.ordstuff.info --decompress
gm       3115151 3353612 90 Oct12 pts/11   2-06:17:16 /usr/local/bin/ord21 --height-limit 870001 --index-runes --index /home/gm/.local/share/ord/index-0.21-with.redb --index-sats --index-addresses server --http-port 50021 --csp-origin https://with21.ordstuff.info --decompress
$ tail -f ord21[yn].log
==> ord21n.log <==
[2024-10-14T22:30:16Z INFO  ord::index::updater] Wrote 0 sat ranges from 0 outputs in 267 ms
[2024-10-14T22:30:16Z INFO  ord::index::updater] Block 814996 at 2023-11-02 17:55:57 UTC with 3176 transactions…
[2024-10-14T22:30:17Z INFO  ord::index::updater] Wrote 0 sat ranges from 0 outputs in 788 ms
[2024-10-14T22:30:17Z INFO  ord::index::updater] Block 814997 at 2023-11-02 17:57:34 UTC with 1968 transactions…
[2024-10-14T22:30:18Z INFO  ord::index::updater] Wrote 0 sat ranges from 0 outputs in 1378 ms
[2024-10-14T22:30:18Z INFO  ord::index::updater] Block 814998 at 2023-11-02 17:57:44 UTC with 1339 transactions…
[2024-10-14T22:30:19Z INFO  ord::index::updater] Wrote 0 sat ranges from 0 outputs in 389 ms
[2024-10-14T22:30:19Z INFO  ord::index::updater] Block 814999 at 2023-11-02 18:13:12 UTC with 3156 transactions…
[2024-10-14T22:30:19Z INFO  ord::index::updater] Wrote 0 sat ranges from 0 outputs in 196 ms
[2024-10-14T22:30:19Z INFO  ord::index::updater] Committing at block height 815000, 0 outputs traversed, 8166981 in map, 254420874 cached

==> ord21y.log <==
[2024-10-15T12:26:40Z INFO  ord::index::updater] Wrote 195672 sat ranges from 1657 outputs in 17 ms
[2024-10-15T12:26:40Z INFO  ord::index::updater] Block 429996 at 2016-09-15 22:30:07 UTC with 35 transactions…
[2024-10-15T12:26:40Z INFO  ord::index::updater] Wrote 16290 sat ranges from 81 outputs in 1 ms
[2024-10-15T12:26:40Z INFO  ord::index::updater] Block 429997 at 2016-09-15 22:37:13 UTC with 972 transactions…
[2024-10-15T12:26:40Z INFO  ord::index::updater] Wrote 241416 sat ranges from 2039 outputs in 31 ms
[2024-10-15T12:26:40Z INFO  ord::index::updater] Block 429998 at 2016-09-15 23:19:45 UTC with 1485 transactions…
[2024-10-15T12:26:40Z INFO  ord::index::updater] Wrote 1759050 sat ranges from 3228 outputs in 90 ms
[2024-10-15T12:26:41Z INFO  ord::index::updater] Block 429999 at 2016-09-15 23:21:13 UTC with 609 transactions…
[2024-10-15T12:26:41Z INFO  ord::index::updater] Wrote 323202 sat ranges from 1190 outputs in 138 ms
[2024-10-15T12:26:41Z INFO  ord::index::updater] Committing at block height 430000, 18419984 outputs traversed, 3920730 in map, 343196804 cached

Note that the 'no sats' index started a commit at 10:30pm yesterday and it's now 2:40pm, so that single commit has been running for over 16 hours. I've never seen a commit take more than 2 hours before.
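
For reference, a rough way to eyeball commit durations from these logs, assuming the first line logged after a "Committing" entry only appears once the commit has finished:

$ # prints timestamp pairs: when each commit started, and the first line logged after it
$ grep -A1 'Committing at block height' ord21n.log | grep -oE '[0-9T:-]+Z'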

The index files are both currently 25G:

$ ls -lh index-0.21-with*
-rw-r--r-- 1 gm gm 25G Oct 15 07:49 index-0.21-with.redb
-rw-r--r-- 1 gm gm 25G Oct 15 07:49 index-0.21-without.redb
$ df .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       1.8T  1.3T  570G  69% /home
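
To check whether a long commit is still making progress on disk, the index file sizes and free space can be polled while it runs (the interval here is arbitrary):

$ watch -n 300 'ls -lh index-0.21-with*.redb; df -h .'   # re-check sizes and free space every 5 minutes
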
gmart7t2 commented 3 days ago

This is likely caused by the fractal ord server I am running on the same server. Fractal sees frequent reorgs, which trigger a lot of disk I/O to restore from the snapshot.

I'll shut down the fractal ord server and see if that fixes the problem.
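
In the meantime, watching per-process disk I/O should show whether something else really is competing for the disk (this assumes the sysstat package, which provides pidstat; iotop is an interactive alternative):

$ pidstat -d 5     # per-process disk reads/writes, sampled every 5 seconds
$ sudo iotop -o    # only shows processes currently doing I/O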

so7ow commented 3 days ago

I was also running a couple of scratch indexes on 0.21.0 (one was sats/runes/addresses and the other was runes/addresses). After running overnight, both of them were stuck committing at block 350000 this morning. I watched them sit there for a couple of hours, then pressed ctrl-c and watched them try to shut down for another couple of hours before killing them. Seems like something's up here!

so7ow commented 3 days ago

This is likely caused by the fractal ord server I am running on the same server.

No fractal running on mine, although I did just update Bitcoin Core to v28.

somethingbeta commented 3 days ago

I got past 350000 by setting commit-interval to 10; currently near 412000. This is an index build from scratch with addresses/runes on a Windows machine, commit interval 10, Bitcoin 26.1.
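
For anyone trying the same thing, the option is --commit-interval, and like the other flags in this thread it goes before the subcommand. Roughly (the path is illustrative):

$ ord --index-runes --index-addresses --commit-interval 10 --index /path/to/index.redb index update   # commit every 10 blocks instead of the default 5000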

so7ow commented 3 days ago

I enabled logging and restarted from scratch again, with the default commit interval. I'm in the 300000s now and it's starting to get pretty slow committing to disk.

My sats/runes/addresses run has been working on committing at 305000 for over 30 mins now.
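
For reference, enabling logging just means setting RUST_LOG, since ord uses env_logger and writes its log output to stderr; a scratch runes/addresses run would be captured with something like this (the file name is illustrative):

$ RUST_LOG=info ord --index-runes --index-addresses index update 2> ord-runes-addr.log   # info-level progress lines end up in the file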

so7ow commented 2 days ago

I got past 350000 by setting commit-interval to 10; currently near 412000. This is an index build from scratch with addresses/runes on a Windows machine, commit interval 10, Bitcoin 26.1.

I restarted from scratch again, this time with --commit-interval 20, and I've gotten farther in a couple of hours than I got overnight with the default setting. Something sure seems off here!

cryptoni9n commented 2 days ago

this may be related to #3804

so7ow commented 2 days ago

this may be related to #3804

Perhaps it's related, but it's a new set of symptoms, for me anyway. My system, which has been able to crank out a runes/addresses index in 12 hours on 0.20.0 with the default 5000 commit interval, was stuck at block 350000 overnight on 0.21.0, and even with commit-interval 20 it's taking way too long. I'm coming up on a full day and I've only reached block 640000 or so for a runes/addresses/no-sats index.

raphjaph commented 1 day ago

We didn't change any indexing logic from 0.20.1 to 0.21.0 so this is weird. Let me try to reindex on our dev server as well.

ep150de commented 1 day ago

I get the same noticeable slowness too.

gus4rs commented 1 day ago

I can confirm it is slow as f*ck. Previously I was able to build a full index in 4 days. Now, 4 days later, it's still indexing blocks from 2016.

[screenshot attached]

gus4rs commented 14 hours ago

We didn't change any indexing logic from 0.20.1 to 0.21.0

The change was likely introduced earlier: nobody re-indexed when 0.20.1 was released because the DB schema version wasn't bumped.

Suggestion: set up a representative nightly performance test and plot the results. Take a look, for example, at https://github.com/sharkdp/hyperfine.
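
Something along these lines, for example, with the block height and paths as placeholders (a synced bitcoind is assumed, as for any ord index run):

$ # rebuild a fixed slice of the chain from scratch on each run and record the timings
$ hyperfine --runs 3 \
    --prepare 'rm -f /tmp/ord-bench.redb' \
    --export-json ord-index-bench.json \
    'ord --height-limit 100000 --index /tmp/ord-bench.redb index update'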

so7ow commented 7 hours ago

nobody re-indexed when 0.20.1 was released because the DB schema version wasn't bumped

I actually did reindex with 0.20.1 due to the issue with address indexes crashing, and I didn't see this performance issue in that version.

gus4rs commented 4 hours ago

nobody re-indexed when 0.20.1 was released because the DB schema version wasn't bumped

I actually did reindex with 0.20.1 due to the issue with address indexes crashing, and I didn't see this performance issue in that version.

Did you do a --index-transactions --index-addresses --index-sats --index-runes build?

so7ow commented 4 hours ago

Did you do a --index-transactions --index-addresses --index-sats --index-runes build?

No, just addresses/runes. Same as I'm doing on 0.21.0, which has been running for days now. (It used to finish in ~12 hours.)