OffchainLabs / nitro

Nitro goes vroom and fixes everything

Can't sync fresh node from snapshot #2231

Closed: hdiass closed this issue 5 months ago

hdiass commented 7 months ago

Describe the bug: Can't sync a fresh node from snapshot, on v2.3.3-6a1c1a7. Yesterday I booted a node from scratch using the snapshot and it can't sync.

INFO [04-12|11:53:43.146] latest assertion not yet in our node     staker=0x0000000000000000000000000000000000000000 assertion=13127 state="{BlockHash:0x917091e908d7fb9e281c53a1acec0cee529ec13dbebb0542375dfb95b0832333 SendRoot:0x984a1d7bf690b0b94ed7a951acdf3c7a5057b0b6b21f7103d3fb9d61f79e3027 Batch:586484 PosInBatch:89}"
INFO [04-12|11:53:44.100] catching up to chain batches             localBatches=582,290 target=586,485
WARN [04-12|11:54:04.284] error reading inbox                      err="failed to get blobs: error fetching blobs in 19501975 l1 block: expected at least 6 blobs for slot 8702476 but only got 0"

Eth clients: geth v1.13.14, prysm v5.0.3 using checkpoint sync and --enable-experimental-backfill
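
For anyone trying to reproduce this setup, a prysm invocation along those lines might look like the sketch below. This is an assumption-laden example, not the reporter's actual command; the data dir, JWT path, execution endpoint, and checkpoint-sync URL are placeholders. Only the last flag is the one under discussion.

    # Hypothetical prysm v5 beacon-chain startup matching the reported setup.
    # All paths and URLs are placeholders.
    beacon-chain \
      --mainnet \
      --datadir=/data/prysm \
      --execution-endpoint=http://localhost:8551 \
      --jwt-secret=/secrets/jwt.hex \
      --checkpoint-sync-url=https://beaconstate.example.org \
      --genesis-beacon-api-url=https://beaconstate.example.org \
      --enable-experimental-backfill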

Nitro args used (a rough docker equivalent follows the list):

    - --persistent.chain=/database/
    - --parent-chain.blob-client.beacon-url=https://eth-mainnet-beacon
    - --http.port=8545
    - --http.api=net,web3,eth
    - --http.corsdomain=*
    - --http.addr=0.0.0.0
    - --http.vhosts=*
    - --ws.port=8546
    - --ws.addr=0.0.0.0
    - --ws.origins=*
    - --execution.rpc.gas-cap=0
    - --execution.rpc.tx-fee-cap=0
    - --metrics
    - --metrics-server.addr=0.0.0.0
    - --metrics-server.port=6060
    - --parent-chain.connection.url=https://eth-mainnet
    - --chain.id=42161
    - --init.url=https://snapshot.arbitrum.foundation/arb1/nitro-pruned.tar
    - --init.download-path=/database/snapshot.tar
    - --rpc.max-batch-response-size=200000000
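
For reference, these args map onto a docker invocation roughly like the sketch below; the host volume path is a placeholder, and the image tag is the version reported above. Treat it as a sketch of the same configuration, not the reporter's exact command.

    # Approximate docker equivalent of the flags above (sketch only).
    docker run -d \
      -v /data/arbitrum:/database \
      -p 8545:8545 -p 8546:8546 -p 6060:6060 \
      offchainlabs/nitro-node:v2.3.3-6a1c1a7 \
      --persistent.chain=/database/ \
      --chain.id=42161 \
      --parent-chain.connection.url=https://eth-mainnet \
      --parent-chain.blob-client.beacon-url=https://eth-mainnet-beacon \
      --init.url=https://snapshot.arbitrum.foundation/arb1/nitro-pruned.tar \
      --init.download-path=/database/snapshot.tar \
      --http.addr=0.0.0.0 --http.port=8545 --http.api=net,web3,eth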

To Reproduce: Steps to reproduce the behavior:

  1. Boot node from scratch using snapshot


NicolasWent commented 7 months ago

Hello,

I have the exact same issue; did you find a solution?

Using reth and lighthouse as node clients

miki-bgd-011 commented 7 months ago

I too have the same issue!

Prysm 5.0.3 + geth version 1.13.14-stable-2bd6bd01

NicolasWent commented 7 months ago

Are you guys using offchainlabs/nitro-node:v2.3.3-6a1c1a7 ?

Because I was using offchainlabs/nitro-node:v2.3.2-064fa11, but when I switched to the latest one, offchainlabs/nitro-node:v2.3.3-6a1c1a7, I didn't see the error anymore.

I am not sure that my node is syncing correctly, though.

EDIT: Actually the error is still there; it appeared after an hour of running the node.

miki-bgd-011 commented 7 months ago

I am getting the same error with v2.3.3-6a1c1a7

nisdas commented 7 months ago

Hey guys, the reason it's unable to sync is that the snapshot is old and the nitro node is requesting blobs that have already expired. As a way to unblock your node, you can try using an archival beacon RPC provider; they will be able to provide the blobs in the meantime: https://docs.arbitrum.io/node-running/reference/ethereum-beacon-rpc-providers#list-of-ethereum-beacon-chain-rpc-providers
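
A quick way to verify this diagnosis is to ask your beacon endpoint for the blob sidecars of the failing slot via the standard Beacon API route. A minimal sketch, assuming your beacon REST endpoint is at BEACON_URL (a pruned node should return an empty list or an error for an expired slot, while an archival provider should return the sidecars):

    # Check whether the beacon node can still serve blobs for the stuck slot.
    BEACON_URL=http://localhost:3500   # placeholder: your beacon REST endpoint
    curl -s "$BEACON_URL/eth/v1/beacon/blob_sidecars/8702476" | jq '.data | length'
    # 0 -> blobs pruned/unavailable: nitro keeps failing with "only got 0"
    # 6 -> blobs available: sync should get past this slot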

miki-bgd-011 commented 7 months ago

Hey guys, the reason it's unable to sync is that the snapshot is old and the nitro node is requesting blobs that have already expired. As a way to unblock your node, you can try using an archival beacon RPC provider; they will be able to provide the blobs in the meantime: https://docs.arbitrum.io/node-running/reference/ethereum-beacon-rpc-providers#list-of-ethereum-beacon-chain-rpc-providers

This did not work for me.

ZYS980327 commented 6 months ago

@hdiass, hi, do you have a better solution?

ZYS980327 commented 6 months ago

@hdiass I am using the same client and beacon chain as you.

ZYS980327 commented 6 months ago

@hdiass @nisdas There seems to be a problem with slot 8702476

ZYS980327 commented 6 months ago

INFO [04-15|01:57:24.481] Loaded most recent local block number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
WARN [04-15|01:57:24.498] Head state missing, repairing number=193,592,599 hash=c758a4..c8df38 snaproot=f8707d..e937dd
INFO [04-15|01:57:27.743] Loaded most recent local header number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
INFO [04-15|01:57:27.743] Loaded most recent local block number=193,592,472 hash=0f9973..992472 td=171,384,656 age=3w21h46m
INFO [04-15|01:57:27.743] Loaded most recent local snap block number=193,592,599 hash=c758a4..c8df38 td=171,384,783 age=3w21h46m
WARN [04-15|01:57:27.763] Enabling snapshot recovery chainhead=193,592,472 diskbase=193,592,472
INFO [04-15|01:57:27.764] loaded genesis block from database number=22,207,817 hash=7d237d..c07986
INFO [04-15|01:57:27.764] Initialized transaction indexer limit=0
INFO [04-15|01:57:27.879] Using leveldb as the backing database
INFO [04-15|01:57:27.879] Allocated cache and file handles database=/home/user/.arbitrum/arb1/nitro/arbitrumdata cache=16.00MiB handles=16
INFO [04-15|01:57:28.144] Using LevelDB as the backing database
INFO [04-15|01:57:28.178] Using leveldb as the backing database
INFO [04-15|01:57:28.178] Allocated cache and file handles database=/home/user/.arbitrum/arb1/nitro/classic-msg cache=16.00MiB handles=16 readonly=true
INFO [04-15|01:57:28.179] Using LevelDB as the backing database
INFO [04-15|01:57:28.184] running as validator txSender= actingAsWallet=nil whitelisted=false strategy=Watchtower
INFO [04-15|01:57:28.191] Starting peer-to-peer node instance=nitro/v2.3.2-064fa11/linux-amd64/go1.20.14
WARN [04-15|01:57:28.191] P2P server will be useless, neither dialing nor listening
INFO [04-15|01:57:28.213] HTTP server started endpoint=[::]:8547 auth=false prefix= cors= vhosts=
INFO [04-15|01:57:28.213] New local node record seq=1,713,146,248,213 id=74c642d3240caa0c ip=127.0.0.1 udp=0 tcp=0
INFO [04-15|01:57:28.213] Started P2P networking self=enode://915a1959b8fdfe5e8f5be2e9a11c3590171a336eea01b42d07bae0e964d4de3b8caee4a9da733afcfebf67cec648a124d3a8b8cddf6cd9043c1728a682db6e33@127.0.0.1:0
INFO [04-15|01:57:28.238] rpc response method=eth_call logId=13 err="execution reverted" result="\"0x\"" attempt=0 args="[{\"from\":\"0x0000000000000000000000000000000000000000\",\"input\":\"0xf63a434a0000000000000000000000000000000000000000000000000000000000000000\",\"to\":\"0x5ef0d09d1e6204141b4d37530808ed19f60fba35\"}, \"latest\"]" errorData=null
INFO [04-15|01:57:28.272] validation not set up err="timeout trying to connect lastError: dial tcp :80: connect: connection refused"
INFO [04-15|01:57:28.334] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13188 state="{BlockHash:0x5e034aa3599073080d5d98fb72120cc7ac57f4364bdf28538d9a7873658f13b5 SendRoot:0x86f266ca2d5b0372d813e9ac8f4de941f2d6a072fc3817d5eb70440a4a881889 Batch:587258 PosInBatch:0}"
INFO [04-15|01:57:28.338] catching up to chain batches localBatches=582,290 target=587,258
WARN [04-15|01:57:28.508] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
WARN [04-15|01:57:29.560] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:29.961] created block l2Block=193,592,473 l2BlockHash=329fa3..ab34db
WARN [04-15|01:57:30.617] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:30.962] created block l2Block=193,592,474 l2BlockHash=30c661..e46047
WARN [04-15|01:57:31.671] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:31.963] created block l2Block=193,592,475 l2BlockHash=f6a06a..473b26
WARN [04-15|01:57:32.724] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:32.964] created block l2Block=193,592,476 l2BlockHash=60676b..b8a284
WARN [04-15|01:57:33.779] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:34.776] created block l2Block=193,592,477 l2BlockHash=364f72..6b6815
WARN [04-15|01:57:34.841] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:35.777] created block l2Block=193,592,478 l2BlockHash=2d5615..634fcc
WARN [04-15|01:57:35.896] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"
INFO [04-15|01:57:36.777] created block l2Block=193,592,479 l2BlockHash=b24185..59a02c
WARN [04-15|01:57:36.960] error reading inbox err="failed to get blobs: expected at least 6 blobs for slot 8702476 but only got 0"

ZYS980327 commented 6 months ago

@nisdas @hdiass The --parent-chain.blob-client.beacon-url flag in my command was changed from my local prysm RPC to QuickNode's Ethereum beacon RPC. It looks like it has synced a bit, and there was no problem initializing from the snapshot at startup and syncing.

INFO [04-15|02:14:46.197] Unindexing transactions blocks=19,587,000 txs=22,043,742 total=67,362,073 elapsed=6m1.193s
INFO [04-15|02:14:46.730] created block l2Block=193,600,621 l2BlockHash=391b29..83c418
INFO [04-15|02:14:47.153] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13189 state="{BlockHash:0x1c36f86ccde2f6c2c07abfd1a6d1b77e4c66bea1e6cc5e3a56bc662b8d4db456 SendRoot:0x2dd0cab6836e0433d8dd581770f08487fd79f83c4bd59314aaf956abb0e0d74d Batch:587272 PosInBatch:758}"
INFO [04-15|02:14:47.290] catching up to chain batches localBatches=582,609 target=587,273

ZYS980327 commented 6 months ago

@nisdas, hi. But I still want to use a local beacon RPC; how should I modify my prysm setup?

nisdas commented 6 months ago

@ZYS980327 Hey, after the arbitrum node is synced you can use your local prysm node. You only need the archival blobs if the snapshot is old
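
Put differently, the workaround is two-phased. A sketch of the idea, with the archival URL as a placeholder for one of the providers in the docs page linked above, and 3500 assumed to be the local prysm REST port (as elsewhere in this thread):

    # Phase 1: catch up past the expired-blob range using an archival beacon RPC.
    #          (archival-beacon.example.com is a placeholder.)
    ... --parent-chain.blob-client.beacon-url=https://archival-beacon.example.com ...

    # Phase 2: once caught up to (or near) head, restart pointing back at the
    #          local prysm REST endpoint, keeping the same database directory
    #          so nitro does not re-initialize from the snapshot.
    ... --parent-chain.blob-client.beacon-url=http://localhost:3500 ...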

nisdas commented 6 months ago

@miki-bgd-011 Do you have any specific logs for this?

ZYS980327 commented 6 months ago

@nisdas

INFO [04-15|04:08:48.545] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13191 state="{BlockHash:0x603c342a38a945b720e14601c6ae1a65257766ae93fe2aef4dd21e26b1ba77a2 SendRoot:0xdc2194034507b095e60dcffacb8a5e1d7713b2919f94e30de12b4233d8956554 Batch:587299 PosInBatch:0}"
INFO [04-15|04:08:49.143] created block l2Block=193,859,337 l2BlockHash=5dfeec..8060e2
INFO [04-15|04:08:50.143] created block l2Block=193,859,399 l2BlockHash=046673..93cce4
INFO [04-15|04:08:50.944] catching up to chain batches localBatches=584,280 target=587,299
INFO [04-15|04:08:51.146] created block l2Block=193,859,463 l2BlockHash=060a19..4d0bfb
INFO [04-15|04:08:52.147] created block l2Block=193,859,466 l2BlockHash=6fded8..03cc3d
INFO [04-15|04:08:53.148] created block l2Block=193,859,501 l2BlockHash=07db3b..87af59
INFO [04-15|04:08:54.148] created block l2Block=193,859,552 l2BlockHash=801eec..0e4b55
INFO [04-15|04:08:55.149] created block l2Block=193,859,587 l2BlockHash=c4bec7..ad4eea
INFO [04-15|04:08:56.157] created block l2Block=193,859,628 l2BlockHash=bfa06f..f110cd
INFO [04-15|04:08:57.157] created block l2Block=193,859,690 l2BlockHash=ee2ee3..57d9b2
INFO [04-15|04:08:58.157] created block l2Block=193,859,757 l2BlockHash=fe04a7..dc46c4
INFO [04-15|04:08:59.158] created block l2Block=193,859,830 l2BlockHash=5f6a0e..a86d3a
INFO [04-15|04:09:00.158] created block l2Block=193,859,862 l2BlockHash=c679fb..3ada00
INFO [04-15|04:09:01.159] created block l2Block=193,859,918 l2BlockHash=bb36e7..f739a9
INFO [04-15|04:09:02.159] created block l2Block=193,859,983 l2BlockHash=04ee8d..5c5806
INFO [04-15|04:09:03.160] created block l2Block=193,860,049 l2BlockHash=ec4ef2..0bf0df
INFO [04-15|04:09:04.160] created block l2Block=193,860,090 l2BlockHash=b4da43..443822
INFO [04-15|04:09:05.160] created block l2Block=193,860,143 l2BlockHash=6dafd6..8844b1
INFO [04-15|04:09:06.162] created block l2Block=193,860,181 l2BlockHash=cd7b7f..8e42d0
INFO [04-15|04:09:07.162] created block l2Block=193,860,251 l2BlockHash=dc33db..3cd5b4
INFO [04-15|04:09:08.163] created block l2Block=193,860,301 l2BlockHash=6a6531..d88671
INFO [04-15|04:09:09.164] created block l2Block=193,860,358 l2BlockHash=6036b6..994567
INFO [04-15|04:09:10.164] created block l2Block=193,860,422 l2BlockHash=25cee4..9a9e32
INFO [04-15|04:09:11.165] created block l2Block=193,860,488 l2BlockHash=6cd25b..f03507
INFO [04-15|04:09:12.165] created block l2Block=193,860,557 l2BlockHash=2f1f69..d0b812
INFO [04-15|04:09:13.166] created block l2Block=193,860,597 l2BlockHash=deadf0..0583e5
INFO [04-15|04:09:14.166] created block l2Block=193,860,660 l2BlockHash=8bac0c..58eb9f
INFO [04-15|04:09:15.167] created block l2Block=193,860,727 l2BlockHash=c76522..198495
INFO [04-15|04:09:16.168] created block l2Block=193,860,784 l2BlockHash=6e38a9..c00150
INFO [04-15|04:09:17.170] created block l2Block=193,860,852 l2BlockHash=a7d1e7..f8e4e4
INFO [04-15|04:09:18.170] created block l2Block=193,860,893 l2BlockHash=f6ca67..990ebb
INFO [04-15|04:09:19.171] created block l2Block=193,860,959 l2BlockHash=1f23a0..5e2fc5
INFO [04-15|04:09:20.171] created block l2Block=193,861,039 l2BlockHash=4bf55c..54e8b6
INFO [04-15|04:09:21.171] created block l2Block=193,861,103 l2BlockHash=02fffe..f3ae48
INFO [04-15|04:09:22.171] created block l2Block=193,861,170 l2BlockHash=d3f6d4..6fb5bd
INFO [04-15|04:09:23.174] created block l2Block=193,861,219 l2BlockHash=0d59e5..b4250b
INFO [04-15|04:09:24.175] created block l2Block=193,861,288 l2BlockHash=7d3986..0fa432
INFO [04-15|04:09:25.176] created block l2Block=193,861,362 l2BlockHash=d42c31..f6d7de
INFO [04-15|04:09:26.176] created block l2Block=193,861,431 l2BlockHash=47d00e..184dbe
INFO [04-15|04:09:27.177] created block l2Block=193,861,510 l2BlockHash=417799..2c24b5
INFO [04-15|04:09:28.177] created block l2Block=193,861,591 l2BlockHash=103965..de5158
INFO [04-15|04:09:29.178] created block l2Block=193,861,629 l2BlockHash=2c2372..3510aa
INFO [04-15|04:09:30.179] created block l2Block=193,861,690 l2BlockHash=a35411..726f8a
INFO [04-15|04:09:31.180] created block l2Block=193,861,732 l2BlockHash=b9e16c..0bcfbe
INFO [04-15|04:09:32.181] created block l2Block=193,861,820 l2BlockHash=a3938e..7d908d
INFO [04-15|04:09:33.182] created block l2Block=193,861,889 l2BlockHash=e8597e..c0985e
INFO [04-15|04:09:34.182] created block l2Block=193,861,963 l2BlockHash=eef03e..5d6836
INFO [04-15|04:09:35.183] created block l2Block=193,862,034 l2BlockHash=644601..2978fa
WARN [04-15|04:09:35.782] error reading inbox err="failed to get blobs: error calling beacon client in blobSidecars: unexpected end of JSON input"

ZYS980327 commented 6 months ago

@nisdas But I don't know how to restart; I can only add --init.url and resync from the snapshot

ZYS980327 commented 6 months ago

@nisdas And after starting the arbitrum node, connections to port 8547 to get block data are refused

nisdas commented 6 months ago

You would just need to replace this flag --parent-chain.blob-client.beacon-url @ZYS980327

ZYS980327 commented 6 months ago

@nisdas

docker run -d --privileged --rm -it -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ -p 0.0.0.0:8547:8547 -p 0.0.0.0:8548:8548 offchainlabs/nitro-node:v2.3.2-064fa11 --parent-chain.connection.url http://10.150.20.11:8545 --chain.id=42161 --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 --http.api=net,web3,eth,arb,debug --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --init.url="file:///usr/local/nitro-snap/nitro-pruned.tar" --init.download-path=/usr/local/nitro-snap/snapshot-8547.tar

After switching to the local one, it still synchronizes from the original snapshot, port 8547 refuses connections, and the blobs still cannot be fetched. I don't know if --init.download-path is doing anything; no such file has been generated so far.

nisdas commented 6 months ago

Is the prysm node running? What are your prysm and arbitrum logs?

ZYS980327 commented 6 months ago

(screenshot)

ZYS980327 commented 6 months ago

prysm: (screenshot)

ZYS980327 commented 6 months ago

(screenshot)

ZYS980327 commented 6 months ago

@nisdas It's always slot 8702476

nisdas commented 6 months ago

Does your prysm node have peers? It appears to be stuck for some reason.

ZYS980327 commented 6 months ago

Yes: (screenshot)

ZYS980327 commented 6 months ago

@nisdas Prysm is normal

nisdas commented 6 months ago

@ZYS980327 This particular slot was 3 weeks old: https://beaconcha.in/slot/8702746

Just to confirm, you are just restarting the nitro node and replacing it with the local beacon RPC URL?

ZYS980327 commented 6 months ago

@nisdas Yes, and --init.download-path

nisdas commented 6 months ago

Ok, it does appear that you need to use the archival beacon RPC to sync to head. Your logs indicate that did not happen. You can only shift to prysm once this has been done. Ex: https://arbiscan.io/block/193,592,599

This is from 22 days ago.

ZYS980327 commented 6 months ago

@nisdas But how do I switch?

nisdas commented 6 months ago

@nisdas @hdiass The --parent-chain.blob-client.beacon-url flag in my command was changed from my local prysm RPC to QuickNode's Ethereum beacon RPC. It looks like it has synced a bit, and there was no problem initializing from the snapshot at startup and syncing.

INFO [04-15|02:14:46.197] Unindexing transactions blocks=19,587,000 txs=22,043,742 total=67,362,073 elapsed=6m1.193s
INFO [04-15|02:14:46.730] created block l2Block=193,600,621 l2BlockHash=391b29..83c418
INFO [04-15|02:14:47.153] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=13189 state="{BlockHash:0x1c36f86ccde2f6c2c07abfd1a6d1b77e4c66bea1e6cc5e3a56bc662b8d4db456 SendRoot:0x2dd0cab6836e0433d8dd581770f08487fd79f83c4bd59314aaf956abb0e0d74d Batch:587272 PosInBatch:758}"
INFO [04-15|02:14:47.290] catching up to chain batches localBatches=582,609 target=587,273

The same way you were using QuickNode's RPC.

ZYS980327 commented 6 months ago

@nisdas But that command resyncs from the snapshot, and after the replacement is done, it doesn't start from the block where I stopped

nisdas commented 6 months ago

@ZYS980327, this is probably at the edge of my knowledge, but I will wait for the nitro team to chime in

ZYS980327 commented 6 months ago

@nisdas Okay, thanks for the answer. How often will the official snapshot be uploaded?

ZYS980327 commented 6 months ago

@limitrinno Can you share your execution commands, and your configured geth and prysm startup commands or environments?

hdiass commented 6 months ago

This doesn't work with Chainstack, so the instructions at that URL for beacon chain providers are wrong. It's bad that these requirements are not clearly documented so that we can run entirely on our own RPCs.

hdiass commented 6 months ago

The only way I found to sync this was getting an updated snapshot.

limitrinno commented 6 months ago

The only way I found to sync this was getting an updated snapshot.

I just downloaded the snapshot today, but I'm still encountering the same error. I'm currently trying with Ankr's API.

limitrinno commented 6 months ago

@limitrinno Can you take a look at your execution commands, and the configured geth and prysm startup commands or environments

I'm currently using the latest version of the Arb Docker container on Ubuntu 20. Both sets of environment parameters and configurations are the same. The only difference is that the one experiencing issues started synchronizing recently.

kaber2 commented 6 months ago

I fully backfilled blob data in prysm:

prysm[331287]: time="2024-04-15 14:12:56" level=info msg="Backfill batches processed" batchesRemaining=12 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:57" level=info msg="Backfill batches processed" batchesRemaining=11 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:58" level=info msg="Backfill batches processed" batchesRemaining=11 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:12:58" level=info msg="Backfill batches processed" batchesRemaining=9 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=8 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=7 importable=1 imported=1 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:00" level=info msg="Backfill batches processed" batchesRemaining=7 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:01" level=info msg="Backfill batches processed" batchesRemaining=5 importable=2 imported=2 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:01" level=info msg="Backfill batches processed" batchesRemaining=5 importable=0 imported=0 prefix=backfill
prysm[331287]: time="2024-04-15 14:13:03" level=info msg="Backfill batches processed" batchesRemaining=0 importable=1 imported=1 prefix=backfill

Still receiving the same error for slot 8702476. Any suggestions besides using an external RPC?
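
One caveat worth hedging on here: backfill can only retrieve what peers are still required to serve, and under the spec default of MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096 epochs, blobs only have to be kept for roughly 18 days. A snapshot more than ~3 weeks stale (as in this thread) points at slots outside that window, so no amount of p2p backfill should be able to recover those blobs:

    # Approximate blob retention window, assuming the spec default of 4096 epochs:
    # 4096 epochs * 32 slots/epoch * 12 s/slot = 1,572,864 s
    echo $(( 4096 * 32 * 12 / 86400 ))   # => 18 (days, rounded down)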

kaber2 commented 6 months ago

Tried using an external RPC node:

--parent-chain.blob-client.beacon-url=https://ethereum-mainnet.core.chainstack.com/beacon/xxxxxxxxxxxxxxxxxxxxxxx

Still getting the same error.

sbond14 commented 6 months ago

Seems like this could very easily be fixed by the team uploading snapshots more frequently than every couple of months...

Has no one found a solution??

kaber2 commented 6 months ago

Agreed.

Generally, I tend to think this is an issue with the snapshot itself. On another node, my prysm blob directory has exactly the same entries as on this new node, and I synced successfully from it just two or three weeks ago using the same nitro version.

nisdas commented 6 months ago

@nisdas But that command resyncs from the snapshot, and after the replacement is done, it doesn't start from the block where I stopped

So nitro will ignore the snapshot if there is already a database, @ZYS980327. Is it possible you are running from a new directory on each restart?

ZYS980327 commented 6 months ago

@nisdas The command is the same every time, and occasionally the port number may be changed:

docker run -d --privileged --rm -it -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ -p 0.0.0.0:8557:8557 -p 0.0.0.0:8558:8558 offchainlabs/nitro-node:v2.3.2-064fa11 --parent-chain.connection.url http://10.150.20.11:8545 --chain.id=42161 --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 --http.api=net,web3,eth,arb,debug --http.corsdomain=* --http.addr=0.0.0.0 --http.vhosts=* --init.url="file:///nitro-pruned.tar" ...

ZYS980327 commented 6 months ago

@nisdas On Nitro 2.3.3, the developers said the handling of missing blobs was improved, but the snapshot still hits the same slot that was problematic three weeks ago, so 2.3.3 doesn't help here.

nisdas commented 6 months ago

@ZYS980327 you can't run the image with --rm; it removes all the data, including the nitro db, on every restart. I suggest running this all again without that flag.
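
A restart-safe variant of that command might look like the sketch below. The host paths are placeholders; the substantive changes are dropping --rm and bind-mounting the data directory (the thread's own logs show the in-container database lives under /home/user/.arbitrum) so the db survives restarts and the snapshot is only used on first init:

    # Sketch: persist the nitro db across restarts instead of wiping it with --rm.
    docker run -d -it \
      -v /usr/local/nitro-db:/home/user/.arbitrum \
      -v /usr/local/nitro-snap/:/usr/local/nitro-snap/ \
      -p 8547:8547 -p 8548:8548 \
      offchainlabs/nitro-node:v2.3.2-064fa11 \
      --parent-chain.connection.url=http://10.150.20.11:8545 \
      --parent-chain.blob-client.beacon-url=http://10.150.20.11:3500 \
      --chain.id=42161 \
      --http.api=net,web3,eth,arb,debug --http.addr=0.0.0.0 \
      --init.url="file:///usr/local/nitro-snap/nitro-pruned.tar"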