MysticRyuujin closed this issue 4 years ago.
After reboot:
2020-04-19 10:27:54.1786|INFO|1|Grafana / Prometheus metrics are disabled in configuration
2020-04-19 10:27:54.5153|INFO|1|Using http://checkip.amazonaws.com to get external ip
2020-04-19 10:28:36.0023|INFO|1|Block tree initialized, last processed is 0 (0xd4e567...cb8fa3), best queued is 9901872, best known is 9901872, lowest inserted header 1, body 1477070
2020-04-19 10:28:37.4165|INFO|13|Loaded 2 static nodes from file: E:\Nethermind\MyConfig\static-nodes.json
2020-04-19 10:28:37.7574|INFO|13|Static node : <redacted>
2020-04-19 10:28:37.7574|INFO|13|Static node : <redacted>
2020-04-19 10:28:37.8291|INFO|29|HeadersSyncDispatcher has finished work.
2020-04-19 10:28:37.8291|INFO|23|Fast blocks headers task completed.
2020-04-19 10:28:38.1956|ERROR|28|Error during sending message: Type: Ping, FarAddress: [::ffff:34.216.199.10]:25576, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:34.216.199.10]:25576, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
2020-04-19 10:28:38.2092|ERROR|18|Error during sending message: Type: Ping, FarAddress: [::ffff:78.47.195.12]:43830, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:78.47.195.12]:43830, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
2020-04-19 10:28:38.2092|ERROR|26|Error during sending message: Type: Ping, FarAddress: [::ffff:35.182.207.28]:33264, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:35.182.207.28]:33264, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
2020-04-19 10:28:38.2838|ERROR|24|Error during sending message: Type: Ping, FarAddress: [::ffff:178.63.69.19]:38990, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:178.63.69.19]:38990, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
2020-04-19 10:28:38.5084|ERROR|14|Error during sending message: Type: Ping, FarAddress: [::ffff:18.216.48.139]:39704, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:18.216.48.139]:39704, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
2020-04-19 10:28:38.7190|ERROR|26|Error during sending message: Type: Ping, FarAddress: [::ffff:130.183.206.234]:57940, FarPublicKey: empty, ExpirationTime: 1587310138, SourceAddress: <redacted>:30305, DestinationAddress: [::ffff:130.183.206.234]:57940, Version: 4, Mdc: empty System.NullReferenceException: Object reference not set to an instance of an object.
at Nethermind.Network.Discovery.DiscoveryManager.SendMessage(DiscoveryMessage discoveryMessage)
Bodies resumed downloading without issue, and completed:
2020-04-19 10:33:35.4768|INFO|333|Fast Blocks Sync | Peers 55 / 80
2020-04-19 10:33:35.4768|INFO|333|Fast Blocks Sync | Headers 9884400 / 9884400 | queue 0 | current 0.00bps | total 0.00bps
2020-04-19 10:33:35.4768|INFO|333|Fast Blocks Sync | Bodies 9858467 / 9884400 | queue 25805 | current 2559.24bps | total 4906.23bps
2020-04-19 10:33:35.4768|INFO|333|Fast Blocks Sync | Receipts 0 / 9884400 | queue 0 | current 0.00bps | total 0.00bps
2020-04-19 10:33:36.4916|INFO|325|Fast Blocks Sync | Peers 58 / 81
2020-04-19 10:33:36.4916|INFO|325|Fast Blocks Sync | Headers 9884400 / 9884400 | queue 0 | current 0.00bps | total 0.00bps
2020-04-19 10:33:36.4916|INFO|325|Fast Blocks Sync | Bodies 9858467 / 9884400 | queue 25805 | current 0.00bps | total 4889.46bps
2020-04-19 10:33:36.4916|INFO|325|Fast Blocks Sync | Receipts 0 / 9884400 | queue 0 | current 0.00bps | total 0.00bps
2020-04-19 10:33:36.9589|INFO|333|BodiesSyncDispatcher has finished work.
2020-04-19 10:33:36.9589|INFO|335|Fast blocks bodies task completed.
However, receipts remained stuck at 0.
The RocksDB error looks like some issue in the communication between the DB and the disk driver, or a miscalculation in the RocksDB column DB code.
DiscoveryMessages showing errors - I need to look at it, but it seems to be more of a logging issue than the actual problem.
Receipts staying at zero may be related to the issue with the disk size - you can delete the entire receipts folder from the drive and it should start receipts from zero and work fine.
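For anyone applying that workaround, here is a minimal C# sketch (not part of Nethermind; stop the node first, and note that the path is the one from the logs in this issue, so adjust it to your own data directory):

// Minimal sketch of the workaround above: delete the receipts column DB folder
// so fast sync re-downloads receipts from zero. Stop the node before running this.
// The path below is taken from the logs in this issue and is an assumption about your layout.
using System;
using System.IO;

class ResetReceiptsDb
{
    static void Main()
    {
        string receiptsDb = @"E:\Nethermind\MyData\Database\Mainnet\receipts";

        if (Directory.Exists(receiptsDb))
        {
            // Recursive delete; the node recreates the folder on the next start.
            Directory.Delete(receiptsDb, recursive: true);
            Console.WriteLine($"Deleted {receiptsDb}; restart the node to re-sync receipts.");
        }
        else
        {
            Console.WriteLine($"Nothing to delete at {receiptsDb}.");
        }
    }
}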
Happened again, this time around 50% of the way through the receipts download.
2020-04-19 18:24:54.4368|INFO|353|Fast Blocks Sync | Peers 231 / 300
2020-04-19 18:24:54.4368|INFO|353|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:54.4368|INFO|353|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:54.4368|INFO|353|Fast Blocks Sync | Receipts 4363887 / 9903700 | queue 347 | current 81.35bps | total 154.83bps
2020-04-19 18:24:54.7223|ERROR|341|Fast receipts sync failed RocksDbSharp.RocksDbException: IO error: No space left on deviceFailed to WriteFile: E:/Nethermind/MyData/Database/Mainnet\receipts/028459.log: There is not enough space on the disk.
at RocksDbSharp.Native.rocksdb_put(IntPtr db, IntPtr writeOptions, Byte[] key, Int64 keyLength, Byte[] value, Int64 valueLength, ColumnFamilyHandle cf)
at Nethermind.Db.Rocks.ColumnDb.set_Item(Byte[] key, Byte[] value)
at Nethermind.Blockchain.Receipts.PersistentReceiptStorage.Insert(Block block, TxReceipt[] txReceipts)
at Nethermind.Synchronization.FastBlocks.FastReceiptsSyncFeed.InsertReceipts(List`1 receipts)
at Nethermind.Synchronization.FastBlocks.FastReceiptsSyncFeed.HandleDependentBatches()
at Nethermind.Synchronization.FastBlocks.FastReceiptsSyncFeed.PrepareRequest()
at Nethermind.Synchronization.ParallelSync.SyncDispatcher`1.Start(CancellationToken cancellationToken)
2020-04-19 18:24:55.1584|INFO|195|Removing 1001 out of 11001 peers (cleanup).
2020-04-19 18:24:55.4402|INFO|451|Fast Blocks Sync | Peers 230 / 298
2020-04-19 18:24:55.4402|INFO|451|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:55.4402|INFO|451|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:55.4402|INFO|451|Fast Blocks Sync | Receipts 4364135 / 9903700 | queue 345 | current 247.17bps | total 154.84bps
2020-04-19 18:24:56.4520|INFO|341|Fast Blocks Sync | Peers 230 / 298
2020-04-19 18:24:56.4520|INFO|341|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:56.4520|INFO|341|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:56.4520|INFO|341|Fast Blocks Sync | Receipts 4364135 / 9903700 | queue 345 | current 0.00bps | total 154.83bps
2020-04-19 18:24:57.4529|INFO|443|Fast Blocks Sync | Peers 230 / 298
2020-04-19 18:24:57.4529|INFO|443|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:57.4529|INFO|443|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:57.4529|INFO|443|Fast Blocks Sync | Receipts 4364135 / 9903700 | queue 345 | current 0.00bps | total 154.82bps
2020-04-19 18:24:58.4658|INFO|528|Fast Blocks Sync | Peers 230 / 298
2020-04-19 18:24:58.4658|INFO|528|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:58.4658|INFO|528|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:58.4658|INFO|528|Fast Blocks Sync | Receipts 4364135 / 9903700 | queue 345 | current 0.00bps | total 154.82bps
2020-04-19 18:24:59.1217|ERROR|405|Failed to store data in E:/Nethermind/MyData/Database/Mainnet\peers\SimpleFileDb.db System.IO.IOException: There is not enough space on the disk. : 'E:\Nethermind\MyData\Database\Mainnet\peers\SimpleFileDb.db'
at System.IO.FileStream.WriteCore(ReadOnlySpan`1 source)
at System.IO.FileStream.FlushWriteBuffer(Boolean calledFromFinalizer)
at System.IO.FileStream.Dispose(Boolean disposing)
at System.IO.Stream.Close()
at System.IO.StreamWriter.CloseStreamFromDispose(Boolean disposing)
at System.IO.StreamWriter.Dispose(Boolean disposing)
at Nethermind.Network.SimpleFilePublicKeyDb.CommitBatch()
2020-04-19 18:24:59.4667|INFO|443|Fast Blocks Sync | Peers 230 / 298
2020-04-19 18:24:59.4667|INFO|443|Fast Blocks Sync | Headers 9903700 / 9903700 | queue 0 | current 0.00bps | total 5315.83bps
2020-04-19 18:24:59.4667|INFO|443|Fast Blocks Sync | Bodies 9903700 / 9903700 | queue 0 | current 0.00bps | total 382.45bps
2020-04-19 18:24:59.4667|INFO|443|Fast Blocks Sync | Receipts 4364135 / 9903700 | queue 345 | current 0.00bps | total 154.81bps
2020-04-19 18:25:00.4750|INFO|451|Fast Blocks Sync | Peers 231 / 299
I'm still getting this and I'm fairly confident it's not my SSD.
Nethermind wrote 300 MB of log files to the same SSD it thinks is out of space while dumping these errors over and over. I closed Nethermind, copied 80 GB of data from another server, and pasted it onto the SSD; all 80 GB wrote just fine.
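For reference, a minimal C# sketch of checking what the OS actually reports as free space on that drive (the drive letter E is an assumption taken from the paths in the logs above):

// Minimal sketch, not part of Nethermind: report free space on the drive that
// RocksDB claims is full. The drive letter is taken from the paths in this issue.
using System;
using System.IO;

class CheckFreeSpace
{
    static void Main()
    {
        var drive = new DriveInfo("E");
        double freeGb = drive.AvailableFreeSpace / (1024.0 * 1024 * 1024);
        double totalGb = drive.TotalSize / (1024.0 * 1024 * 1024);
        Console.WriteLine($"{drive.Name} {freeGb:F1} GB free of {totalGb:F1} GB");
    }
}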
This happened when I switched to the 1.8.x branch.
One of the many log files it wrote contains 250,000 lines of the same two error messages repeating.
Again, this time with beam sync off, just fast sync.
Interestingly, it seems to be happening at this apparently magic number of 384...
internal error when handling 384 state sync requests
It's also complaining about the state DB every single time. I would think that if it really were a disk issue, it would be far more random and/or possibly throw different errors.
2020-04-30 22:51:45.4257|ERROR|596|Error when handling state sync response RocksDbSharp.RocksDbException: IO error: No space left on deviceFailed to WriteFile: E:/Nethermind/MyData/Database/Mainnet\state/002959.log: There is not enough space on the disk.
at RocksDbSharp.Native.rocksdb_put(IntPtr db, IntPtr writeOptions, Byte[] key, Int64 keyLength, Byte[] value, Int64 valueLength, ColumnFamilyHandle cf)
at Nethermind.Db.Rocks.DbOnTheRocks.set_Item(Byte[] key, Byte[] value)
at Nethermind.Synchronization.FastSync.StateSyncFeed.SaveNode(StateSyncItem syncItem, Byte[] data)
at Nethermind.Synchronization.FastSync.StateSyncFeed.HandleTrieNode(StateSyncItem currentStateSyncItem, Byte[] currentResponseItem, Int32& invalidNodes)
at Nethermind.Synchronization.FastSync.StateSyncFeed.HandleResponse(StateSyncBatch batch)
2020-04-30 22:51:45.4257|ERROR|596|Feed Nethermind.Synchronization.FastSync.StateSyncFeed has reported an internal error when handling 384 state sync requests with 347 responses
384 is the standard size of the node data request - every single one of a few million requests is more or less 384 in length (256 + 128). Can you confirm that you have already tried with two different hard drives?
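As a tiny illustration of that arithmetic (the constant names are hypothetical; only the 256 + 128 split comes from the comment above):

// Illustrative only - hypothetical names, not Nethermind's actual constants.
// The standard node data request is described above as 256 + 128 = 384 items,
// which is why the feed error always reports "384 state sync requests".
using System;

class NodeDataRequestSize
{
    const int FirstPart = 256;   // split quoted in the comment above
    const int SecondPart = 128;  // split quoted in the comment above

    static void Main()
    {
        Console.WriteLine(FirstPart + SecondPart); // prints 384
    }
}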
Version: 1.8.4-0-938dde909-20200419/X64-Microsoft Windows 10.0.18363/Core3.1.1
During sync:
I have plenty of free space...