Closed uniibu closed 4 years ago
Encountered a similar issue when upgrading from v0.13.3 to v0.14.0.1; however, my node showed no indication of being stuck at a specific block.
Had to do a writeup, might help someone (https://gist.github.com/Auronmatrix/8d3e4caffdb650c9d7ab485772656ecc)
Not sure what exactly happened, but seemed like my node was part of a separate network segment that continued to function but was not actually showing the correct data.
@Auronmatrix yes, pre-0.14 nodes forked off because they were not able to recognize coinbase extra payload v2 (https://github.com/dashpay/dips/blob/master/dip-0004.md#coinbase-special-transaction), which was activated via bit 4. A few miners and explorers did not upgrade and kept extending/displaying the wrong chain, which caused confusion.
Nice post mortem btw 👍
Was the fork expected? It would be good if there were some warnings (maybe as part of the 0.14 release notes) for those running pre-0.14.
From the logs nothing was obvious and things just seemed business as usual except for the data I got from the RPC making no sense.
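One way to spot this kind of quiet fork is to compare the block hash your node reports at a given height against a hash from a trusted source (another up-to-date node or an explorer). A minimal sketch, assuming a running dashd; the trusted hash has to be looked up manually, and the helper function name is just an illustration:

```shell
#!/bin/sh
# Sketch: detect a quiet fork by comparing the local block hash at a
# given height to a hash obtained from a trusted source.
# compare_hashes is pure shell; real sampling via dash-cli is shown
# commented out below.

compare_hashes() {
  # $1 = local hash, $2 = trusted hash (e.g. from a block explorer)
  if [ "$1" = "$2" ]; then
    echo "match"
  else
    echo "MISMATCH"
  fi
}

# Real usage (assumes a running dashd and a hash you looked up elsewhere):
# LOCAL=$(dash-cli getblockhash 1088639)
# compare_hashes "$LOCAL" "<hash from a trusted explorer>"
```

If the output is MISMATCH, the node is likely extending a forked branch even though its logs look like business as usual.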
@Auronmatrix Yes, the fork was expected. There is a small note that downgrading to 0.13 is only possible until dip8 activation (https://github.com/dashpay/dash/blob/master/doc/release-notes/dash/release-notes-0.14.0.md#downgrade-to-versions-01300---01330), and I think we also communicated in all media channels that you would be forked off if you did not upgrade. But I agree, we should probably make it more obvious in the release notes whether a new upgrade is a hard fork or not. Something to consider for future release notes I guess @codablock @nmarley
I would not have understood "downgrade" to mean staying on the same version while the DIP activates. I took that warning to mean only that upgrading to v0.14 would cause changes to the chain state/format that could not be undone without a full reindex if you wished to go back to a pre-0.14 version. Looks like I might not have been the only one :wink:. It would be great to be super explicit in the release notes; that's primarily what we follow very closely.
In order for this to work, I also had to run:
dash-cli invalidateblock 00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e
And restart the daemon. Then it appeared to rewind to the fork point (progress going backwards) and resync from there.
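The steps above can be sketched as a small sequence. This is only an outline based on the thread; the stop/start commands are assumptions about the local setup (a systemd unit or init script may differ), so everything operational is left commented out:

```shell
#!/bin/sh
# Sketch of the recovery sequence described above. The hash is the one
# from this thread; restart commands depend on the local setup.

FORK_BLOCK=00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e

# 1. Mark the first bad block invalid so dashd abandons that branch:
# dash-cli invalidateblock "$FORK_BLOCK"

# 2. Restart the daemon; on startup it rewinds to the fork point and
#    resyncs along the valid chain:
# dash-cli stop && sleep 10 && dashd -daemon

# 3. Watch progress: the height should first decrease (the rewind),
#    then increase again as the node follows the correct chain:
# dash-cli getblockcount
```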
Issue solved
We have solved the issue described below; this is a record of it and could serve as a guide for anyone experiencing the same problem.
Describe the issue
We recently updated our Dash Core from 0.13.0.0 to 0.14.0.1 because our dashd had stopped syncing (we believe this might be the original issue). After updating and checking the logs we found that dashd was stuck on block 1088639, block hash 00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e. We tried restarting dashd with --reindex-chainstate and with --reindex, with both instances also getting stuck on the same block. I remember we had a very similar issue before when BCH forked with BSV, so we tried the same solution by running the command:
dash-cli invalidateblock 00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e
And this seemed to work, and our dashd started to sync immediately without even restarting it.
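A quick way to confirm that the node really resumed syncing (rather than sitting at the same height) is to sample the block count twice and compare. A small sketch; the helper name is illustrative and the dash-cli sampling is left commented out since it needs a running dashd:

```shell
#!/bin/sh
# Sketch: confirm the node is advancing after the fix by sampling the
# block height at two points in time.

is_advancing() {
  # $1 = earlier height, $2 = later height
  if [ "$2" -gt "$1" ]; then
    echo "syncing"
  else
    echo "stuck"
  fi
}

# Real usage against a running dashd:
# H1=$(dash-cli getblockcount); sleep 60; H2=$(dash-cli getblockcount)
# is_advancing "$H1" "$H2"
```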
Expected behaviour
Dashd should continue to sync.
Actual behaviour
Dashd got stuck while syncing.
What version of Dash Core are you using?
0.14.0.1 (upgraded from 0.13.0.0)
Machine specs:
Any extra information that might be useful in the debugging process.
A portion of the debug.log can be seen at https://pastebin.com/dzNr5NUR