crackfoo closed this issue 6 years ago
Our pool is experiencing a similar problem: https://poolmining.org/pool/dash
For the last 36 hours all mined blocks ended up as orphans.
Crackfoo, any update on the orphan issue? 7 orphans in a row at poolmining.
Still waiting for some Dash dev to give some insight. Perhaps they restricted which pools can mine Dash now.
The same issue is discussed in the p2pool repo https://github.com/dashpay/p2pool-dash/issues/43, see my replies there. tl;dr: as far as I can see, blocks are relayed properly, and once they are relayed dashd has nothing more to do with them; it's up to pools to pick the chain tip on top of which they mine.
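If you run a pool and want to verify that blocks reach your node promptly, a minimal sketch (assuming dash-cli can talk to your pool's dashd) is to poll the tip and compare it against another node or an explorer:

# The height and hash here should track the rest of the network within seconds.
dash-cli getblockcount
dash-cli getbestblockhash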
Perhaps they restricted which pools can mine Dash now.
That's not possible (to hide). The source code is right here, you can compile it yourself. And again, orphan != invalid: these blocks were accepted before they were overridden by the "longer" chain.
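You can see this distinction on your own node with the getchaintips RPC: an orphaned-but-valid block ends up on a branch with status "valid-fork", whereas a rejected block would be marked "invalid". A minimal check:

# Lists every known chain tip; "active" is the current chain,
# "valid-fork" branches were fully validated and then reorged away.
dash-cli getchaintips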
Almost sounds like a problem with 12.2.2 that gets more pronounced as more nodes upgrade to that version.
This is a 51% Attack https://bitinfocharts.com/dash/address/XvhExSNNr97U1ZenWFjJmgD8wc7v88ZUF7
@ilsawa Who do you think is behind the attack?
I think there is no 51% attack. The address is AntPool's:
https://chainz.cryptoid.info/dash/address.dws?XvhExSNNr97U1ZenWFjJmgD8wc7v88ZUF7.htm
Click "Raw Transaction" and it will show the coinbase as text:
https://chainz.cryptoid.info/dash/tx.dws?5167063.htm
…Mined by AntPool sz0… (the rest of the coinbase is non-printable binary data)
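The same check can be done from the command line, as a sketch (the txid placeholder is illustrative, and getrawtransaction for arbitrary transactions needs txindex=1 on your node):

# Fetch the coinbase transaction and inspect its scriptSig for the pool tag.
dash-cli getrawtransaction <coinbase-txid> 1 | grep -A1 '"coinbase"'
# Decoding a hex tag by hand; this hex spells "Mined by AntPool":
echo 4d696e656420627920416e74506f6f6c | xxd -r -p; echo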
Check extraction status https://chainz.cryptoid.info/dash/#!extraction
At block 794500 they hit 51%: https://chainz.cryptoid.info/dash/extraction.dws?38.htm
Guess that answers why. Are they manipulating the chain on purpose?
I've dropped it from my pool. Just a waste of hash.
What is this??? Why keep two blocks with the same generation time?
It seems that the network is divided into 2 segments. In one segment blocks propagate without delay, but in the other everything lags. All p2pool nodes are in the slow segment. I do not see a solution to this problem. We can state the fact: one more p2pool has died.
That's called a reorganization. Two UpdateTip lines at the same height, followed by the next height on the other branch, is exactly what a reorg looks like:
grep 79509[1-2] .dashcore/debug.log
2017-12-29 05:06:46 UpdateTip: new best=000000000000001004545a2a3e3ce38d7a980f06751a8cc8320240bcb3b0cd19 height=795091 log2_work=72.970212 tx=4502124 date=2017-12-29 05:06:14 progress=0.999999 cache=8.0MiB(57938txo)
2017-12-29 05:10:14 UpdateTip: new best=000000000000000788678a960f8a216a39c3cb992e145b043c3a557c40fdbd1f height=795091 log2_work=72.970212 tx=4502107 date=2017-12-29 05:03:28 progress=0.999985 cache=8.0MiB(57994txo)
2017-12-29 05:10:14 UpdateTip: new best=00000000000000084299e2a4c51888c0b3ea7d09cd8f31953fa6a4d04307d7c2 height=795092 log2_work=72.970268 tx=4502146 date=2017-12-29 05:09:37 progress=0.999999 cache=8.0MiB(57968txo)
grep 'height=79506[5-6]' .dashcore/debug.log
2017-12-29 03:48:56 UpdateTip: new best=0000000000000009aadea1119c84ddf5c08535afc167feb8ebb53bb732144a5d height=795065 log2_work=72.968446 tx=4501688 date=2017-12-29 03:48:39 progress=0.999999 cache=7.9MiB(56908txo)
2017-12-29 03:50:36 UpdateTip: new best=00000000000000151fef418aabe5ba4d9fdf1141afd164186971c7b1d30a3940 height=795065 log2_work=72.968446 tx=4501690 date=2017-12-29 03:49:34 progress=0.999998 cache=7.9MiB(56931txo)
2017-12-29 03:50:36 UpdateTip: new best=00000000000000089118b8e1b719c905452d1725cd8591fcfde7fab8168ba3b4 height=795066 log2_work=72.968511 tx=4501697 date=2017-12-29 03:50:26 progress=1.000000 cache=7.9MiB(56936txo)
Check your dashd logs.
Log from my node:
grep 79506[5-6] .dashcore/debug.log
2017-12-29 03:48:55 UpdateTip: new best=0000000000000009aadea1119c84ddf5c08535afc167feb8ebb53bb732144a5d height=795065 log2_work=72.968446 tx=4501688 date=2017-12-29 03:48:39 progress=0.999999 cache=7.5MiB(57399txo)
2017-12-29 03:50:36 UpdateTip: new best=00000000000000151fef418aabe5ba4d9fdf1141afd164186971c7b1d30a3940 height=795065 log2_work=72.968446 tx=4501690 date=2017-12-29 03:49:34 progress=0.999998 cache=7.5MiB(57422txo)
2017-12-29 03:50:36 UpdateTip: new best=00000000000000089118b8e1b719c905452d1725cd8591fcfde7fab8168ba3b4 height=795066 log2_work=72.968511 tx=4501697 date=2017-12-29 03:50:26 progress=1.000000 cache=7.5MiB(57427txo)
This block (795065) was found by my node - ORPHAN.
Why keep two blocks with the same generation time?
If you check debug.log you'll see that it's not the generation time but the time when the reorg happened (more or less; timing is not consistent across different nodes).
It seems that the network is divided into 2 segments.
I don't see any proof of this on my nodes; blocks are relayed normally. Check your logs to see if there is a delay.
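One rough way to check, as a sketch (assuming GNU awk): each UpdateTip line carries two timestamps, the log prefix (when your node switched tip) and the date= field (the block header time, which is set by the miner and can be skewed, so treat the numbers as approximate). A consistently large gap suggests a propagation delay:

grep UpdateTip ~/.dashcore/debug.log | gawk '{
  split($0, a, "date="); hdr = substr(a[2], 1, 19)  # block header time
  logt = $1 " " $2                                  # time this node switched tip
  gsub(/[-:]/, " ", hdr); gsub(/[-:]/, " ", logt)
  print mktime(logt) - mktime(hdr) " s lag: " $0
}' | tail -n 20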
We can state the fact: one more p2pool has died.
Again, this has nothing to do with p2pool. Other pools suffer from this issue too. The only way to solve this imo is to ask miners to move away from the few top pools and spread their hashrate across smaller pools.
Here is proof that the network is behaving abnormally: https://chainz.cryptoid.info/dash/orphans.dws. After updating the daemons on my servers, the number of connections dropped significantly until I manually added peers: https://github.com/dashpay/p2pool-dash/issues/43#issuecomment-354276348
@ilsawa no, it's not, imo. It's just proof that reorgs have happened much more often in the last few days. A reorg in general is a legit way to resolve forking; there is nothing wrong with reorgs themselves. What is not ok is that some pools with huge power basically ignore the chain tip and mine whatever they want instead of mining on top of the current tip, and I think this is what causes the much higher rate of reorgs/orphans, not the behavior of the network.
@UdjinM6
What is not ok is that some pools with huge power basically ignore the chain tip and mine whatever they want instead of mining on top of the current tip, and I think this is what causes the much higher rate of reorgs/orphans
But how would they do that? Mess with the result of getblocktemplate?
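For context: getblocktemplate is the RPC a pool calls to get a block to mine, and the template carries the previousblockhash the pool is expected to build on, but nothing forces the pool software to actually use it. A minimal sanity-check sketch (the RPCs are real; the comparison logic is just an illustration):

# The template's previousblockhash should match the node's current best hash.
tip=$(dash-cli getbestblockhash)
prev=$(dash-cli getblocktemplate | grep -m1 '"previousblockhash"' | grep -o '[0-9a-f]\{64\}')
[ "$tip" = "$prev" ] && echo "mining on current tip" || echo "stale template!"

A pool "messing with" the result would simply substitute an older hash there and mine a competing branch.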
@UdjinM6 At the time of the sharp increase in the number of orphans, the global hashrate of p2pool did not decrease. The source code of p2pool did not change either; the changes only touched the wallet (dashd). And as soon as pool owners updated their wallets, they immediately stopped receiving normal blocks and 99% of the found blocks became orphans.
Even now p2pool has enough power to find a block every ~3 hours:
Expected time to block (pool): 2h 48m 57s
I'm running 7 P2Pool nodes with a combined hashrate of just under 7 TH/s. We are seeing 90% of all found blocks orphaned. This is not normal, and there has to be a fix for this.
Any suggestions would be appreciated!
@ourlink
How many connections does the daemon show?
dash-cli getpeerinfo | grep inbound | sort -n | uniq -c
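(getpeerinfo prints an "inbound": true/false field for each peer, so this counts how many of your connections are inbound vs. outbound.)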
@ilsawa I'm increasing the connections on my nodes. I originally had a limit set in dash.conf. Connections before/after: 15/58
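For reference, a hedged dash.conf fragment (maxconnections and addnode are real dashd options; the values and the peer address below are placeholders):

# raise the cap on peer connections
maxconnections=64
# pin a known-good peer (9999 is the Dash mainnet port)
addnode=203.0.113.5:9999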
@ourlink I'm just trying to understand and find correlations, but I don't see anything that could increase the number of orphans.
@UdjinM6 Could you provide some insight as to whether the team is actively working on a solution?
I'm running my Dash Core node with a modified version of 0.12.2.2 that accepts protocol version 70206 peers, which is what my p2pool-dash node also uses. Since it started, p2pool block 795344 has not been orphaned. I need to watch over a longer run to see whether the p2pool orphan rate decreases, or whether my node really isn't related to this issue :)
@thelazier can you share your modified dashd?
My pools have not seen a p2pool-dash-found block since 794925; the ones found since then have all been orphaned. I believe this is a p2pool issue at this time, because all but 4 of the blocks found since 794925 were not found by p2pool.
@UdjinM6 One way to improve the distribution of hashpower would be to clean up https://www.dash.org/mining. Antpool is listed at the top and the list includes sites that aren't pools at all or pools which have been dead for a long time.
@UdjinM6 One way to improve the distribution of hashpower would be to clean up https://www.dash.org/mining
And they need to change the link to p2pool.
@UdjinM6 Replace the link to the source code https://github.com/dashpay/p2pool-dash with a link to p2pool scanner http://www.p2poolmining.us/p2poolnodes/
I agree with those suggestions!
BTW - P2Pool-DASH just found two blocks back to back, within a minute of each other. Hope this is the end of the orphans!
@ourlink, my modified dashd just reverts commit 362becbcce6a7e50aaa88c5f28494e5e52d2abda on the dashd master branch.
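For anyone who wants to try the same thing, a sketch assuming a dashd source checkout on master (the build steps are the repo's standard autotools flow):

git revert 362becbcce6a7e50aaa88c5f28494e5e52d2abda
./autogen.sh && ./configure && make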
If this is working now, does anyone know what happened? Or just guesses still?
Turned out that some pools were actually rejecting some blocks. They restarted their nodes, so it should be ok for now. We are still investigating what exactly caused the issue.
@UdjinM6 Any conclusions yet?
@UdjinM6 @ilsawa @ourlink It's happening again.
This was fixed with https://github.com/dashpay/dash/releases/tag/v0.12.2.3
Unsure why all our blocks now end up orphaned since we upgraded to the latest client.
2017-12-28 00:04:29 CreateNewBlock(): total size 14373 txs: 19 fees: 1231323 sigops 144
2017-12-28 00:04:30 UpdateTip: new best=000000000000000933aed4e4e968498d09c25a87a9002e712470b9ff383e01e8 height=794431 log2_work=72.927919 tx=4486957 date=2017-12-28 00:04:29 progress=1.000000 cache=5.2MiB(38802txo)
2017-12-28 00:04:30 AddToWallet 50bbb24c1142a83eb49d0298723da22b5ca622850688e5dfc067d62b2f92b1f8 new
2017-12-28 00:04:30 ProcessNewBlock : ACCEPTED
debug: http://paste.ubuntu.com/26268585/
http://zpool.ca/site/block?id=893
Unsure if this is important:
2017-12-28 00:07:24 ERROR: Requesting unset send version for node: 835. Using 209