Closed: phelixbtc closed this issue 10 years ago
The same goes for getmemorypool.
In CreateNewBlock there is this line which looks slowish:
if (!txPrev.ReadFromDisk (dbset.tx (), txin.prevout, txindex))
Bitcoin has put some effort into this with ultraprune: 2ec349bc420d7f4541bf91acf8b830377a1421f3 450cbb0944cd20a06ce806e6679a1f4c83c50db2
I will try whether -dbcache=1000 improves things. edit: no / not enough
Looks like one of those things where one can (possibly) improve things by reducing disk accesses. I've done lots of these already (partially for Huntercoin) and can take a look at whether it is possible here as well when I'm back from vacation on Saturday. If someone can fix it before that, I'm of course happy about it, too. ;)
It seems we are calculating priority for the same TX anew with every createNewBlock. Maybe it would help to only do that once for every TX.
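The "only do it once per TX" idea could look roughly like this hypothetical sketch (the class and its names are illustrative, not existing Namecoin code). One caveat: Bitcoin-style priority grows with input depth, so a real implementation would have to invalidate or age cached entries on each new block:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: cache a transaction's priority the first time it is
// computed, keyed by tx hash, so repeated CreateNewBlock calls reuse it.
class PriorityCache {
public:
    template <typename ComputeFn>
    double GetOrCompute(const std::string& txHash, ComputeFn compute) {
        auto it = cache.find(txHash);
        if (it != cache.end())
            return it->second;          // hit: skip the expensive recomputation
        const double prio = compute();  // miss: compute once and remember
        cache[txHash] = prio;
        return prio;
    }

    void Erase(const std::string& txHash) { cache.erase(txHash); }

private:
    std::map<std::string, double> cache;
};
```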
With a large number of inputs we don't need to look at each input; we could take only a fraction (1/10th) of randomly selected inputs.
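The sampling idea above could be sketched roughly like this. `SampleInputIndices` is a hypothetical helper, not part of the Namecoin codebase; it picks about `fraction` of the input indices uniformly at random (at least one), so only that subset would need its prevout loaded from disk:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical sketch: sample a random subset of input indices so that
// only that subset is inspected, instead of every input of a large tx.
std::vector<std::size_t> SampleInputIndices(std::size_t nInputs,
                                            double fraction,
                                            std::mt19937& rng) {
    std::size_t nSample = static_cast<std::size_t>(nInputs * fraction);
    if (nSample < 1)
        nSample = 1;               // always look at at least one input
    if (nSample > nInputs)
        nSample = nInputs;         // handles nInputs == 0 as well

    std::vector<std::size_t> indices(nInputs);
    for (std::size_t i = 0; i < nInputs; ++i)
        indices[i] = i;
    std::shuffle(indices.begin(), indices.end(), rng);  // uniform random subset
    indices.resize(nSample);
    return indices;
}
```

Note that sampling like this is only safe for heuristics such as priority estimation; it could not replace consensus-critical validation, where every input must be checked.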
Loading the previous transactions is probably the bottleneck - it leads to reading blocks randomly from disk, which is quite slow. (This is/was also the bottleneck for other things I optimised already.) While I've not yet looked at the precise code in question, I guess that this is something that can be fixed by implementing UTXO-based tx verification (which is already done for HUC and could be ported to NMC). The best "quick fix" would be to consider transactions with too many inputs (>100?) as non-standard. Except for things like these aggregations occurring currently, it shouldn't be noticeable and it should help to avoid such large processing times (at least a bit).
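The proposed "quick fix" might be sketched as below. The threshold constant, the `CTransactionStub` type, and `IsStandardInputCount` are assumptions for illustration; in the real client the check would go into the existing standardness test so that oversized transactions are neither relayed nor mined:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical threshold: transactions with more inputs than this would be
// treated as non-standard (the >100 suggestion from the comment above).
static const std::size_t MAX_STANDARD_TX_INPUTS = 100;

// Minimal stand-in for a transaction; only the input count matters here.
struct CTransactionStub {
    std::size_t nInputs;
};

// Returns false for transactions that exceed the input-count threshold.
bool IsStandardInputCount(const CTransactionStub& tx) {
    return tx.nInputs <= MAX_STANDARD_TX_INPUTS;
}
```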
From some profiling it seemed to me that the signature verification is also taking a relatively long time. I think it is done again and again for each TX? Maybe it could be stored. Also txindex.GetDepthInMainChain() is slowing things down, especially with large mempools and TXs with lots of inputs. Most of the aggregator TXs have only a hundred inputs (some have 150). Also, having to deal with smaller but even more transactions would not help much. IMHO we need to optimize/limit the work that is being done for creating a block.
For limiting: We could limit the total number of tx inputs that are put into a block at all. But this could lead to a complete DoS preventing other transactions from being confirmed at all if we don't properly balance the selection algorithm.
Regarding performance: I think that you are right, there's probably duplication going on that we can take advantage of. Signatures should already be checked (and also the prev txouts loaded) when the tx is put into the memory pool, so for creating a block from it we could actually skip it (and calculate priority already while putting the tx into mempool and cache it). I'll try to get a patch ready for this today (hopefully).
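The "skip re-verification" idea above could be sketched like this. The class and its interface are hypothetical, not existing Namecoin code; for comparison, Bitcoin later addressed the same problem with a signature cache keyed by (sighash, pubkey, signature) rather than by tx hash:

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical sketch: record which transactions already passed signature
// checks on mempool entry, so block creation can skip re-checking them.
class VerifiedTxCache {
public:
    void MarkVerified(const std::string& txHash) { verified.insert(txHash); }

    bool AlreadyVerified(const std::string& txHash) const {
        return verified.count(txHash) > 0;
    }

    // Drop an entry, e.g. when a tx leaves the mempool.
    void Erase(const std::string& txHash) { verified.erase(txHash); }

private:
    std::set<std::string> verified;
};
```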
I'm using MM on P2pool, and I see this in the namecoin log:
Trying to clean up 0 async RPC call threads...
but after some time (sometimes an hour, sometimes a few days) I see in P2pool that getauxblock took more than 5 sec and the daemon is not responding at all. I need to SIGQUIT the daemon and start it over. I don't see any excess RAM or CPU usage when it is "hanging", and there is no error in the log.
It is possible we are barking up the wrong tree here. I got a lock-up also with only 11 transactions (some of them large, 27 kB). It seems that after a ReadFromDisk error there is always a delay of one minute before things go on (using the -logtimestamps option).
Is it being provoked by the large tx (maybe only the 27kb one)?
08/26/14 05:28:50 ERROR: CBlock::ReadFromDisk() : OpenBlockFile failed
08/26/14 05:29:53 received: addr (31 bytes)
08/26/14 05:32:11 ERROR: CBlock::ReadFromDisk() : OpenBlockFile failed
08/26/14 05:33:12 received: inv (73 bytes)
08/26/14 05:33:12 ERROR: CBlock::ReadFromDisk() : OpenBlockFile failed
08/26/14 05:34:13 received: inv (37 bytes)
edit: Could be that the large delay just happened to occur after this error.
I have two nodes getting high CPU load (up to 100% of a single core for prolonged periods) and higher than usual RAM use (319 MB on one, 265 MB on the other; normal is 90-120 MB) with the latest merged patches. I'm getting lots of these errors in debug.log:
received: tx (27055 bytes)
ERROR: AcceptToMemoryPool() : not enough fees
received: tx (18045 bytes)
ERROR: AcceptToMemoryPool() : not enough fees
@phelixbtc: Does this happen (only) with my latest patch, or (also) without it? Can you retrieve a backtrace from GDB, where it happens?
@John-Kenney: I think the higher RAM usage is part of the "fix" (although I'm not sure whether it should triple compared to without the patch). The "not enough fees" error is probably due to having changed the relay fee (but this is just guessing). In any case, it shouldn't be a "critical" error.
The higher RAM and CPU use seems to occur when it's processing all these transactions. I was seeing similarly high RAM use just before the recent patches, so they seem not to have fixed the problem. They have helped (both nodes keep settling down after a while), but it's still taking too long.
It's quite an important problem if it's slowing down miners & makes running a node more unreliable and expensive. I think the recent patches have helped, but not cured the problem.
The patches should not help with RAM usage (CPU a bit, but that's also not where I see their main advantage). What they should improve is disk access during CreateNewBlock - which should not affect ordinary nodes, only miners. RPC calls like getwork and getauxblock should be faster now. I don't know if they are "fast enough", though.
@domob1812 Note the second patch is not merged yet. @John-Kenney the current namecoinq branch may be using a lot of RAM (up to crashing at 2GB). Domob has already supplied an improved version we are still testing.
Has anybody tried yet how long it actually takes to verify an 18 kB or 27 kB tx?
Solved as of 0.3.76 - closing.
With large blocks like 192362 getauxblock can be very slow to return.