Open vrfebet opened 5 years ago
Edited to have only 1 issue per issue. You can find the former issue in #23.
So you ran `reindex` first. Did `reindex` create any blocks? It would make perfect sense that the index `update` starts at a high block if you ran `reindex` before it. So I'm not sure if you're reporting an issue (which should be a new issue) or if this is just additional information for context.
So it appears that new workers aren't being created. https://github.com/uaktags/explorer/blob/434910e060c86cf59b3698508925b3998d2b64f6/scripts/sync.js#L229 is being executed properly; however, it would seem that https://github.com/uaktags/explorer/blob/434910e060c86cf59b3698508925b3998d2b64f6/scripts/sync.js#L234 is not being run. More debugging will be needed to find out where and why it's not.
Okay, it appears to be because clusterStart() doesn't get run on the new workers. That makes sense. I need to come up with a way to handle this with the new refactoring.
`reindex` did not create any blocks, but then the `update` started. I'm sorry, but I don't have the log. It is rather additional information for context.
Actually, I'm not having this issue at all with regard to new workers not being generated.
```
update_tx_db finished. Updating stats.Last to set 265187
worker 13836 died
There are still 7 workers
There are 1057 workers still needed
(node:880) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead.
...
...
Worker [39] 880 is starting, start at index 265192 and end at index 265196
265183: ad9dc...
...
...
update_tx_db finished. Updating stats.Last to set 265192
265179: 82ac67362600e9524513e99f30bba31475e259159934688e7692ae59703f94c7
265193: 58aaa5ade9c2fd48
worker 2740 died
There are still 7 workers
265190: 7e34fe611d3bc83e
There are 1056 workers still needed
265185: 696590a2f9d6b134
update_tx_db finished. Updating stats.Last to set 265193
worker 5220 died
There are still 7 workers
There are 1055 workers still needed
```
My workers appear to be generating just fine. clusterStart is only activated by the master, and the fork just continues on, only hitting the needed areas on line 391 and beyond.
```
scriptPubKey:
{ asm: 'OP_DUP OP_HASH160 08e6ccc23513: 3c1cce2bd5001e219253f27b7212aae2f2dec7f4d8197cef31c278ad1bfd70bc
```

Where did that come from?
Also,

```
Workers needed: 60606. NumThreads: 4. BlocksToGet 606059. Per Worker: 10 624904 18845
```

shows that stats.Last was set to 18845. This was said to be a clean DB, with only a `reindex` run, but no outputs came. Track down where 18845 came from, because it's not matching what's being described. Report back when you find it.
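For what it's worth, the worker-count arithmetic in that line checks out; a quick sketch (the function name is mine, not the explorer's):

```javascript
// 606059 blocks at 10 blocks per worker => 60606 workers needed,
// matching "Workers needed: 60606 ... BlocksToGet 606059. Per Worker: 10".
function workersNeeded(blocksToGet, perWorker) {
  return Math.ceil(blocksToGet / perWorker);
}

console.log(workersNeeded(606059, 10)); // 60606
```

The suspicious number is the starting point (18845), not the division.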
https://github.com/uaktags/explorer/issues/18#issuecomment-499155705
For now, it has been an hour and no worker has been lost yet.
Yeah, I believe the issue of new workers not being generated was already fixed many commits back.
However, with it come some unintended issues. We now get the proper workers, but what I find is that they're still overloading the daemon; the daemon starts struggling, and in the process we get a very slow and clogged worker pool. I've seen single workers using around 1 GB of RAM that never actually get to make an RPC call because of the congestion with the daemon. I just don't think the coin's RPC server can handle the traffic of multiple workers.
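One common mitigation for this kind of overload is to cap the number of in-flight RPC calls with a small concurrency limiter. This is a generic sketch, not the explorer's code; `fakeRpc` is a stand-in for whatever RPC client it actually uses:

```javascript
// Tiny promise-based semaphore: at most maxConcurrent tasks run at once,
// the rest wait in a FIFO queue.
function limiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => {
      active--;
      next();
    });
  };
  return (task) => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

// Usage: at most 2 simulated RPC calls run at once, even with 6 queued.
const limit = limiter(2);
let inFlight = 0, peak = 0;
const fakeRpc = () => {
  inFlight++; peak = Math.max(peak, inFlight);
  return new Promise((r) => setTimeout(() => { inFlight--; r(); }, 10));
};
Promise.all(Array.from({ length: 6 }, () => limit(fakeRpc)))
  .then(() => console.log('peak concurrency:', peak)); // prints "peak concurrency: 2"
```

The idea is that the daemon sees a bounded, steady request rate regardless of how many workers exist, instead of every worker hammering it simultaneously.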
Clean installation of Debian 9:

- node 8.16.0
- npm 6.4.1
- mongod 4.0.1
- clean installation from git, branch `cluster-sync` (latest)
- coin: Elicoin
After starting `reindex`, and after that `update` started running, but probably (for the first time) from around 10000 blocks the synchronization stopped with the announcement `There are still 0 workers`. A log selection showing how the workers disappear: