This is another scammer: 20.203.254.151 https://chainz.cryptoid.info/dash/address.dws?XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH.htm
That's just multiple masternodes getting paid to the same address:
$ dash-cli masternodelist full | grep XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE
"6ce31017706dd5992a84c3acce4321f30f6b9255a72f1da3badf5695526530e1-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685850247 1881656 168.119.87.145:9999",
"dbe7a7d8cb9f65c3017372584d2126af6149e2556591bd269fa146285394f274-1": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685521629 1879580 188.40.180.133:9999",
"b70023e143c4d2cf6e887baf4f9e3e0f2c515763b71cfdae0440f5a53ca05da6-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685843046 1881622 188.40.163.19:9999",
"02b7de14fc33315a96259c0fa8e0f92e01d78b4bee96b8acba45b3da09124e0d-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1686010224 1882668 168.119.80.11:9999",
"1f696338b394a13219fb28f105ca5ad2b200edd214a595285d671caed1545694-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685843354 1881625 178.63.121.145:9999",
"8f6de1eec25ac023e8d9f744ed8b28b5e9e5f69f4a7ad8ac7a6af7cce1ec496a-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685847532 1881634 188.40.185.138:9999",
"701921e7f7178482c939508097d9b99d2a275af382febe8ca18b06715b3aac25-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685847939 1881635 188.40.231.6:9999",
"27a499711ff919ecc209ba197d02e841b70501e244d7cdd8dc4d7907ee562722-1": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685958508 1882347 188.40.190.45:9999",
"570637623b1290fe9c072db4f05f38567e41509dbbba3940549adc603db62d1e-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685848101 1881638 136.243.115.130:9999",
"ee245a29fac792226aac6e429b9056ec63b9b134bcfc67e691964b2a5f4bde8b-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685870420 1881788 168.119.87.135:9999",
"1475f536f8d578d52495667f521e1a89100ff8390cf7c5535aec9ff882146782-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685850703 1881660 168.119.80.13:9999",
"b0a0b3a49bdb8af7c926d426e45bd288c827584bbd3cfbf07a0a3a0342b7a565-1": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685918284 1882091 46.4.217.236:9999",
"f919a4f2298da307273b4fcf2a08226dffd286b43dc3e3719b26d6e407afd978-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685849179 1881648 188.40.185.135:9999",
"05b5d48546677bfde499e1f33dd364391b49faddd039a3473af744ae34fa4270-0": " ENABLED 0 XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE 1685848195 1881640 95.216.84.37:9999",
$ dash-cli masternodelist full | grep XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH
"3aeae7e48de22012427ccb6338ab1cb113ae71919e3c899c6fd894830c081d36-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685819834 1881470 20.203.254.151:9999",
"4c3c2370b82a15df283451a85e1b27ea941572fa59ce02e480dc37b2b27b1a4c-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685819249 1881468 20.208.41.235:9999",
"4897933e339fa84b449a0fff2e4772531d30b4b5638168a68afd58028904f074-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685816280 1881445 20.208.42.155:9999",
"432ec69f9f399a446ce29f8c2b6be001ddc7a9bbcb1b34965e503281ec807811-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685819737 1881469 20.203.248.68:9999",
"f536049e79a053f9c9df76296e8d0e42a12d670a0693d8cd9e4f2537168796ca-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685819104 1881466 20.208.46.210:9999",
"0b88e632fa947f5dc4c1f05657020719400711294e09ea3470e93885b2e99105-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685818836 1881463 20.203.248.233:9999",
"2fbb191f1b1d530399fe9ac4dc82ce99c71a4a430d9eeec3bcea5b9b8f548844-1": " ENABLED 0 XghG5zKsUsV3tvk8rTRZfbcWnpXzczWBBH 1685815694 1881444 20.208.42.232:9999",
We track masternodes by their protx hashes, not by their payout addresses. It's completely fine to have one payout address for many MNs from a technical perspective. It's not a very good practice privacy-wise though, because anyone can group your masternodes like that, but if you don't care about that then 🤷♂️ .
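You can do the grouping yourself too. A quick sketch, assuming the payee mode of masternodelist prints one payout address per collateral (as it does on the versions I've used):

$ dash-cli masternodelist payee | grep -c XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE   # MN count for one address
$ dash-cli masternodelist payee | grep -oE 'X[1-9A-HJ-NP-Za-km-z]{33}' | sort | uniq -c | sort -rn | head   # addresses ranked by MN count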
That's OK, but then how did I get those addresses? My masternode experiences very strange spontaneous freezes, so I decided to monitor network activity with nethogs. This is what I saw after one day with sudo nethogs -v3:
PID USER PROGRAM DEV SENT RECEIVED
? root :60126-168.119.80.11:9999 0.049 0.008 MB
? root :38308-168.119.80.11:9999 0.049 0.008 MB
? root :56386-168.119.80.11:9999 0.049 0.007 MB
? root :44118-168.119.80.11:9999 0.048 0.007 MB
? root :36772-168.119.80.11:9999 0.048 0.007 MB
? root :42066-168.119.80.11:9999 0.048 0.007 MB
? root :33076-176.9.210.13:9999 0.049 0.005 MB
? root :56818-168.119.80.11:9999 0.049 0.005 MB
? root :60236-168.119.80.11:9999 0.048 0.005 MB
? root :59844-185.125.188.54:443 0.001 0.004 MB
? root :47932-168.119.80.11:9999 0.002 0.004 MB
? root :36156-168.119.80.11:9999 0.001 0.003 MB
? root :50116-168.119.80.11:9999 0.001 0.003 MB
? root :54144-168.119.80.11:9999 0.001 0.003 MB
? root :56160-168.119.80.11:9999 0.001 0.003 MB
? root :59662-176.9.210.13:9999 0.001 0.003 MB
? root :35716-168.119.80.11:9999 0.001 0.003 MB
? root :41138-168.119.80.11:9999 0.001 0.003 MB
? root :35916-168.119.80.11:9999 0.001 0.003 MB
? root :46108-168.119.80.11:9999 0.001 0.003 MB
PID is ?, user is root, program is unknown. And the same with IP 20.203.254.151. What is this? Maybe someone is exploiting some Dash bug?
Masternodes probe each other (deterministically) from time to time, so it's also OK to have short-lived connections with almost no data transferred. "Process name ?, user root, program unknown" simply means that this connection is waiting to be closed; I believe nethogs saves it as the last known state for connections that are already gone by the time you ask it to show the accumulated data. Try using netstat -ant to confirm these connections are actually gone.
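For example, something like this should show whether any TCP connections to that peer are still open (the IP is just the one from your nethogs output, substitute whichever peer you're curious about):

$ netstat -ant | grep 168.119.80.11
$ ss -tn dst 168.119.80.11        # same check with ss, if you prefer it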
Re the spontaneous freezes: it could be that you are running out of RAM and the node starts using swap, or maybe it happens when we flush data to disk to free some internal cache. In both cases it can take a noticeable amount of time if your disk is slow.
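If you want to rule out the memory/swap theory, a couple of generic Linux checks (nothing Dash-specific):

$ free -h       # how much RAM and swap are in use right now
$ vmstat 5      # watch the si/so columns while the node runs; sustained non-zero values mean active swapping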
Thanks a lot! Now I'm calm, but I have another question. Until Dash 19.1 it was very rare to see ERROR in the log, but now I see it very often:
$ grep ERR debug.log
2023-06-05T05:05:05Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T09:31:37Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T15:03:56Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T16:25:00Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T17:35:29Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T17:39:23Z ERROR: AcceptBlockHeader: prev block not found
2023-06-05T18:08:11Z ERROR: AcceptBlockHeader: block 00000000000000318dd8ec152a49c221d14d23f035e3e17935111cda19c25d6a is marked conflicting
2023-06-06T01:01:47Z ERROR: AcceptBlockHeader: prev block not found
2023-06-06T08:34:32Z ERROR: AcceptBlockHeader: prev block not found
2023-06-06T09:40:37Z ERROR: AcceptBlockHeader: prev block not found
2023-06-06T13:30:28Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T13:34:38Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T13:42:03Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T13:55:30Z ERROR: FindTx: txid mismatch
2023-06-06T13:58:02Z ERROR: FindTx: Deserialize or I/O error - non-canonical ReadCompactSize(): iostream error
2023-06-06T14:02:10Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T14:02:32Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T14:05:38Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T14:06:48Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
2023-06-06T14:07:32Z ERROR: FindTx: Deserialize or I/O error - ReadCompactSize(): size too large: iostream error
AcceptBlockHeader: these errors are most likely caused by nodes that are stuck on an invalid fork. I see multiple prev block not found entries in debug.log on my node too; that's OK-ish.
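If you want to double-check that your own node is following the main chain rather than one of those forks, getchaintips (a standard Bitcoin-derived RPC that Dash Core also exposes) lists all the chain tips it knows about:

$ dash-cli getchaintips      # the tip with "status": "active" is the chain your node follows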
FindTx: not sure about this one tbh, but it's related to txindex. I guess the txindex db is corrupted on your node (I don't see anything like that on my nodes) and you'd have to reindex to fix it. I recommend waiting for your MN to receive its next reward and reindexing soon after that to avoid missing payments due to MN downtime (the reindex should take an hour or two). Check your MN after maybe half a day and make sure it wasn't PoSe-banned while it was reindexing (note: PoSe bans aren't instant).
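A minimal sketch of that sequence, assuming you start dashd by hand (if you run it under a service manager, stop the service and add reindex=1 to dash.conf for one restart instead):

$ dash-cli stop                  # shut the node down cleanly right after the payout, wait for it to exit
$ dashd -reindex -daemon         # rebuild the block and transaction indexes from the block files
$ dash-cli masternode status     # once it's synced again, confirm the MN is still enabled and not PoSe-banned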
May I completely disable txindex on the masternode with the -txindex=0 option, to see if the ERROR still appears on the MN?
No, masternodes require txindex to be able to fully verify IS/CL/governance. I think you can avoid reindexing by (re)moving the indexes/txindex folder (stop your node before doing this!) - it should sync the txindex data in the background the next time you start your node. It should take 5-10 minutes depending on your specs. However, I only tried that on a non-MN node so I can't guarantee results.
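For reference, a sketch of that approach, assuming the default Linux datadir ~/.dashcore (adjust the paths if yours lives elsewhere):

$ dash-cli stop                                    # make sure the node is fully stopped first
$ mv ~/.dashcore/indexes/txindex ~/txindex.bak     # move it aside instead of deleting, just in case
$ dashd -daemon                                    # txindex should re-sync in the background after startup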
Closing as questions have been answered.
Yes, the problem is solved, but in a slightly different way. At first I deleted the indexes folder as advised, and the node kept working as an MN successfully while rebuilding txindex, but at one point it got stuck on a corrupted blk00153.dat. I replaced it with the same file from my local machine. Then blk00154.dat turned out to be corrupted, and so on. So I decided to replace all my MN's block files (from 154 onward), plus indexes, evodb and chainstate, with those from my local machine. Now there are no Deserialize or I/O error - ReadCompactSize(): size too large: iostream error records in the log.
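For anyone else hitting this, the copy itself can be done roughly like this, with dashd stopped on both machines first (MN_HOST is just a placeholder for the masternode's SSH address; this copies the whole blocks folder rather than only the files from blk00154.dat onward):

$ dash-cli stop    # run on both machines and wait for shutdown
$ rsync -av ~/.dashcore/blocks ~/.dashcore/indexes ~/.dashcore/evodb ~/.dashcore/chainstate MN_HOST:~/.dashcore/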
How can this masternode, for example, get rewarded 3-5 times per hour? The IP is 168.119.80.11 https://chainz.cryptoid.info/dash/address.dws?XcgAMaQahByZvfcKLU4T37gVy2nwuQpwLE.htm