dashpay / platform

L2 solution for seriously fast decentralized applications for the Dash network
https://dashplatform.readme.io/docs/introduction-what-is-dash-platform
MIT License

bug(dashmate): core.log shows error 'CActiveMasternodeManager::Init -- ERROR: Could not connect to "myipaddress"' #1266

Closed qwizzie closed 1 year ago

qwizzie commented 1 year ago

After previously having problems getting my IP address reached by dashmate, I now have the problem that CActiveMasternodeManager cannot connect to my IP address after the latest dashmate update, 0.24.17 (which fixed my previous 'cannot read properties (ip)' error and allowed me to finish the dashmate setup).

Expected Behavior

A successful connection to my IP address and no related error in core.log.

Current Behavior

CActiveMasternodeManager cannot connect to my IP address.

Possible Solution

I suspect the problem is with the dash.conf that dashmate generates and controls, and which it does not allow me to change. I noticed the bind option is set to 0.0.0.0 instead of my IP address.
Shouldn't dashmate use my IP address in the bind there?
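As a side note on bind=0.0.0.0: that value means "listen on every local interface", which includes the machine's external address, so it is the normal setting for Docker deployments rather than a misconfiguration. A minimal sketch of the semantics, using plain Python sockets (nothing Dash-specific; the ephemeral port is chosen by the OS):

```python
import socket

# Bind a listener to 0.0.0.0 (all interfaces) on an OS-chosen ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))          # 0.0.0.0 = every local interface
server.listen(1)
port = server.getsockname()[1]

# A connection over the loopback interface still succeeds, because a
# 0.0.0.0 listener covers loopback as well as the external address.
client = socket.create_connection(("127.0.0.1", port), timeout=3)
print("connected on port", port)
client.close()
server.close()
```

The same logic is why a node bound to 0.0.0.0 can still be reachable on its public IP; whether it actually is depends on Docker port mapping and the firewall, not on the bind line.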

Current dash.conf (not editable, reverts back to below)

# general
daemon=0  # leave this set to 0 for Docker
logtimestamps=1
maxconnections=256
reindex=0

debuglogfile=/var/log/dash/core.log
logips=0

# JSONRPC
server=1
rpcuser=********
rpcpassword=*******
rpcwallet=main

rpcallowip=127.0.0.1
rpcallowip=172.16.0.0/12
rpcallowip=192.168.0.0/16

rpcworkqueue=64
rpcthreads=16

# external network
listen=1
dnsseed=0
allowprivatenet=0

externalip=********

# ZeroMQ notifications
zmqpubrawtx=tcp://0.0.0.0:29998
zmqpubrawtxlock=tcp://0.0.0.0:29998
zmqpubrawblock=tcp://0.0.0.0:29998
zmqpubhashblock=tcp://0.0.0.0:29998
zmqpubrawchainlocksig=tcp://0.0.0.0:29998
zmqpubrawchainlock=tcp://0.0.0.0:29998
zmqpubrawtxlocksig=tcp://0.0.0.0:29998

masternodeblsprivkey=*******

# network
port=9999
bind=0.0.0.0
rpcbind=0.0.0.0
rpcport=9998

core.log

2023-07-27T05:47:49Z Using data directory /home/dash/.dashcore
2023-07-27T05:47:49Z Config file: /home/dash/.dashcore/dash.conf
2023-07-27T05:47:49Z Config file arg: allowprivatenet="0"
2023-07-27T05:47:49Z Config file arg: bind="0.0.0.0"
2023-07-27T05:47:49Z Config file arg: daemon="0"
2023-07-27T05:47:49Z Config file arg: debuglogfile="/var/log/dash/core.log"
2023-07-27T05:47:49Z Config file arg: dnsseed="0"
2023-07-27T05:47:49Z Config file arg: externalip="********"
2023-07-27T05:47:49Z Config file arg: listen="1"
2023-07-27T05:47:49Z Config file arg: logips="0"
2023-07-27T05:47:49Z Config file arg: logtimestamps="1"
2023-07-27T05:47:49Z Config file arg: masternodeblsprivkey=****
2023-07-27T05:47:49Z Config file arg: maxconnections="256"
2023-07-27T05:47:49Z Config file arg: port="9999"
2023-07-27T05:47:49Z Config file arg: reindex="0"
2023-07-27T05:47:49Z Config file arg: rpcallowip="127.0.0.1"
2023-07-27T05:47:49Z Config file arg: rpcallowip="172.16.0.0/12"
2023-07-27T05:47:49Z Config file arg: rpcallowip="192.168.0.0/16"
2023-07-27T05:47:49Z Config file arg: rpcbind=****
2023-07-27T05:47:49Z Config file arg: rpcpassword=****
2023-07-27T05:47:49Z Config file arg: rpcport="9998"
2023-07-27T05:47:49Z Config file arg: rpcthreads="16"
2023-07-27T05:47:49Z Config file arg: rpcuser=****
2023-07-27T05:47:49Z Config file arg: rpcworkqueue="64"
2023-07-27T05:47:49Z Config file arg: server="1"
2023-07-27T05:47:49Z Config file arg: zmqpubhashblock="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawblock="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawchainlock="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawchainlocksig="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawtx="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawtxlock="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Config file arg: zmqpubrawtxlocksig="tcp://0.0.0.0:29998"
2023-07-27T05:47:49Z Using at most 256 automatic connections (1048576 file descriptors available)
2023-07-27T05:47:49Z Using 16 MiB out of 32/2 requested for signature cache, able to store 524288 elements
2023-07-27T05:47:49Z Using 16 MiB out of 32/2 requested for script execution cache, able to store 524288 elements
2023-07-27T05:47:49Z Script verification uses 7 additional threads

2023-07-27T05:48:12Z CActiveMasternodeManager::Init -- ERROR: Could not connect to ********
pshenmic commented 1 year ago

What is CActiveMasternodeManager? Is the core crashing after this message, or what exactly is happening? We need more info about the error.

qwizzie commented 1 year ago

Core is not crashing; it just can't seem to connect to the IP address of my Evo node (4000 Dash). It is also failing quorum initialization, most likely because of this, so I suspect I will get PoSe-banned at some point. core.log is the old debug.log, and CActiveMasternodeManager is just something that is mentioned there (no idea what exactly it is).

2023-07-27T06:11:33Z ProcessNewBlock : ACCEPTED
2023-07-27T06:11:33Z CActiveMasternodeManager::Init -- proTxHash=9669de13a19f9b17e505c7220ea91bba016a39f836d0b74dae97be2f1a52a1ec, proTx=CDeterministicMN(proTxHash=9669de13a19f9b17e505c7220ea91bba016a39f836d0b74dae97be2f1a52a1ec, collateralOutpoint=e83fda6e510d8df249c030a13744299898fde7fde936d57b4fb7f8f6072c6f23-1, nOperatorReward=0.000000, state=CDeterministicMNState(nVersion=2, nRegisteredHeight=1910400, nLastPaidHeight=0, nPoSePenalty=0, nPoSeRevivedHeight=-1, nPoSeBanHeight=-1, nRevocationReason=0, ownerAddress=XghRWfkewNVq5FW66xqgxyQpXViRKdwGoA, pubKeyOperator=abc9ec46770277b0b9c695b134ea634c499b8f763b9dfc103bb2bbaa2a2f028e40d5ba73f68c442588ea942614af2586, votingAddress=XhQNwBEdk6Y2mKyXJmxjZT5bpSMmFpfs4W, addr=145.131.29.214:9999, payoutAddress=XgPpgkYcupEAbi4xhvP4GUg7KS4TxWtqmP, operatorPayoutAddress=none)
2023-07-27T06:11:33Z CActiveMasternodeManager::Init -- Checking inbound connection to '********'
2023-07-27T06:11:38Z CActiveMasternodeManager::Init -- ERROR: Could not connect to *****:9999
2023-07-27T06:11:38Z CDKGSessionManager::InitNewQuorum -- height[1910736] quorum initialization failed for llmq_50_60 qi[0] mns[0]
2023-07-27T06:11:38Z CDKGSessionManager::InitNewQuorum -- height[1910736] quorum initialization failed for llmq_100_67 qi[0] mns[38]
2023-07-27T06:12:55Z ThreadSocketHandler -- removing node: peer=23 nRefCount=1 fInbound=1 m_masternode_connection=0 m_masternode_iqr_connection=0
2023-07-27T06:13:09Z ThreadSocketHandler -- removing node: peer=24 nRefCount=1 fInbound=0 m_masternode_connection=0 m_masternode_iqr_connection=0

So this is not related to the dashmate-controlled dash.conf? That one is as it should be?

pshenmic commented 1 year ago

It looks to me like a connection issue. I checked the port configuration of the core service; it binds the local Docker P2P port 9999 to 0.0.0.0 (you can see this via docker ps). Could it be your firewall that is blocking it? You can test it via telnet <your ip> 9999. Check 127.0.0.1 first, and then your external IP.
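The telnet check suggested above can also be scripted. A small sketch using only the Python standard library (the host and port values below are placeholders to fill in):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Telnet-style check: True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check loopback first, then the node's external IP (placeholder below):
print("127.0.0.1:9999 open:", port_open("127.0.0.1", 9999))
# print(port_open("<your external ip>", 9999))
```

If loopback succeeds but the external IP fails, the problem is almost certainly between the host and the outside world (firewall or provider filtering) rather than inside the container.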

No, this is not related to the dash.conf

qwizzie commented 1 year ago

Is this still correct with regard to firewall settings (for an Evo node, that is)?

ufw allow ssh/tcp
ufw limit ssh/tcp
ufw allow 9999/tcp
ufw logging on
ufw enable

Source : https://docs.dash.org/en/stable/docs/user/masternodes/server-config.html

Or should there be more for an Evo node?

qwizzie commented 1 year ago

$ telnet 127.0.0.1 9999
Trying 127.0.0.1...
Connected to 127.0.0.1.

$ telnet myipaddress 9999
Trying ****...
Connected to myipaddress.

Done directly from my server, so no problem there.

Update 1: ufw seems to be disabled (ufw status: disabled). Working on enabling it and testing further with a reboot.
Update 2: checked the ufw steps

ufw allow ssh/tcp --> already running
ufw limit ssh/tcp --> already running
ufw allow 9999/tcp --> existing rule
ufw logging on --> Could not update running firewall
ufw enable --> Firewall is active and enabled on system startup

So somehow ufw got disabled after the reboot, even after having enabled ufw through root before (which should keep ufw enabled after reboots).

This happened to me before on a previous Ubuntu system, which I fixed in /etc/rc.local (I just added 'ufw enable' there), but this can't be done that easily in Ubuntu 20.04.6 LTS, it seems.

Any tips or advice on keeping the ufw firewall enabled on Ubuntu 20.04.6 LTS across server reboots?
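For what it's worth, on Ubuntu ufw ships with a systemd unit, so the usual way to make the firewall survive reboots is to enable that unit rather than editing rc.local. A sketch (verify the unit name and state on your own system):

```shell
# Enable ufw's systemd unit so the firewall is restored at boot
sudo systemctl enable ufw
sudo systemctl status ufw     # should report enabled/active

# After the next reboot, confirm the rules are back:
sudo ufw status verbose
```

If the unit was already enabled and ufw still came up disabled, something else (another script or a conflicting firewall service) may be turning it off at boot.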

pshenmic commented 1 year ago

Ohhh, I just remembered a similar case from a long time ago in the Discord. The answer there was either to keep ufw disabled or to allow traffic from the private subnets via ufw allow from 172.16.0.0/12.

However, there was an issue with the local network in that case, so I am not sure.

qwizzie commented 1 year ago

The problem seems solved now that I have enabled ufw; I just hope it won't disable itself like that again. For now I will run ufw status after each reboot and check whether it is still enabled.

Update: I actually found a way to make sure the ufw firewall stays enabled after reboots --> https://www.linuxbabe.com/linux-server/how-to-enable-etcrc-local-with-systemd

pshenmic commented 1 year ago

Sweet. I'm glad your issue is resolved