Closed · gufmar closed this issue 1 year ago
Also getting the same:

```
[dell:cardano.node:Info:1381658] [2021-01-15 13:08:55.00 UTC] fromList [("time(ps)",Number 1.098036535310466e18),("kind",String "MeasureTxsTimeStart"),("mempoolNumBytes",Number 2381.0),("mempoolNumTxs",Number 5.0)]
[dell:cardano.node.Mempool:Info:1381658] [2021-01-15 13:08:55.00 UTC] fromList [("tx",Object (fromList [("txid",String "txid: TxId {_unTxId = \"e7f6cbd91f29fd370a3a378dfefcf47822b2ddbc89e0ba376aa5b434509fdb1a\"}")])),("kind",String "TraceMempoolAddedTx"),("mempoolSize",Object (fromList [("numTxs",Number 5.0),("bytes",Number 2381.0)]))]
[dell:cardano.node.Mempool:Info:1395172] [2021-01-15 13:08:55.02 UTC] fromList [("tx",Object (fromList [("txid",String "txid: TxId {_unTxId = \"e7f6cbd91f29fd370a3a378dfefcf47822b2ddbc89e0ba376aa5b434509fdb1a\"}")])),("kind",String "TraceMempoolRejectedTx"),("mempoolSize",Object (fromList [("numTxs",Number 5.0),("bytes",Number 2381.0)])),("err",Object (fromList [("kind",String "BadInputsUTxO"),("error",String "The transaction contains inputs that do not exist in the UTxO set."),("consumed",Number 0.0),("badInputs",Array [Array [String "014b1a856ea20359ea4d8b87172b39bc5b033e30c28257377e40977cb9db093f",Number 1.0],Array [String "d98f7b1fc16c635184007249adae5d163378089856991a4a084680cd460e31d0",Number 0.0]]),("produced",Number 6.40548051e8)]))]
```
Hi,
I'm running 1.25.1 and I can still see this issue too:
"error",String "The transaction contains inputs that do not exist in the UTxO set."
With the full log entry:

```
[cardano-:cardano.node.Mempool:Info:2524] [2021-02-16 08:56:45.45 UTC] fromList [("tx",Object (fromList [("txid",String "txid: TxId {_unTxId = \"64d053bdff7cddc3e5a79f20c9e82223c72503ac22459cf78336357a5706b8f8\"}")])),("kind",String "TraceMempoolRejectedTx"),("mempoolSize",Object (fromList [("numTxs",Number 22.0),("bytes",Number 11407.0)])),("err",Object (fromList [("kind",String "BadInputsUTxO"),("error",String "The transaction contains inputs that do not exist in the UTxO set."),("consumed",Number 0.0),("badInputs",Array [Array [String "2ce2e3810eac86908668f9ad5b6af27e3a95e5295ae6e8973e4bfa1a195154ce",Number 0.0]]),("produced",Number 8.77549511e8)]))]
```
I can also see the following:
"error",String "Staking credential already registered"
With the full log entry:

```
[cardano-:cardano.node.Mempool:Info:2454] [2021-02-16 08:56:32.89 UTC] fromList [("tx",Object (fromList [("txid",String "txid: TxId {_unTxId = \"bec98f2fcb4d11b45dcca9a1cc71179be4661cf037e9185a6ba871bf4a8e1225\"}")])),("kind",String "TraceMempoolRejectedTx"),("mempoolSize",Object (fromList [("numTxs",Number 19.0),("bytes",Number 10290.0)])),("err",Object (fromList [("kind",String "StakeKeyAlreadyRegisteredDELEG"),("error",String "Staking credential already registered"),("consumed",Number 0.0),("credential",String "KeyHashObj (KeyHash \"a095a896b660fa0327e7a1530aa44bfd318ecfd1ddf8a0a755e8014b\")"),("badInputs",Array [Array [String "a6279e7a741f98e1e0c7d898d9a2292b7424ba154319816f5eddc0c79fa91ecd",Number 4.0]]),("produced",Number 2.99e8)]))]
```
Some additional details:
Some additional details: I have two pre-production nodes to which I just added topologyUpdater last night. This morning I found both relays AND the BP full of these mempool info messages. I want to get my head around it before we bring these servers into production and pledge to them.
Weird, I only get this from one relay; the BP and the other relay are clean. Would you mind sharing your topology files for all three of them?
Sure... appreciate any observations you may have.
Block Producer:

```json
{ "Producers": [ { "addr": "10.111.1.222", "port": 3001, "valency": 1 }, { "addr": "10.112.1.222", "port": 3001, "valency": 1 } ] }
```

Relay 1:

```json
{ "resultcode": "201", "networkMagic": "764824073", "ipType":4, "Producers": [ { "addr": "10.110.1.111", "port": 3001, "valency": 1 } , { "addr": "10.112.1.222", "port": 3001, "valency": 1 } , { "addr": "19.189.249.24", "port": 7227, "valency": 1 } , { "addr": "56.209.156.214", "port": 7227, "valency": 1 } , { "addr": "kiblocrelays.asuscomm.com", "port": 8081, "valency": 1, "distance":190, "continent":"NA", "country":"US", "region":"VA" }, { "addr": "135.125.132.102", "port": 7001, "valency": 1, "distance":551, "continent":"NA", "country":"FR", "region":"NJ" }, { "addr": "104.131.15.193", "port": 6001, "valency": 1, "distance":589, "continent":"NA", "country":"US", "region":"NY" }, { "addr": "68.200.198.205", "port": 3001, "valency": 1, "distance":1017, "continent":"NA", "country":"US", "region":"FL" }, { "addr": "35.239.129.200", "port": 6000, "valency": 1, "distance":1600, "continent":"NA", "country":"US", "region":"IA" }, { "addr": "44.241.117.88", "port": 64137, "valency": 1, "distance":3561, "continent":"NA", "country":"US", "region":"OR" }, { "addr": "157.245.101.155", "port": 6001, "valency": 1, "distance":3844, "continent":"NA", "country":"IN", "region":"OR" }, { "addr": "37.59.183.1", "port": 6002, "valency": 1, "distance":6126, "continent":"EU", "country":"FR", "region":"ENG" }, { "addr": "relaynode3.bravostakepool.nl", "port": 3001, "valency": 1, "distance":6412, "continent":"EU", "country":"NL", "region":"ZH" }, { "addr": "51.195.91.118", "port": 3001, "valency": 1, "distance":6695, "continent":"EU", "country":"GB", "region":"SL" }, { "addr": "207.154.204.57", "port": 6000, "valency": 1, "distance":6768, "continent":"EU", "country":"DE", "region":"HE" }, { "addr": "cardano-relay2.kysenpool.io", "port": 3002, "valency": 2, "distance":6949, "continent":"EU", "country":"DE", "region":"BY" }, { "addr": "161.97.78.100", "port": 4002, "valency": 1, "distance":7054, "continent":"EU", "country":"DE", "region":"BY" }, { "addr": "relays.cardano-mainnet.uniquestaking.com", "port": 3001, "valency": 2, "distance":7457, "continent":"SA", "country":"BR", "region":"SP" }, { "addr": "ada-relay-001.cardaknow.io", "port": 3001, "valency": 1, "distance":12827, "continent":"AS", "country":"TW", "region":"CYI" }, { "addr": "180.150.49.218", "port": 3001, "valency": 1, "distance":16190, "continent":"OC", "country":"AU", "region":"VIC" } ] }
```

Relay 2:

```json
{ "resultcode": "201", "networkMagic": "764824073", "ipType":4, "Producers": [ { "addr": "10.110.1.111", "port": 3001, "valency": 1 } , { "addr": "10.111.1.222", "port": 3001, "valency": 1 } , { "addr": "19.189.249.24", "port": 7227, "valency": 1 } , { "addr": "56.209.156.214", "port": 7227, "valency": 1 } , { "addr": "uswest2relay4ms.azureada.com", "port": 3001, "valency": 1, "distance":5, "continent":"NA", "country":"US", "region":"WA" }, { "addr": "35.188.36.111", "port": 3001, "valency": 1, "distance":1102, "continent":"NA", "country":"US", "region":"CA" }, { "addr": "35.192.181.3", "port": 3001, "valency": 1, "distance":2006, "continent":"NA", "country":"US", "region":"IA" }, { "addr": "missouri.cardanode.io", "port": 6000, "valency": 1, "distance":2569, "continent":"NA", "country":"US", "region":"MO" }, { "addr": "paris-pool-eiffel.ddns.net", "port": 3003, "valency": 1, "distance":3306, "continent":"NA", "country":"DE", "region":"GA" }, { "addr": "198.251.75.12", "port": 6000, "valency": 1, "distance":3603, "continent":"NA", "country":"US", "region":"PA" }, { "addr": "157.230.84.148", "port": 6000, "valency": 1, "distance":3674, "continent":"NA", "country":"US", "region":"NY" }, { "addr": "relay3.nordicpool.org", "port": 3002, "valency": 1, "distance":7552, "continent":"EU", "country":"SE", "region":"AB" }, { "addr": "94.212.121.53", "port": 6001, "valency": 1, "distance":7740, "continent":"EU", "country":"NL", "region":"GR" }, { "addr": "78.23.133.211", "port": 3022, "valency": 1, "distance":7863, "continent":"EU", "country":"BE", "region":"BRU" }, { "addr": "eur2.limepool.io", "port": 6000, "valency": 1, "distance":8058, "continent":"EU", "country":"DE", "region":"BE" }, { "addr": "51.68.174.136", "port": 6000, "valency": 1, "distance":8119, "continent":"EU", "country":"FR", "region":"SL" }, { "addr": "cardano-relay2.kysenpool.io", "port": 3001, "valency": 2, "distance":8280, "continent":"EU", "country":"DE", "region":"BY" }, { "addr": "161.97.78.100", "port": 4002, "valency": 1, "distance":8408, "continent":"EU", "country":"DE", "region":"BY" }, { "addr": "asia-2.pool-overflow.com", "port": 3000, "valency": 1, "distance":10566, "continent":"AS", "country":"HK", "region":"NTW" }, { "addr": "125.168.19.192", "port": 6000, "valency": 1, "distance":13312, "continent":"OC", "country":"AU", "region":"VIC" } ] }
```
From my understanding, this just means that the same transaction is received several times in quick succession from different peers; of course, only the first copy is valid, and the subsequent ones look like double spends.
That is fine. But it feels weird that the CPU spikes so much, and why only on one of the two relays? Not sure if the fact that I'm running this in Kubernetes might influence it.
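One way to confirm that these rejections are duplicate submissions rather than real double spends is to count how often each txid appears among the `TraceMempoolRejectedTx` lines. A minimal sketch, using sample lines that stand in for a real node log (the field layout is taken from the snippets above; the log path and txids here are placeholders):

```shell
# Create a tiny sample log; in practice, point LOG at your node's log file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[relay:cardano.node.Mempool:Info:1] fromList [("txid",String "txid: TxId {_unTxId = \"e7f6cbd9\"}"),("kind",String "TraceMempoolRejectedTx")]
[relay:cardano.node.Mempool:Info:2] fromList [("txid",String "txid: TxId {_unTxId = \"e7f6cbd9\"}"),("kind",String "TraceMempoolRejectedTx")]
[relay:cardano.node.Mempool:Info:3] fromList [("txid",String "txid: TxId {_unTxId = \"64d053bd\"}"),("kind",String "TraceMempoolRejectedTx")]
EOF

# Count rejections per txid: a high count per id with many peers connected
# indicates the same tx being re-offered by each peer, not real bad inputs.
grep 'TraceMempoolRejectedTx' "$LOG" \
  | grep -oE '_unTxId = [^}]*' \
  | sort | uniq -c | sort -rn
```

With ~20 remote peers you would expect counts close to the peer count for each txid, matching the 15-19 duplicate messages per transaction reported in this issue.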
It's not k8s; I'm experiencing the same behaviour on bare-metal machines: a setup with a BP node and 2 relays, one of which has a significantly higher load.
I guessed so. Thanks for confirming.
I eventually set "TraceMempool": false, and this made a HUGE difference in CPU usage. While it's true I lose TX traces, I still have them on the BP, so it's not really a big deal. On the flip side, the CPU now sees 50% of the load, and on a Raspberry Pi this is crucial.
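For reference, the change described above is a single key in the node's configuration file; the surrounding keys vary by setup, so this is just the relevant fragment:

```json
{
  "TraceMempool": false
}
```

Restart the node after editing; TX traces stay visible on any node that keeps the default `"TraceMempool": true` (e.g. the BP, as described above).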
Does anyone have an update on this? I am still seeing it in my BP logs, but it does not seem to affect block production; that may be because my servers have high specs. This extra logging may cause unnecessary load on servers with less CPU/memory and slower disks. I suggest not logging the rejection if the transaction has already been logged, and making this logging optional, available only when cardano-node is started in a debug mode.
Closing this. If this is still relevant please re-open.
Summary
The following log sample is from a block producer fed by 3 relays. It shows multiple messages with descriptions like:
BadInputsUTxO
...TraceMempoolRejectedTx
...The transaction contains inputs that do not exist in the UTxO set
This is somewhat misleading: after each new incoming transaction, the message is logged again for every duplicate copy of that transaction arriving from the other connected peers. Log files on relay nodes with 20 remote peers are blown up with ~15-19 such "bad" messages for every processed transaction.
Turns out the TX is effectively included in the block and there is no functional issue. It is just a lot of complaints about already-known TXs, and possibly multiple (resource-intensive?) checks of valid inputs, instead of checking the tx hash against already-verified TXs waiting in the mempool.
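The optimization hinted at above, checking an incoming tx hash against the set of already-verified mempool entries before re-running the expensive input validation, could be sketched roughly like this. This is a hypothetical illustration, not the node's actual Haskell implementation; all names are invented:

```python
def receive_tx(mempool_ids: set, tx_id: str, validate) -> str:
    """Deduplicate incoming txs by id before expensive UTxO validation.

    mempool_ids: txids already verified and waiting in the mempool.
    validate: callback running the (expensive) input validation.
    """
    if tx_id in mempool_ids:
        # Cheap set lookup: a duplicate offer from another peer.
        # No "rejected" log entry, no revalidation of inputs.
        return "ignored-duplicate"
    if not validate(tx_id):
        # Genuinely bad inputs (e.g. spent UTxOs): worth logging.
        return "rejected"
    mempool_ids.add(tx_id)
    return "accepted"
```

Under this scheme, the 15-19 duplicate offers per transaction would hit only the cheap lookup path instead of producing a `TraceMempoolRejectedTx` entry each.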
Steps to reproduce
Grep log files for
BadInputsUTxO
or TraceMempoolRejectedTx
at "TracingVerbosity": "NormalVerbosity" and "minSeverity": "Info"
Expected behavior
Either don't mention TXs that were ignored because they are already known in the logs at all, or don't make it look like a "bad" / "not found" / "rejected" event.