Closed sarvalabs-karthik closed 1 month ago
End users submit transactions via REST endpoints on API nodes, not via the gossip network directly. Transactions are validated there against the ledger state before being sent over the gossip network to relays.
Relays have adaptive rate limiters to deal with "spam" sent directly over the gossip network.
I agree that there is no issue if the tx is submitted through RPC. In the case of the p2p network, there is an app rate limiter, but it can be bypassed by firing txns with app ids ranging from 1 to 2^64. The bigger issue is that all the nodes in the network keep propagating these messages even though they are invalid. Ethereum, by contrast, only gossips or broadcasts a txn after full validation.
In simple words, I would modify my algorand node code to send invalid txns in p2p publishing.
Transaction costs are so low on Algorand that it is easier to spam the network with valid txns - no code modifications needed. The current hub-and-spoke topology handles 100x spikes just fine.
But P2P implementation is not yet complete so this is a good time for feedback.
I have a question regarding the deduplication of raw transaction groups. I noticed that a rotating salted cache is used instead of a plain digest cache. Could you explain the reasoning behind this choice, specifically the advantages or considerations that led to it?
Hello @algorandskiy ,
I am tagging you here as you might be the right person to clear my doubt regarding the tx-related salted cache.
There is a PR where deduplication for the wsnet handler was introduced: https://github.com/algorand/go-algorand/pull/4806 The Blake2 hash function was selected for its speed; a rotating salt was added to increase randomness, since Blake2 is relatively new and might (or might not) be susceptible to collisions.
Thank you for bringing the pubsub issue to our attention. It is indeed a problem and needs to be addressed.
After reviewing the Algorand code, I noticed that transactions are broadcast to other peers using a pub-sub mechanism without immediate validation. Wouldn't this approach allow someone to send invalid transactions, leading to high bandwidth consumption?