Closed. dnck closed this pull request 5 years ago.
Merging #91 into dev will increase coverage by <.01%. The diff coverage is 3.57%.
```diff
@@             Coverage Diff              @@
##                dev      #91     +/-   ##
============================================
+ Coverage     39.97%   39.98%   +<.01%
- Complexity     1052     1055       +3
============================================
  Files           160      160
  Lines          7677     7691      +14
  Branches        956      957       +1
============================================
+ Hits           3069     3075       +6
- Misses         4255     4258       +3
- Partials        353      358       +5
```
Impacted Files | Coverage Δ | Complexity Δ | |
---|---|---|---|
...x/hlx/service/stats/TransactionStatsPublisher.java | 30.61% <ø> (ø) | 3 <0> (ø) | |
src/main/java/net/helix/hlx/network/Node.java | 38.37% <3.57%> (-0.56%) | 46 <0> (+2) | :arrow_down: |
.../main/java/net/helix/hlx/TransactionValidator.java | 46.62% <0%> (-1.36%) | 20 <0> (-1) | :arrow_down: |
.../net/helix/hlx/model/persistables/Transaction.java | 98.88% <0%> (-1.12%) | 12 <0> (-1) | :arrow_down: |
...et/helix/hlx/controllers/TransactionViewModel.java | 56.18% <0%> (+0.51%) | 47 <0> (+1) | :arrow_up: |
src/main/java/net/helix/hlx/crypto/Winternitz.java | 70.9% <0%> (+4.54%) | 27 <0> (+2) | :arrow_up: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 96f3f82...8152846. Read the comment docs.
> If you get the sender, you will only get the last node that sent it (not the origin, much less which nodes it has already passed through). So, this might reduce redundancy only to some extent.

True. I am researching some ways to canonically name nodes and communicate this information in a compact data structure along with the packets. The general idea would be to unpack the path the received message followed along with the message itself, cache the path, then prevent the node from sending to members of the path. But that depends on A LOT of assumptions, so I am only submitting the PR with the simplest solution.
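For what it's worth, here is a purely hypothetical illustration of that path-tracking idea; nothing like this exists in the codebase, every name below is invented, and the wire encoding of the path is hand-waved entirely:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of path-aware gossip: all names are invented.
class PathGossipSketch {

    // A message carries the canonical names of nodes it has already visited.
    static class Message {
        final byte[] payload;
        final Set<String> path = new HashSet<>();
        Message(byte[] payload) { this.payload = payload; }
    }

    void relay(Message msg, String selfName, List<String> neighborNames) {
        msg.path.add(selfName); // record ourselves before forwarding
        for (String neighbor : neighborNames) {
            // Skip every node already on the message's path, not just the
            // immediate sender, cutting redundant rebroadcasts further.
            if (!msg.path.contains(neighbor)) {
                send(neighbor, msg);
            }
        }
    }

    void send(String neighbor, Message msg) { /* network send elided */ }
}
```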
> Although I would be curious as to why IOTA hasn't chosen to check the sender?
Me too. Admittedly, I took their "sadly" comment as silly because there's a simple solution, but on second thought, maybe they have other, nefarious reasons for not doing it the way I suggest.
> Regarding the refactoring: Yeah, this is fine, although I would try to find more readable method names.

Ok, shall I close the PR and resubmit with better names?
> And btw, has this affected the network package unit tests in any way?

Nope, everything is still passing. But I made a comment on using seconds rather than ms for the stored times. Is there a reason for that?
There's an open PR in IRI that rewrites the entire network directory: https://github.com/iotaledger/iri/pull/1393

One of the features added in the PR is preventing the node from sending back to the original sender: https://github.com/luca-moser/iri/blob/network-rewrite/src/main/java/com/iota/iri/network/pipeline/BroadcastStage.java

The entire PR seems like it's in the final stages of being approved.
Can we wait until approval and then adapt their solution?
The PR is currently merged into the IOTA dev branch. It is your call whether we port from their dev, from master, or ignore it. But for now, ignoring my small changes seems fine to me.
Going to close this temporarily, as there is already a solution in the works that we can hopefully adapt soon.
This pull request makes a minor functional change to the Node class method `spawnBroadcasterThread`, and it refactors the Node class method `preProcessReceivedData`.

Functional change to `spawnBroadcasterThread`

The legacy code had a note saying that the Node "sadly" rebroadcasts transactions to the original sender of the transaction. The change proposed here checks the transaction view model's sender attribute before sending a transaction in the thread. If the recorded sender matches the neighbor about to be sent to, the node does not send the transaction; if there is no match, the node still sends. The method was checked in a small test net and works as intended.
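A minimal sketch of the sender check, not the actual diff; the `Neighbor` and `TransactionViewModel` shapes and the `getSender()`/`getAddress()` accessors below are stand-ins for the real Node internals:

```java
import java.util.List;

// Illustrative sketch only: the types and accessors are assumptions,
// standing in for the real Node/Neighbor/TransactionViewModel classes.
class BroadcasterSketch {

    interface Neighbor { String getAddress(); }
    interface TransactionViewModel { String getSender(); }

    void broadcast(TransactionViewModel tvm, List<Neighbor> neighbors) {
        for (Neighbor neighbor : neighbors) {
            String sender = tvm.getSender();
            // Skip the neighbor this transaction originally came from,
            // so we do not rebroadcast it back to its sender.
            if (sender != null && sender.equals(neighbor.getAddress())) {
                continue;
            }
            sendPacket(tvm, neighbor);
        }
    }

    void sendPacket(TransactionViewModel tvm, Neighbor neighbor) {
        // UDP/TCP send elided
    }
}
```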
Refactor of the `preProcessReceivedData` method

This method was very long. It has been refactored into several methods to enhance readability. The new methods are:

- `getIndexPlusOneOfNodeInNeighborsIfNotExistsZero`
- `putDigestAndTxvmHashPairInNodesRecentSeenBytesCache`
- `recomputeRecentSeenBytesStatistics`
- `addNewNeighborIfTestNetAndMaxPeersNotZero`
As this is a refactor, the function of the method has been preserved.
The control flow of the method is as follows:

First, the Node checks whether the sender's address is in its list of Neighbors using the new `getIndexPlusOneOfNodeInNeighborsIfNotExistsZero` method, which returns 0 if the sender is not in the list, and the index of the Neighbor plus 1 if the sender is a neighbor.
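A minimal sketch of that lookup, assuming a `matches(SocketAddress)` helper on Neighbor (the real signature may differ):

```java
import java.net.SocketAddress;
import java.util.List;

// Sketch of the index-plus-one lookup; Neighbor and matches() are
// assumptions standing in for the real network classes.
class NeighborLookupSketch {

    interface Neighbor { boolean matches(SocketAddress address); }

    // Returns index + 1 of the sender in the neighbor list, or 0 if absent,
    // so that 0 doubles as a "not a neighbor" sentinel.
    int getIndexPlusOneOfNodeInNeighborsIfNotExistsZero(
            SocketAddress senderAddress, List<Neighbor> neighbors) {
        for (int i = 0; i < neighbors.size(); i++) {
            if (neighbors.get(i).matches(senderAddress)) {
                return i + 1;
            }
        }
        return 0;
    }
}
```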
If the sender's address matches a neighbor, and P_DROP_TRANSACTION from the BaseHelixConfig is not greater than some random value, the Node takes the following steps in order:

1. Check that the transaction has not been recently cached; if it has not, validate the transaction and use the new synchronized method `putDigestAndTxvmHashPairInNodesRecentSeenBytesCache` to put it into the cache (sketched below).
2. Add the transaction to the receive queue for eventual storage.
3. Check the final 32 bytes of the received data to determine whether there is a request for a specific transaction or a random tip. This functionality has been preserved.
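A minimal sketch of the synchronized cache put, using a bounded LinkedHashMap as a stand-in for the node's fixed-size recently-seen-bytes cache; the field names, key type, and capacity are assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch; the real cache type and capacity are assumptions.
class RecentSeenBytesSketch {

    private static final int CACHE_SIZE = 5000; // assumed capacity

    // Insertion-ordered map that evicts its eldest entry once full,
    // mimicking a fixed-size FIFO cache of digest -> transaction hash.
    private final Map<Long, String> recentSeenBytes =
            new LinkedHashMap<Long, String>() {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
                    return size() > CACHE_SIZE;
                }
            };

    // synchronized because receiver threads share the cache
    synchronized void putDigestAndTxvmHashPairInNodesRecentSeenBytesCache(
            long digest, String txvmHash) {
        recentSeenBytes.put(digest, txvmHash);
    }

    synchronized String getCachedHash(long digest) {
        return recentSeenBytes.get(digest);
    }
}
```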
The final two extracted methods recompute the cache statistics and, in the case of an address mismatch, add the incoming sender to the list of neighbors under certain conditions given in the node's configuration.
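And a rough sketch of what the statistics recompute might look like; the counters, sample interval, and log output are assumptions, not the actual implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Assumed counters and interval; only the shape of the recompute is shown.
class CacheStatsSketch {

    private static final long SAMPLE_INTERVAL = 50_000; // assumed window size

    private final AtomicLong recentSeenBytesHits = new AtomicLong();
    private final AtomicLong recentSeenBytesMisses = new AtomicLong();

    void recomputeRecentSeenBytesStatistics() {
        long hits = recentSeenBytesHits.get();
        long misses = recentSeenBytesMisses.get();
        // Once enough packets have been sampled, report the hit/miss ratio
        // and reset the counters so the next ratio reflects a fresh window.
        if (hits + misses >= SAMPLE_INTERVAL) {
            System.out.printf("RecentSeenBytes cache hit/miss: %d/%d%n", hits, misses);
            recentSeenBytesHits.set(0);
            recentSeenBytesMisses.set(0);
        }
    }
}
```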