I am seeing this across a number of servers. Currently my (8) node server has 1580 connections from 37.35.96.x.
A quick netstat -n shows connections from .13, .15, .23, .21, .19, .17, etc. See details in the attached file. storj.txt
For reference, ports 4000-4009 are node 0, 4010-4019 node 1, ... 4070-4079 node 7. All ports are DNATed on the firewall. 10.184.155.136 is the internal address of the server. uspoe.carpenter-farms.us is the external public address (66.128.118.36) /* Recently changed firewalls to isolate this traffic */
Now server22, same story:
netstat -n|grep 37.35.96.22 |wc -l
609
...and growing
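In case it helps, a rough per-node-port breakdown of those connections (a sketch assuming the usual IPv4 addr:port columns from netstat -n):
netstat -n | grep '37\.35\.96\.' | awk '{print $4}' | awk -F: '{print $NF}' | sort -n | uniq -c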
Those servers belong to @th3van and he is running a script that pumps data into the network.
I'd like to keep that going so we can determine how to better distribute contract publications from busy bridges like that.
Those scripts are costing us dearly. My server (8 nodes, 8TB) is at 3245 connections this morning. That cannot be good since I've only accumulated 104G of shards. At this rate I'll run out of connection ports before reaching 8TB of data.
Per Deadgray's experience with iptables blocking of those machines, this is also costing us huge amounts of CPU time. In my case it really doesn't matter, it's just costing me some alt-coin mining cycles, but for most people, driving the CPU to 100% is not going to be acceptable. Disk space is one thing, threatening system performance is another altogether. However, I'd much prefer to see those cycles spent negotiating new contracts rather than managing thousands of port connections.
For what it's worth, I'm routing my server through a dedicated firewall right now so that I can monitor network bandwidth utilization. I'm typically seeing 5-20 Mbps, a rather small fraction of the 200 Mbps bandwidth I have available. This is reflected in the rather trivial amount of shards I've collected to date.
I'm fully aware of the implications. But turning off the scripts does not solve the problem. The network needs to be able to handle these scenarios, otherwise in a production network this attack could be done intentionally. Leaving the scripts running will give us an opportunity to diagnose and devise a mitigation strategy for the protocol implementation.
I like the approach and fully support it, given that it is not considered a production characteristic. I suspect you have two eager end users in Deadgray and myself for any patches you would like to try. Being Linux CLI users, patches should be easier for us to deploy.
FYI: Up to 4591 connections to that subnet supporting 137G of shards.
Note that if this was a real, unplanned situation, we would both simply block that ip range of the attackers via iptables, as Deadgray initially did.
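For reference, the emergency block is a single rule; a minimal sketch assuming the whole 37.35.96.0/24 range is the offender (with the matching delete to lift it once testing resumes):
iptables -I INPUT -s 37.35.96.0/24 -j DROP
iptables -D INPUT -s 37.35.96.0/24 -j DROP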
@bookchin You told me that the bridge will send the contract to the nodeID close to the shard hash. The bridge should therefore send contracts to more or less random nodes. Is the bridge uploading so many shards that every node is affected, or is it sending many contracts to the same node? This issue looks like it is always the same node.
I sent th3van my SSH key. If I can get access to the bridge logfile I should be able to find the problem.
@littleskunk that's correct, the shard distribution should be pretty random. I'd be interested in seeing what you might find.
Varies a lot in my experience. One run of several hours (that I watched) had 95% going to one node, from one bridge node. Other times I see upwards of a half-dozen active bridge nodes. Currently around 4200 connections; netstat -n output attached for your analysis. netstatn.txt
Just a reminder about my setup: Ports 4000-4009 are node 0, 4010-4019 are node 1... 4070-4079 are node 7.
Since setup, the 8 nodes have randomly received between 5 GB and 51 GB - not an even distribution but not horribly out of whack. http://puu.sh/pY3iK/99c800bdd1.png - 172 GB in total.
The bridge is sending too many SUBSCRIBE messages. See https://github.com/Storj/core/issues/257
Let's ignore the SUBSCRIBE messages and focus on the PUBLISH messages in this issue:
76462 'sending PUBLISH' messages in one hour.
skunk@Server010:~$ grep 'sending PUBLISH' onehourbridgelog.log | wc -l
76462
The dashboard shows ~1000 farmers. I would expect ~70 PUBLISH messages for each node.
Let's check some nodeIDs:
skunk@Server010:~$ grep 'sending PUBLISH' onehourbridgelog.log | grep 'c5c230c448f0d2619e018d30204221f52b8fe35c' | wc -l
26299
skunk@Server010:~$ grep 'sending PUBLISH' onehourbridgelog.log | grep 'c340d4d6491b67d3699661c8a4ac3234d253081a' | wc -l
20509
skunk@Server010:~$ grep 'sending PUBLISH' onehourbridgelog.log | grep 'c16461f35107615bd2558fc9e3c415ac93395fb5' | wc -l
3617
I did not check all nodeIDs. More than 50% of the PUBLISH messages are going to the same 2 nodes. It looks like the bridge is sending to nodeIDs close to its own ID and not, as expected, to nodeIDs close to the shard hash.
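Quick sanity check on that figure, using just the counts above:
echo $(( (26299 + 20509) * 100 / 76462 ))   # prints 61, i.e. ~61% of PUBLISH messages went to the top two nodeIDs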
And last but not least 10 bridge minions are sending duplicate PUBLISH messages. See #258
@littleskunk if you have ssh access to the *.dk bridges, can you confirm they are running kad-quasar v0.1.3? If they are not, that's our DoS issue. You should just be able to run npm list kad-quasar
I just checked, and all fourteen bridges (*.storj.dk) are running kad-quasar@0.1.3 / kad@1.5.16 :
root@Server012:~/bridge# npm list kad-quasar
storj-bridge@0.10.0 /root/bridge
└─┬ storj@2.0.5
└── kad-quasar@0.1.3
root@Server012:~/bridge# npm list kad
storj-bridge@0.10.0 /root/bridge
└─┬ storj@2.0.5
└── kad@1.5.16
Well damn. I'll keep digging.
@deadgray have you seen any DDoS activity since the shard shutdown? @bookchin it's going to be hard to test any fixes unless shards start appearing again. I currently have ZERO bridge activity on either of my CLI systems (5 nodes in total), nor on my local GUI box.
@1credit I have access to the bridge logfile. A simple count and I can see the "bad result". I would expect an approximately equal number of PUBLISH messages to every node, not 30% of the messages going to a single node.
The more nodes we get the lower the risk is for any of us to be the spammed node.
I pulled the latest "1-hour stats" from the storj.dk bridge servers, to confirm that they are actually working and sending out as many shards as possible to farmers. On a side note, I had to move /tmp to memory using tmpfs, since our old 48-spindle HDD SAN was not fast enough to handle the R/W load of the hungry fourteen bridge servers.
Shards sent: 36835 (or 10.2 shards/sec)
Bytes sent: 309 GB
Unique nodeIDs: 407
Number of servers: 14
Number of minions per server: 10
@bookchin :
I have commented out the "this._options.seeds.length" check (and the following 2 lines) in the ./node_modules/storj/lib/network/index.js
file, in an attempt to get the minions to connect to as many different seeds (pulled from the http://api.storj.io/contacts?connected=true
list) as possible. I found that after restarting a bridge server, all 10 minions would connect to the same seed, so shard offers were routed through that single node and only a few farmers were getting a ton of shards in a very short amount of time. Over time the minions would eventually try to connect to other seeds, but it could go on for hours before that happened, and even then only a few of the 10 minions would have re-connected to a new seed. By commenting out those 3 lines, I managed to get the minions to connect to many more distinct seeds (sometimes even 10 unique seeds). When doing so, the shard offers seemed to be distributed across more seeds/farmers, so more unique farmers / nodeIDs were getting offers. Counting the unique nodeIDs after 24 hours gave roughly ~225 when all 10 minions used the same seed, versus ~500-600 unique IDs when each minion used a unique seed.
Now this is all based on storj (1.4.0) / bridge (0.9.3), and not the current release of storj (2.0.5) / bridge (0.10.0), but I am planning on doing a test soon(tm) and will create a new issue if this is still happening.
My theory is that by using unique seeds the shards are spread more evenly, but 140 minions might not be enough, since @1credit is still not seeing any bridge activity. Maybe a few more restarts would connect him to a node that is closer to me, so he would start getting offers? Or setting one of my bridges as a seed might help?
I'd be thrilled to hard code your bridge in as a seed. Can you provide the URL line for my config.json files? Feel free to email me if you don't want to post that publicly. GitHub should now be showing my public e-mail address.
@th3van excellent sleuthing! Thanks for taking the time to go through this.
@1credit if you just send a GET request to whatever bridge you like, you'll get back JSON. Look for the x-network-seeds property and pick a few of those.
@bookchin - alas, you just jumped WAY outside my knowledge circle. If you care to baby-step me through that, I'll probably learn a lot... Alas, I'm a Unix admin type person, and not terribly web-app literate.
@1credit @Th3Van See https://github.com/Storj/core/issues/221 I assume that is the reason why not every node is receiving the PUBLISH messages.
@1credit just navigate to http://server013.storj.dk/ for instance. That will send back a schema that describes the API, but also includes a list of the network seeds that bridge is running. In this case:
[
"storj://server013.storj.dk:5013/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5113/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5213/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5313/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5413/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5513/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5613/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5713/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5813/869025a2c19c667adc13741ac7473b7908243949",
"storj://server013.storj.dk:5913/869025a2c19c667adc13741ac7473b7908243949"
]
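If it helps, you can pull that list straight from the shell without reading the JSON by hand; this just greps the storj:// URIs out of whatever the bridge returns:
curl -s http://server013.storj.dk/ | grep -o 'storj://[^"]*'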
@bookchin @Th3Van The fix is only a dirty workaround; after a few days all minions should be spamming the same nodeID again.
Let's focus on one bridge minion and ignore the other 9. This one minion should send out the PUBLISH message to the nodeID close to the file hash. It is not doing that, and the discussed fix will not change that.
The dirty workaround would fill the contact lists of all 10 minions with different nodeIDs. At that point they will send the PUBLISH messages to 30 different nodeIDs. Give them a few hours / days to fill their contact lists with more nodeIDs. All 10 minions are running with the same nodeID, so in the end they should come to the same result: all 10 minions will find the perfect 3 neighbor nodeIDs and spam them to death.
Perfect, thank you. I was checking the connected output per the troubleshooting guide for finding seeds, but that wasn't getting me very far.
(5) CLI nodes from 66.128.118.3x are now configured with those seeds, as is (1) GUI at 216.146.251.13
Update: seeing one of those nodes with 10 connections to .13 and 16 to .23 - and it's eating CPU, which is generally a better thing than not. The other (4) nodes are not showing any connections to anyone (although I saw them connect to .13 earlier).
I'm still stumped on why those bridges aren't publishing to the correct nodes. The core tests pass, the kad and kad-quasar tests pass, and our bridges do not behave this way. :/
If there is anything I can test, just let me know. Still sitting with one server where a single node has 13 connections to bridge 13 and 17 connections to bridge 23, merrily eating 20-30% of a CPU. The (4) other nodes on different ports (behind the same firewall) are completely idle with no connections (although I see the port 13 count bounce up once in a while). I cut and pasted the same seed nodes into all 5 nodes' config.json.
It's a bit tricky, but I could change public IPs to separate them behind different firewalls if you think that would show anything. I just need to be careful to get the command order right or I'll lose connection to them until Tuesday, when I'm physically with them again.
So, for the life of me, I simply cannot reproduce this when acting as a renter. Based on the feedback so far, I would expect that when I upload a fair amount of data to a storj.dk bridge node, my client will show me that, regardless of the shard hash, I end up pumping my data to the same handful of farmers.
But this is not the case. In fact, in my simple test upload of a ~250mb file, I find that when going through the storj.dk bridge, I get a wide spread of nodeIDs that seem to be consistent with the proper distance from each shard hash.
Here is some log output from that:
gordon@gordonhall ~ [18:12:36]
> $ STORJ_BRIDGE=http://server013.storj.dk storj upload-file 57953d54b4ef9cbf12d60961 /home/gordon/Videos/[REDACTED] ⬡ 4.4.3
[...] > Enter your passphrase to unlock your keyring > ********
[Sun Jul 24 2016 18:13:13 GMT-0400 (EDT)] [info] Generating encryption key...
[Sun Jul 24 2016 18:13:13 GMT-0400 (EDT)] [info] Encrypting file "/home/gordon/Videos/OverTheGardenWall/1&2 over the garden wall.mp4"
[Sun Jul 24 2016 18:13:15 GMT-0400 (EDT)] [info] Encryption complete!
[Sun Jul 24 2016 18:13:15 GMT-0400 (EDT)] [info] Creating storage token...
[Sun Jul 24 2016 18:13:16 GMT-0400 (EDT)] [info] Storing file, hang tight!
[Sun Jul 24 2016 18:13:16 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/ca2f520f32e4 index 0
[Sun Jul 24 2016 18:13:16 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/67a4f30ad4fa index 1
[Sun Jul 24 2016 18:13:16 GMT-0400 (EDT)] [info] Hash for this shard is: 79399bc235e0bb4340a4bc79a4c47064c3c0bf9c
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/3f5e87441fe4 index 2
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Hash for this shard is: 372d208210ba88a201d12e444c95b2ddb7f4df2f
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Adding shard metadata for 79399bc235e0bb4340a4bc79a4c47064c3c0bf9c to frame
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/bf484c672b18 index 3
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Hash for this shard is: c4936e27a22b4e5889be3b5d84b41a77514045cd
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Adding shard metadata for 372d208210ba88a201d12e444c95b2ddb7f4df2f to frame
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/d113ed538d8d index 4
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Hash for this shard is: b41516834f7ea4f246c3393e975f9fec28ad763a
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Adding shard metadata for c4936e27a22b4e5889be3b5d84b41a77514045cd to frame
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/c2528e88d57c index 5
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Hash for this shard is: b39576da1999bc21a558b317fcc0382c61280f47
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:17 GMT-0400 (EDT)] [info] Adding shard metadata for b41516834f7ea4f246c3393e975f9fec28ad763a to frame
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Hash for this shard is: fdf989a58d8893097bdadbccae10391150279b8d
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Adding shard metadata for b39576da1999bc21a558b317fcc0382c61280f47 to frame
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:18 GMT-0400 (EDT)] [info] Adding shard metadata for fdf989a58d8893097bdadbccae10391150279b8d to frame
[Sun Jul 24 2016 18:13:19 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"176.252.30.0","port":4000,"nodeID":"688bebae7b23c42fc2d042c485c12a88a126d6f4","lastSeen":1469398398164}
[Sun Jul 24 2016 18:13:19 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:19 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"149.202.218.123","port":4230,"nodeID":"fc31c60b7502d16e5535cd9d4267e003ef3ea310","lastSeen":1469398399326}
[Sun Jul 24 2016 18:13:20 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:22 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"71.71.203.175","port":20009,"nodeID":"29a0b5fe194ae2a3150b0d76fda962a5fc4c25e6","lastSeen":1469398398498}
[Sun Jul 24 2016 18:13:22 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Shard transfer completed! 25 remaining...
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/eac5a54d3f8b index 6
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Hash for this shard is: 4c60cd28897815f1c19cff74368e8f08150b83fa
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:27 GMT-0400 (EDT)] [info] Adding shard metadata for 4c60cd28897815f1c19cff74368e8f08150b83fa to frame
[Sun Jul 24 2016 18:13:29 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"5.9.94.67","port":4023,"nodeID":"4fc1f379a4f16bbcfbe644de5fab3ebf06765a5d","lastSeen":1469398408822}
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Shard transfer completed! 24 remaining...
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/cddf3768fa33 index 7
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Hash for this shard is: ee18db570a5eb3e11478ebaf4c4c54346edf07e1
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:30 GMT-0400 (EDT)] [info] Adding shard metadata for ee18db570a5eb3e11478ebaf4c4c54346edf07e1 to frame
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"149.202.218.123","port":4120,"nodeID":"fc31c60b7502d16e5535cd9d4267e003ef3ea310","lastSeen":1469398411211}
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Shard transfer completed! 23 remaining...
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/bfd31692d3ea index 8
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Hash for this shard is: 4d6e343e710cd411871c6703afac337dc7fcf79d
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:31 GMT-0400 (EDT)] [info] Adding shard metadata for 4d6e343e710cd411871c6703afac337dc7fcf79d to frame
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Shard transfer completed! 22 remaining...
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/7b9272b2a7f8 index 9
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Hash for this shard is: 3580cf65c2ff024e60c9013e44f211b71e136788
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:37 GMT-0400 (EDT)] [info] Adding shard metadata for 3580cf65c2ff024e60c9013e44f211b71e136788 to frame
[Sun Jul 24 2016 18:13:38 GMT-0400 (EDT)] [info] Shard transfer completed! 21 remaining...
[Sun Jul 24 2016 18:13:38 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/91ef4d50e95f index 10
[Sun Jul 24 2016 18:13:38 GMT-0400 (EDT)] [info] Hash for this shard is: a7d8d67032665ea209c2e0cdbb726c378046b000
[Sun Jul 24 2016 18:13:39 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:39 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:39 GMT-0400 (EDT)] [info] Adding shard metadata for a7d8d67032665ea209c2e0cdbb726c378046b000 to frame
[Sun Jul 24 2016 18:13:40 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"87.139.82.233","port":8443,"nodeID":"a5fc3518d535584bb5e0ec0b456ce53ebfb88af7","lastSeen":1469398419890}
[Sun Jul 24 2016 18:13:41 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:47 GMT-0400 (EDT)] [info] Adding shard metadata for b41516834f7ea4f246c3393e975f9fec28ad763a to frame
[Sun Jul 24 2016 18:13:51 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"71.239.125.150","port":4100,"nodeID":"b0f82d3de7f0d6e51e9b3f1f7bce53af8883a438","lastSeen":1469398429624}
[Sun Jul 24 2016 18:13:51 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:13:57 GMT-0400 (EDT)] [info] Shard transfer completed! 20 remaining...
[Sun Jul 24 2016 18:13:57 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/e315a0fc6433 index 11
[Sun Jul 24 2016 18:13:58 GMT-0400 (EDT)] [info] Hash for this shard is: f9e0dbc7e50f8a403742d4ae919366ce0befdf37
[Sun Jul 24 2016 18:13:58 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:13:58 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:13:58 GMT-0400 (EDT)] [info] Adding shard metadata for f9e0dbc7e50f8a403742d4ae919366ce0befdf37 to frame
[Sun Jul 24 2016 18:14:00 GMT-0400 (EDT)] [info] Adding shard metadata for 4d6e343e710cd411871c6703afac337dc7fcf79d to frame
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Shard transfer completed! 19 remaining...
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/471f8d1bd23a index 12
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Hash for this shard is: 55b8428b9d1cf903b1de1ad5a27596d9fdfbc0a8
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:01 GMT-0400 (EDT)] [info] Adding shard metadata for 55b8428b9d1cf903b1de1ad5a27596d9fdfbc0a8 to frame
[Sun Jul 24 2016 18:14:08 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"88.87.252.34","port":4000,"nodeID":"3d8f5195f45f98a97135074e0556dcb5958607ae","lastSeen":1469398420274}
[Sun Jul 24 2016 18:14:08 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Shard transfer completed! 18 remaining...
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/1cd0062af6db index 13
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Hash for this shard is: f25447b5f3084530e80fe6e77b1eff8ff5325c6b
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:16 GMT-0400 (EDT)] [info] Adding shard metadata for f25447b5f3084530e80fe6e77b1eff8ff5325c6b to frame
[Sun Jul 24 2016 18:14:17 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"149.202.218.123","port":4230,"nodeID":"fc31c60b7502d16e5535cd9d4267e003ef3ea310","lastSeen":1469398457451}
[Sun Jul 24 2016 18:14:18 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:18 GMT-0400 (EDT)] [info] Adding shard metadata for c4936e27a22b4e5889be3b5d84b41a77514045cd to frame
[Sun Jul 24 2016 18:14:18 GMT-0400 (EDT)] [info] Adding shard metadata for b39576da1999bc21a558b317fcc0382c61280f47 to frame
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Shard transfer completed! 17 remaining...
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/8f30dd22b312 index 14
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Hash for this shard is: d0c6a1bea4d4a40856d8e247ca46849dc76a2f46
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:24 GMT-0400 (EDT)] [info] Adding shard metadata for d0c6a1bea4d4a40856d8e247ca46849dc76a2f46 to frame
[Sun Jul 24 2016 18:14:27 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"sj0.logicfailed.com","port":34001,"nodeID":"4ebb2a1258d8a6abab5b48353e55bb913d87c84e","lastSeen":1469398442556}
[Sun Jul 24 2016 18:14:27 GMT-0400 (EDT)] [info] Adding shard metadata for 4d6e343e710cd411871c6703afac337dc7fcf79d to frame
[Sun Jul 24 2016 18:14:27 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:28 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"80.237.71.105","port":5140,"nodeID":"d0656e0cce1cf5f489362be46c3eb778e76210e8","lastSeen":1469398466736}
[Sun Jul 24 2016 18:14:28 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Shard transfer completed! 16 remaining...
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/e497252d905c index 15
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Hash for this shard is: 6dfe6c40681c7bf49c719821f3cb74100fbd38ab
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:34 GMT-0400 (EDT)] [info] Adding shard metadata for 6dfe6c40681c7bf49c719821f3cb74100fbd38ab to frame
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Shard transfer completed! 15 remaining...
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/c8cab9682479 index 16
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Hash for this shard is: 14332349303c47f5bc959e1031fcecb47d737635
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:35 GMT-0400 (EDT)] [info] Adding shard metadata for 14332349303c47f5bc959e1031fcecb47d737635 to frame
[Sun Jul 24 2016 18:14:40 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"calypso.seekbrain.com","port":4000,"nodeID":"d589017bde2bc5bb2978d3407bcd4eaaebbeab12","lastSeen":1469398459453}
[Sun Jul 24 2016 18:14:41 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:42 GMT-0400 (EDT)] [info] Adding shard metadata for 4d6e343e710cd411871c6703afac337dc7fcf79d to frame
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Shard transfer completed! 14 remaining...
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/e94a1986d56d index 17
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Hash for this shard is: 75f7f721d06d4babeb910de4e4573ae19eca1e94
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:14:49 GMT-0400 (EDT)] [info] Adding shard metadata for 75f7f721d06d4babeb910de4e4573ae19eca1e94 to frame
[Sun Jul 24 2016 18:14:51 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"95.172.185.136","port":34059,"nodeID":"70201a823d9e622af168f58ea6c54059b940c7d4","lastSeen":1469398490956}
[Sun Jul 24 2016 18:14:51 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:14:58 GMT-0400 (EDT)] [info] Adding shard metadata for 14332349303c47f5bc959e1031fcecb47d737635 to frame
[Sun Jul 24 2016 18:14:58 GMT-0400 (EDT)] [info] Adding shard metadata for 4d6e343e710cd411871c6703afac337dc7fcf79d to frame
[Sun Jul 24 2016 18:14:58 GMT-0400 (EDT)] [info] Adding shard metadata for f9e0dbc7e50f8a403742d4ae919366ce0befdf37 to frame
[Sun Jul 24 2016 18:15:02 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"5.9.94.67","port":4023,"nodeID":"4fc1f379a4f16bbcfbe644de5fab3ebf06765a5d","lastSeen":1469398500702}
[Sun Jul 24 2016 18:15:02 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:15:09 GMT-0400 (EDT)] [info] Shard transfer completed! 13 remaining...
[Sun Jul 24 2016 18:15:09 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/55a4319354f1 index 18
[Sun Jul 24 2016 18:15:09 GMT-0400 (EDT)] [info] Hash for this shard is: d90a3b7757d634a29be5bf5330dfa46c9573e683
[Sun Jul 24 2016 18:15:10 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:15:10 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:15:10 GMT-0400 (EDT)] [info] Adding shard metadata for d90a3b7757d634a29be5bf5330dfa46c9573e683 to frame
[Sun Jul 24 2016 18:15:19 GMT-0400 (EDT)] [info] Adding shard metadata for b39576da1999bc21a558b317fcc0382c61280f47 to frame
[Sun Jul 24 2016 18:15:20 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.0","address":"204.111.240.84","port":4060,"nodeID":"b2c94877301c36dd76d81c1f683458b1250ff9cb","lastSeen":1469398520094}
[Sun Jul 24 2016 18:15:20 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:15:25 GMT-0400 (EDT)] [info] Adding shard metadata for d90a3b7757d634a29be5bf5330dfa46c9573e683 to frame
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Shard transfer completed! 12 remaining...
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/642c2217e555 index 19
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Hash for this shard is: 0815d955f39cf0647d36df5bfa72f1b86227eb5e
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:15:30 GMT-0400 (EDT)] [info] Adding shard metadata for 0815d955f39cf0647d36df5bfa72f1b86227eb5e to frame
[Sun Jul 24 2016 18:15:32 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"78.157.164.38","port":4120,"nodeID":"1d419ad97dce6b97f78d6c2f59e051034ab8fa97","lastSeen":1469398531704}
[Sun Jul 24 2016 18:15:32 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:15:34 GMT-0400 (EDT)] [info] Adding shard metadata for 6dfe6c40681c7bf49c719821f3cb74100fbd38ab to frame
[Sun Jul 24 2016 18:15:42 GMT-0400 (EDT)] [info] Shard transfer completed! 11 remaining...
[Sun Jul 24 2016 18:15:42 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/a8a470e343c1 index 20
[Sun Jul 24 2016 18:15:42 GMT-0400 (EDT)] [info] Hash for this shard is: 5c9a976c53bf258921d6eaff2328d6dbf21e092d
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Adding shard metadata for 5c9a976c53bf258921d6eaff2328d6dbf21e092d to frame
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Shard transfer completed! 10 remaining...
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/82d893212c0c index 21
[Sun Jul 24 2016 18:15:43 GMT-0400 (EDT)] [info] Hash for this shard is: f58c5fc584228d2e0aad299c48b66670e0b91621
[Sun Jul 24 2016 18:15:44 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:15:44 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:15:44 GMT-0400 (EDT)] [info] Adding shard metadata for f58c5fc584228d2e0aad299c48b66670e0b91621 to frame
[Sun Jul 24 2016 18:15:46 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"sj0.logicfailed.com","port":34001,"nodeID":"4ebb2a1258d8a6abab5b48353e55bb913d87c84e","lastSeen":1469398544164}
[Sun Jul 24 2016 18:15:46 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:15:48 GMT-0400 (EDT)] [info] Adding shard metadata for d90a3b7757d634a29be5bf5330dfa46c9573e683 to frame
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Shard transfer completed! 9 remaining...
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/8dad2aad4f90 index 22
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Hash for this shard is: eedbbaaf6ecece1a6e8b7d8b946f57737283d3f6
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:15:54 GMT-0400 (EDT)] [info] Adding shard metadata for eedbbaaf6ecece1a6e8b7d8b946f57737283d3f6 to frame
[Sun Jul 24 2016 18:15:56 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"173.206.92.114","port":4000,"nodeID":"ef28179632549c53122c1c5c29a19bbc20d7e2ed","lastSeen":1469398555829}
[Sun Jul 24 2016 18:15:56 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:15:58 GMT-0400 (EDT)] [info] Adding shard metadata for 14332349303c47f5bc959e1031fcecb47d737635 to frame
[Sun Jul 24 2016 18:15:58 GMT-0400 (EDT)] [info] Adding shard metadata for f9e0dbc7e50f8a403742d4ae919366ce0befdf37 to frame
[Sun Jul 24 2016 18:15:59 GMT-0400 (EDT)] [info] Adding shard metadata for f58c5fc584228d2e0aad299c48b66670e0b91621 to frame
[Sun Jul 24 2016 18:16:02 GMT-0400 (EDT)] [info] Shard transfer completed! 8 remaining...
[Sun Jul 24 2016 18:16:02 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/9f79900f6613 index 23
[Sun Jul 24 2016 18:16:03 GMT-0400 (EDT)] [info] Hash for this shard is: 960637f9947a66a4e29a5c4eb20d955df914563f
[Sun Jul 24 2016 18:16:03 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:16:03 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:16:03 GMT-0400 (EDT)] [info] Adding shard metadata for 960637f9947a66a4e29a5c4eb20d955df914563f to frame
[Sun Jul 24 2016 18:16:26 GMT-0400 (EDT)] [info] Adding shard metadata for f9e0dbc7e50f8a403742d4ae919366ce0befdf37 to frame
[Sun Jul 24 2016 18:16:26 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"188.234.92.94","port":41940,"nodeID":"fcf2ec708ba47cddd91309b4a8b325e84ccab177","lastSeen":1469398562666}
[Sun Jul 24 2016 18:16:27 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Shard transfer completed! 7 remaining...
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/ce8a7eb257df index 24
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Hash for this shard is: bf4df10bbf4f163558e6041ffd701eb2b4452ae1
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:16:32 GMT-0400 (EDT)] [info] Adding shard metadata for bf4df10bbf4f163558e6041ffd701eb2b4452ae1 to frame
[Sun Jul 24 2016 18:16:35 GMT-0400 (EDT)] [info] Adding shard metadata for 6dfe6c40681c7bf49c719821f3cb74100fbd38ab to frame
[Sun Jul 24 2016 18:16:35 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"149.202.218.123","port":4230,"nodeID":"fc31c60b7502d16e5535cd9d4267e003ef3ea310","lastSeen":1469398587409}
[Sun Jul 24 2016 18:16:35 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:38 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"foximulder.awinet.at","port":4000,"nodeID":"6c1b3495e0a8025986067682528f22db26b24f15","lastSeen":1469398596022}
[Sun Jul 24 2016 18:16:38 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:42 GMT-0400 (EDT)] [info] Shard transfer completed! 6 remaining...
[Sun Jul 24 2016 18:16:42 GMT-0400 (EDT)] [info] Trying to upload shard /tmp/2d166a768784 index 25
[Sun Jul 24 2016 18:16:42 GMT-0400 (EDT)] [info] Hash for this shard is: d238ffc3345b5c9ef3c9555a0e90489c97f770e6
[Sun Jul 24 2016 18:16:43 GMT-0400 (EDT)] [info] Audit generation for shard done.
[Sun Jul 24 2016 18:16:43 GMT-0400 (EDT)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 18:16:43 GMT-0400 (EDT)] [info] Adding shard metadata for d238ffc3345b5c9ef3c9555a0e90489c97f770e6 to frame
[Sun Jul 24 2016 18:16:43 GMT-0400 (EDT)] [info] Shard transfer completed! 5 remaining...
[Sun Jul 24 2016 18:16:47 GMT-0400 (EDT)] [info] Adding shard metadata for bf4df10bbf4f163558e6041ffd701eb2b4452ae1 to frame
[Sun Jul 24 2016 18:16:48 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"80.237.71.105","port":5220,"nodeID":"d0656e0cce1cf5f489362be46c3eb778e76210e8","lastSeen":1469398603700}
[Sun Jul 24 2016 18:16:48 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:49 GMT-0400 (EDT)] [info] Adding shard metadata for d90a3b7757d634a29be5bf5330dfa46c9573e683 to frame
[Sun Jul 24 2016 18:16:50 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"86.9.7.31","port":4001,"nodeID":"bd2b3e49a2b5981ed4de65d76c022784c648b5d8","lastSeen":1469398609625}
[Sun Jul 24 2016 18:16:50 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"37.201.162.189","port":4000,"nodeID":"d1e63b73f87f1a29176b94bd139bf617a57a9b6f","lastSeen":1469398610078}
[Sun Jul 24 2016 18:16:50 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:51 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:16:58 GMT-0400 (EDT)] [info] Shard transfer completed! 4 remaining...
[Sun Jul 24 2016 18:16:58 GMT-0400 (EDT)] [info] Adding shard metadata for 14332349303c47f5bc959e1031fcecb47d737635 to frame
[Sun Jul 24 2016 18:17:03 GMT-0400 (EDT)] [info] Adding shard metadata for 960637f9947a66a4e29a5c4eb20d955df914563f to frame
[Sun Jul 24 2016 18:17:04 GMT-0400 (EDT)] [info] Shard transfer completed! 3 remaining...
[Sun Jul 24 2016 18:17:28 GMT-0400 (EDT)] [info] Adding shard metadata for 14332349303c47f5bc959e1031fcecb47d737635 to frame
[Sun Jul 24 2016 18:17:29 GMT-0400 (EDT)] [info] Adding shard metadata for 960637f9947a66a4e29a5c4eb20d955df914563f to frame
[Sun Jul 24 2016 18:17:30 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"78.157.164.38","port":4120,"nodeID":"1d419ad97dce6b97f78d6c2f59e051034ab8fa97","lastSeen":1469398650277}
[Sun Jul 24 2016 18:17:31 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:17:38 GMT-0400 (EDT)] [info] Shard transfer completed! 2 remaining...
[Sun Jul 24 2016 18:17:41 GMT-0400 (EDT)] [info] Shard transfer completed! 1 remaining...
[Sun Jul 24 2016 18:17:53 GMT-0400 (EDT)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"bcurtis-router.homenet.org","port":4000,"nodeID":"95d0a0da13be8a835a774c2c6d4d584e2526f0b5","lastSeen":1469398652002}
[Sun Jul 24 2016 18:17:53 GMT-0400 (EDT)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 18:18:01 GMT-0400 (EDT)] [info] Shard transfer completed! 0 remaining...
[Sun Jul 24 2016 18:18:01 GMT-0400 (EDT)] [info] Transfer finished, creating entry...
[Sun Jul 24 2016 18:18:02 GMT-0400 (EDT)] [info] Cleaning up...
[Sun Jul 24 2016 18:18:02 GMT-0400 (EDT)] [info] Finished cleaning!
[Sun Jul 24 2016 18:18:02 GMT-0400 (EDT)] [info] Encryption key saved to keyring.
[Sun Jul 24 2016 18:18:02 GMT-0400 (EDT)] [info] File successfully stored in bucket.
[Sun Jul 24 2016 18:18:02 GMT-0400 (EDT)] [info] Name: [REDACTED], Type: video/mp4, Size: 233639244 bytes, ID: 57953e9ab4ef9cbf12d61261
So while storing 26 shards, here is the breakdown for repeated farmers:
Node ID | Shards |
---|---|
688bebae7b23c42fc2d042c485c12a88a126d6f4 | 1 |
fc31c60b7502d16e5535cd9d4267e003ef3ea310 | 4 |
29a0b5fe194ae2a3150b0d76fda962a5fc4c25e6 | 1 |
4fc1f379a4f16bbcfbe644de5fab3ebf06765a5d | 2 |
a5fc3518d535584bb5e0ec0b456ce53ebfb88af7 | 1 |
b0f82d3de7f0d6e51e9b3f1f7bce53af8883a438 | 1 |
3d8f5195f45f98a97135074e0556dcb5958607ae | 1 |
4ebb2a1258d8a6abab5b48353e55bb913d87c84e | 2 |
d0656e0cce1cf5f489362be46c3eb778e76210e8 | 2 |
d589017bde2bc5bb2978d3407bcd4eaaebbeab12 | 1 |
70201a823d9e622af168f58ea6c54059b940c7d4 | 1 |
b2c94877301c36dd76d81c1f683458b1250ff9cb | 1 |
1d419ad97dce6b97f78d6c2f59e051034ab8fa97 | 2 |
ef28179632549c53122c1c5c29a19bbc20d7e2ed | 1 |
fcf2ec708ba47cddd91309b4a8b325e84ccab177 | 1 |
6c1b3495e0a8025986067682528f22db26b24f15 | 1 |
bd2b3e49a2b5981ed4de65d76c022784c648b5d8 | 1 |
d1e63b73f87f1a29176b94bd139bf617a57a9b6f | 1 |
95d0a0da13be8a835a774c2c6d4d584e2526f0b5 | 1 |
19 unique farmers were selected for that upload, out of a total of 75 connected to that bridge (give or take - I checked after upload by paging through GET /contacts?connected=true&page=N). There are a couple of repeats and fc31c60b7502d16e5535cd9d4267e003ef3ea310 got 4 of the shards, but that's not very alarming either considering there will be fewer fxx... nodes in that bridge's routing table than nodes closer to 869025a2c19c667adc13741ac7473b7908243949 (which is that bridge's node ID). In fact, there are only three fxx... nodes connected right now:
That's our homeboy at the bottom and I suspect that he really is the closest farmer to those shards, but I admit I cannot do XOR in my head :stuck_out_tongue:.
This doesn't add up. Normal usage indicates proper behavior of the bridge. Additionally, as I've said before, the storj.io bridges that @phutchins maintains do not exhibit the DoS-like behavior being reported from the storj.dk bridges. I think we need to shift focus away from a possible bug in the protocol implementation, because I have been tearing through it all day and I can say with almost 100% certainty that it's working as designed (which is obviously to not DoS farmers).
@Th3Van, can you share the following:
storj --version
Somewhat related: https://github.com/Storj/core/issues/281
I believe, based on the data and behavior, that the bridge @Th3Van is running is uploading the same file over and over again. Because peers are chosen based on the hash of the data, this means that bridge would be hammering a specific set of nodes rather than distributing shards to the wider network. This would explain the DoS on certain nodes.
If this is the case that bridge just needs to generate unique data for each upload. We should also introduce a client fix to block this type of behavior as it is an attack vector.
Would it destroy anything to add time-of-day to the file contents before it's hashed? A high-accuracy timer would change this scenario. With even a single digit of difference, a good hash algorithm will widely spread the results.
BTW I like this working hypothesis, and it's definitely an attack vector against the service offering.
I've also not seen a DOS attack in several days, but then again, I haven't seen many shards period.
@bookchin The renter can't see the PUBLISH messages. He will only see the data channel at the end. Even if the bridge is sending all PUBLISH messages to the same nodeID, that node will only handle 3 contracts itself and relay the rest to other nodes, which do the same. To the renter it will look like different nodes accepted the shards, but the bridge was spamming the same 3 nodes all the time.
I tried to copy the 18:13 section of the log to prove that, but the log is too big, and in the end I am sure I would see the same result.
Server13 file upload log:
[Sun Jul 24 2016 19:02:40 GMT+0200 (CEST)] [info] Generating encryption key...
[Sun Jul 24 2016 19:02:40 GMT+0200 (CEST)] [info] Encrypting file "/tmp/Server013.storj.dk-00001-20160724170224386132343-430078054-78b12af5ea8d7a127fd03fb477f12a0a5ce463151007e0f0fd08c073ac602721"
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Encryption complete!
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Creating storage token...
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Storing file, hang tight!
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/3fb811383854 index 0
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/591e8ba065ab index 1
[Sun Jul 24 2016 19:02:42 GMT+0200 (CEST)] [info] Hash for this shard is: 925f1e667a40fd3181af3f205a3101ee1eea5f0b
[Sun Jul 24 2016 19:02:43 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/288825ff129d index 2
[Sun Jul 24 2016 19:02:43 GMT+0200 (CEST)] [info] Hash for this shard is: 5308de4778cf155061f474f38758c429247300e2
[Sun Jul 24 2016 19:02:43 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:43 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:43 GMT+0200 (CEST)] [info] Adding shard metadata for 925f1e667a40fd3181af3f205a3101ee1eea5f0b to frame
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/49492a89b8bb index 3
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Hash for this shard is: 268fc0c835f9312c2decabcd855de7862456dd6c
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Adding shard metadata for 5308de4778cf155061f474f38758c429247300e2 to frame
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/c796fe9ff6cb index 4
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Hash for this shard is: b60f331ea4340e18e98336cb38b3a4ebd3eb07aa
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:44 GMT+0200 (CEST)] [info] Adding shard metadata for 268fc0c835f9312c2decabcd855de7862456dd6c to frame
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/798149dc6d16 index 5
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Hash for this shard is: f347dc087109966ded7b396f8c3965e91f321467
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Adding shard metadata for b60f331ea4340e18e98336cb38b3a4ebd3eb07aa to frame
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Hash for this shard is: 058c099213224a9a455b463109aa26093cad215b
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:45 GMT+0200 (CEST)] [info] Adding shard metadata for f347dc087109966ded7b396f8c3965e91f321467 to frame
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Adding shard metadata for 058c099213224a9a455b463109aa26093cad215b to frame
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"104.196.0.239","port":8517,"nodeID":"14e3ce912a58193fd5eebdcf6ccf76b452424d2e","lastSeen":1469379765655}
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 19:02:46 GMT+0200 (CEST)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"subwolf.dyndns.org","port":4000,"nodeID":"d23f0dbfb8c2e6abc22b8598e15359b13a8e86bf","lastSeen":1469379766121}
[Sun Jul 24 2016 19:02:47 GMT+0200 (CEST)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 19:02:47 GMT+0200 (CEST)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"84.90.148.136","port":58149,"nodeID":"eb52de66baf2bc5890230d2e7a289ef264780db1","lastSeen":1469379766759}
[Sun Jul 24 2016 19:02:47 GMT+0200 (CEST)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 19:02:49 GMT+0200 (CEST)] [info] Contract negotiated with: {"protocol":"0.7.1","address":"207.242.114.130","port":4000,"nodeID":"9ddd59a1c06904ff9cf5c291f243fd9590a4f1e3","lastSeen":1469379767939}
[Sun Jul 24 2016 19:02:49 GMT+0200 (CEST)] [info] Shard transfer completed! 51 remaining...
[Sun Jul 24 2016 19:02:49 GMT+0200 (CEST)] [info] Trying to upload shard /tmp/64636a3d0c2b index 6
[Sun Jul 24 2016 19:02:49 GMT+0200 (CEST)] [info] Hash for this shard is: cae562dbae2969c77e1e809ac2be3601e148461e
[Sun Jul 24 2016 19:02:50 GMT+0200 (CEST)] [info] Data channel opened, transferring shard...
[Sun Jul 24 2016 19:02:50 GMT+0200 (CEST)] [info] Audit generation for shard done.
[Sun Jul 24 2016 19:02:50 GMT+0200 (CEST)] [info] Waiting on a storage offer from the network...
[Sun Jul 24 2016 19:02:50 GMT+0200 (CEST)] [info] Adding shard metadata for cae562dbae2969c77e1e809ac2be3601e148461e to frame
@super3 I did not copy the full log, but 30% PUBLISH is not possible even if it were the same file all the time. 30% is a clear statement. A file upload is split into a few shards. Every shard will send out 3 PUBLISH messages, and 2 of them are going to the same 2 nodeIDs all the time. Number 3 is getting timeouts, and a new nodeID takes its place. Let's take the same file all the time and split it into only 2 shards. The result would be a maximum of 15% of the PUBLISH messages going to the same nodeID (if they were sent out to the nodeID close to the shard hash).
That could be enough, especially if it's the same file on all the boxes. Frankly, even 15% would be enough. That's 15% of the output of a beefy server that never stops, directed at what is probably a much smaller box. Will wait to hear back.
May be wrong but that is the only thing that makes sense right now.
Uh, just did some napkin math: I believe the bridge servers were measured to be pushing 300 GB/hour of data - congrats on having a fantastic internet connection, because if I did the math right that is around 600 Mbit/s. 15% of that (~100 Mbit) would overwhelm almost every home connection, and if it required bi-directionality (e.g. the farmer needs to pass the shard off to another node), it would also overwhelm most commercial internet connections (one of my clients just upgraded to a 60 Mbit pipe for their 150 employees).
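Shell version of that arithmetic, using decimal units:
echo $(( 300 * 8 * 1000 / 3600 ))              # GB/hour -> Mbit/s, prints 666
echo $(( 300 * 8 * 1000 * 15 / 100 / 3600 ))   # the 15% share, prints 100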
It is interesting to sit and watch connection counts from the bridges. Each of my nodes seems to have an affinity to different combos: for instance the VM gets MOST of its connections from .13, .17, and .23, biased with about 50% coming from .23. My other node mostly gets connections from .16 and .19 (when it sees anything at all, which has become depressingly rare).
So there are a few issues at play here. Judging by the upload output from @littleskunk, that bridge is using the vanilla client, and each file (even if it's the same data) will get encrypted with a unique key and IV, which will produce a different hash for every shard. So the issue is not with the file being the same.
Now with the rate of upload in perspective, and given the relatively small pool of peers (~75) at an average speed of 600 Mbps (which is 75 MB per second?), split into 8 MB shards, that's ~9 contracts published per second?
At that rate, the outlier above with the fxxx... node ID that got 4 of the contracts becomes quite significant. Since nodes cluster around other nodes with similar node IDs, there will always be fewer dissimilar nodes in the routing table.
So now we aren't talking about nodes that are close to the bridge getting spammed; I think it's the opposite. Nodes that are distant from high-traffic bridges end up carrying the most load because they are selected from a smaller bucket. And since the shard hashes generated will be sufficiently random to pretty evenly cover the key space in the router, the contacts in these edge buckets (in the case above, only 3) end up having to take the same load as the buckets in the center, which are generally full (20).
So now the questions become:
This also has problems like an "unlucky" node not getting as many contracts because it is not close enough to the bridge.
This has performance drawbacks due to more network chatter and (perhaps negligible amount) of memory consumption.
(I like option 2 as the best way to approach the issue. It may increase the number of contacts held in memory, but the overhead should be negligible. There would be a lot more network activity on startup, but should ultimately taper back to normal levels and in the cases like this one, dramatically improve.)
@littleskunk what are the two node IDs getting hammered and what is the node ID of that bridge?
Just to add to what I'm describing to illustrate better:
Let's say you have a nodeID of 8. When you join the network, you ask for peers who are close to you (since this is all that really matters in a vanilla Kademlia DHT), so this is what your routing table might look like...
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | d | e | f
---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @ @
@ @ @ @ @ @ @ @
@ @ @ @ @ @ @
@ @ @ @ @ @ @
@ @ @ @ @ @ @
@ @ @ @ @
@ @ @ @ @
@ @ @ @ @
@ @ @
@ @ @
@ @ @
@
@
@
@
Now when we publish contracts based on the data hash, say we have a ton of data and in general the hashes produced are random enough to land an even number in each bucket. Nodes in the buckets furthest from the publisher are fewer in number, and therefore each has to handle more contracts than the others.
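A toy illustration of that skew, keyed off the first hex digit only (the columns in the table above, not the real 160-bit bucket math): random hashes land roughly evenly across all 16 XOR-distance columns, so the sparsely populated columns absorb the same share of contracts with far fewer contacts to spread it over.
NODE_NIBBLE=0x8                                      # the example nodeID of 8
for i in $(seq 1 1600); do
  h=$(head -c 1 /dev/urandom | xxd -p | cut -c1)     # first hex digit of a random "hash"
  printf '%x\n' $(( 0x$h ^ NODE_NIBBLE ))            # which column it lands in, 0..f
done | sort | uniq -c                                # roughly ~100 hits per column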
We need to do a more thorough and aggressive FIND_NODE iteration on joining the network so that all buckets are as full as possible.
Also, while not directly related to this problem, looking back over this kad design spec, there is this little bit:
In a large, busy network, it is possible that while a node is waiting for a reply from the contact at the head of the list there will be another communication from a contact not in the bucket. This is most likely for bucket B-1 = 159, which is responsible for roughly half of the nodes in the network. Behaviour in this case is unspecified and seems likely to provide an opening for a DOS (Denial of Service) attack.
(http://xlattice.sourceforge.net/components/protocol/kademlia/specs.html)
This warning is in a different context, but does illustrate the inherent uneven structure of the routing table.
@bookchin I don't think an aggressive FIND_NODE is needed for the bridge. The bridge will send out PUBLISH messages and they are relayed to nodes it does not know yet. As soon as they send an OFFER, the bridge will store their contact information. I would expect a full contact list after a few hours / days, even if the bridge started with only one contact. (For the farmer it could be useful, but that is another issue.)
Let's go on with the rest of your theory:
Server012 nodeID c0792c4c2dcac1582a6d01c93d1d366c5321d94e
First one hour log
https://github.com/Storj/core/issues/250#issuecomment-232157650
34% sent to c5c230c448f0d2619e018d30204221f52b8fe35c
26.8% sent to c340d4d6491b67d3699661c8a4ac3234d253081a
Second one hour log https://github.com/Storj/core/issues/257#issuecomment-233134417 This one is strange. I still have the full one hour copy but I did not check the nodeIDs and was focusing on the SUBSCRIBE messages. Give me a few minutes to look into it.
@bookchin
#!/bin/bash
CryptKey='AllYourSpaceAreSoon(TM)Belong2Storj'   # quoted so the parentheses do not break the assignment
BucketName=Th3vanTestBucket001
MinFileSize=1
MaxFileSize=500000000
MaxConcurrentFileSends=10
find "/tmp/" -exec rm -R {} \; 2> /dev/null
echo
echo -n Waiting for Storj-bridge to get online...
while [ `storj list-buckets | grep -c "ECONNREFUSED"` -ge 1 ];
do
echo -n .$StorjBridgeOnlineCounter
if [ "${StorjBridgeOnlineCounter:-0}" -ge "45" ]; then echo "..Sorry, can't connect to storj bridge. Exiting..." && exit ;fi
((StorjBridgeOnlineCounter++))
sleep 1
done
echo ".ONLINE!! YAY!! :)"
if [ `storj list-buckets | grep -c "$BucketName" ` -gt "1" ]; then
storj list-buckets | grep "$BucketName" | awk -F ' |,' '{ print "Removing Bucket ID" $6; system("storj remove-bucket "$6) }'
fi
if [ `storj list-buckets | grep -c "$BucketName" ` -eq "1" ]; then
BucketID=`storj list-buckets | grep "$BucketName" | awk -F ' |,' '{ printf $13 "\n" }' `
echo Bucket ID for $BucketName : $BucketID
else
echo
echo Bucket ID for $BucketName not found - Creating..
storj add-bucket $BucketName 99999999 99999999 > /dev/null
BucketID=`storj list-buckets | grep "$BucketName" | awk -F ' |,' '{ printf $13 "\n" }' `
echo Bucket ID for $BucketName : $BucketID
fi
for Counter in {000001..999999}
do
TimeStamp=`date -u +%Y%m%d%H%M%S%N`
FileSize=`shuf -i $MinFileSize-$MaxFileSize -n 1`
openssl rand -out /tmp/tmp.file $FileSize
FileSHA=`sha256sum /tmp/tmp.file | awk '{ print $1}'`
mv /tmp/tmp.file /tmp/$HOSTNAME.storj.dk-$Counter-$TimeStamp-$FileSize-$FileSHA
echo
echo
echo "Uploading : $HOSTNAME.storj.dk-$Counter-$TimeStamp-$FileSize-$FileSHA - Size: $FileSize bytes"
storj -k $CryptKey upload-file $BucketID /tmp/$HOSTNAME.storj.dk-$Counter-$TimeStamp-$FileSize-$FileSHA &
while [ `ps -awx | grep -v grep | grep -c "upload-file"` -ge "$MaxConcurrentFileSends" ];
do
ps axh -O etimes | grep "$HOSTNAME.storj.dk" | grep -v "grep" | awk '{if ($2>=1800) system(" kill -9 "$1) }'
sleep 1
done
find "/tmp/" -mmin +32 -exec rm -R {} \; 2> /dev/null
ps axh -O etimes | grep "$HOSTNAME.storj.dk" | grep -v "grep" | awk '{if ($2>=1800) system(" kill -9 "$1) }'
done
storj --version
(It's the same version on all servers.) root@Server013:~# storj --version
2.0.5
Second log: 47844 'sending PUBLISH' messages.
grep 'sending PUBLISH' onehourlog.log | cut -d':' -f7 | cut -d'"' -f2 | cut -d'\' -f1 | sort | uniq -c > nodeids.txt
14 0009e83da9d138afc2e4a1f3119ecd6e3309c520
8 006ff08eb63be2b543ad2cd7c6998f7e210714f8
7 00c8a16cfd56ad831d56faa5ff273beaeee3702d
35 012cadc93d1ba459026019c5c2118747218bc693
25 02d7885d60ee045907e51d3a1e5a5f2a7845d2e5
52 02fdb49f08320056b0fdf5deef7cf78dfdc6dd4f
9 0396d23775db0ca5df3f85c76e1812dfeacd3049
16 03e6d54e2fd5ddbe71bf3fbb0bda9fd0e179a8dc
23 052e6070b14d991b89fd33f3f751851e71c9def7
48 059f9ee85790964dd71c5a009b5cbeee01b04aec
23 06f6b713bc251f4af3b0f8665ddd8457fb67cc3c
7 0871cadd8bac9315f26fa45dcf5d819bc72b191c
27 08be4038fadf70b037b02b890e28c76acd671efb
27 0abe850417b3226bb3f6af198fc6eb32d9d6ce37
69 0be1b16a60e37640d8aabc67090035829750cfda
29 0c23ebe6ddafc6df1a5fc0ab164db3af8a3f205c
39 0d448ff03be9bccfda93b3e111f6b59ac53f8b00
13 0f19d1b8765b5e189e941d8fc362bb222ee870b3
76 0f98d68b05fa1b0f00be1c45d8601657adee23cb
2 0fcf1e0fbc499f833bc09c89d971b492182f53db
15 0ffa306be0afaa347e0ecd21920c4415dc535f8a
53 1037e361a7a5f0ad918f45a551a9ed85105b3845
45 10e42e77ad75a6cef9a2ee16d92b45cea52a4764
7 10f175eb7842a046ca6301101c5e00ed2baeed87
24 114f841d58f4d33fd7d3fc179d7e73f7b0daa298
107 11cd576edf06474f3759baa0a17ad7e4fabd644f
20 1267187f84eeeb6fd4b2f1010cdb8f691d35664c
22 13e2c751a5534a88536cd92a5f41504fafbf8694
6 14c5420808726d39995345193325cdd4b6872207
4 158f4a8b3c9fd621a316a57559c9c4af00700e95
6 16669e23f871b9d1ea8a23f6bada0489f84d3f3b
4 17182a049e7672b54cccd266bd2158e99df3883c
24 175f699bbf05bcc0a5e4a1674096acf836d14c35
66 17e1becfb3484ef8889276316c549dfce491885d
15 18a7f85ff25607b98bd909c3ddcf6a972e06491e
44 18f3728101ef1aac4a542b522548e49502708800
53 193bc51f7e6cafdbcdce4e531f8c38d04f7db068
7 19c1c06d52acdcb2f22334441ec2dfb38738b6c7
6 19ce52abacb2fb76d5cd59aaa457f55b67589cc0
32 1c9be241580706f09c2d60a526c01dc9e068e7c9
8 1cad8cf0778f4da69d60d852ad1d6e19cfdafa6d
11 1e97250d1b8681bbdd00374c25930e5d84afd9ae
22 1f70fbce4d89119352642b4139eb295088916885
37 21cb833d3c9149627aeb3cffa338249694fa402d
9 22686f4cb3c4a2c6b161767b35ff088cfa7abdf8
24 228897ccce83bb0ad28ce7de09f8b59984294e29
19 22e3d96fc99c550ebae018faa15960422425fb04
20 256c5496e366a6474454d4f69eead171502deeae
26 26acf5e966c424e11a37029e566e7f8d9e508d8e
121 271ed989491dba88808a566da571a98993e2e433
17 27ca84598fc794ab52ecf813cacd831a1aba351a
19 29dff6268e24afe024ddcae4e574f41d4af23edb
23 29f576fdc7cc6f2ccd0cd58c1be4fafe159f6e9c
133 2a3d0a9e10dfc9e59d43f8eb9979bbd095d72013
17 34940117329cdb7cd830d2919c989541e98187b8
23 34c3c42955e989a9cc43ced91250372f99437678
196 364d0f5a80baa1d9668befa66f1b712761657ce7
96 376564c90da6d119ff6008b82fae3c58e6aa75c8
5 3874b20d473252c6028a2942fd45ac34bdeebfc0
24 3a700d46172a6bc12018fa63190c58837ef56ba4
37 3b395ff8ff82eb81f94b2fc12deae382bf35f15a
15 3d34b168fca23d429f75f820993ea29005d9e72c
50 403e7ebe55cb39253755019f3a77e078cec87fbc
33 43378fcce40462c7fc4d99aa5752aad4e0c7f16c
11 43af86b5889004ccef6f86ff424c020c17becddf
47 443f9e48133f01b715da0fd34fa257f74e756bf6
2 47be9a4e1c9f5524c54653e2ee8f27e94f76994f
22 48498e49d757e03d19ac81d2d8fea207818841ce
42 4bdc3ec200fd79e83b0518036ee265b6293d386f
37 4c1a4adcbb8c28f0a57e1498be289f7cfb6df8f4
21 4cd5559d133c8cc5d00b4f65e2843ee97db59886
1 4d46bf5168d74e46e90838d7a52fb28b88a5e7f1
1 4e53c990bc89b89deca953eb538737fcc3099868
63 4ebb2a1258d8a6abab5b48353e55bb913d87c84e
7 511352867099378fe6781bcf67a772d6227b94ed
33 5202c35ee94896c922efb9e689ce7a8454579a24
57 52b49d39eee0e179374c6747746ad699f01a1ce2
30 557f93f6b4ba0d5fc716e5f0de05c8b5dd720743
39 55f2a9c7c1578e8cac64210c09c56ae4f3c1c1a9
24 5613845b822c654c60d35467e1f935f5abec38e3
19 562f28edd5827847b8a30ef0fbf85260c21c515d
10 57a627bf250d1a3d4ed3ae2a184a9607b3a49706
70 57bef769d166e480d96ea00ebbd9ac000d4812ce
79 58fe91c2b2409f3d29743a8ceb78bd332a8e3254
29 5913a356f9aae535119e406d835c7c6daefa03f2
52 593844dc7f0076a1aeda9a6b9788af17e67c1052
15 59d0672be3e44e1a3a84732a228bc79a9b58437e
77 5b5576f1e15e1fe6091c9d2371ae8a9ad449a983
15 5bb3849b16cce4092be331d452f9e7d28930583c
76 5bcee17a5a27ec03d641a3b93bc3d92de21c04a6
6 5c77f543167bcae9b9023c9170a0a3d14d8821ee
141 5e5f83285aeb79777ab1a993a4e536a3062b293b
1 60049deb51ad6f225b3d87a861c598bd28c35803
62 613a20fb181673080a6aa127fa3fe949554ec8fa
47 63a62572b1a203fa1347fb5141d281cfd3b42d08
47 65b88e3821bb512b6a2609701d7d4c6670421906
45 66a5c857e7049bea5d52c89024a4af4a46f4b19c
16 6735793f6e1041678aee205901fa7dc6548a10f6
107 6844d551ae5791b1948816556bf2e10f40a87881
18 6a05111d5ec131d12805f72969d15021e2f0ea10
2 6abeb6391492f091c9c2cb1fbe83b776bffcfae0
20 6b8df84443967627eefe8cbde4a7d643282c3786
31 6c5647380067af774bab1fce33e66ddc7789908b
47 6d0a177a65e2dca3ddf1055bb1160cd1250e7e11
22 6d4baeb9d8406397e765a753069c3b6257d69c81
16 6dd28bf69a595e58eae9a04e3fa368025292df32
15 6ed825724b7233ebc1031e815b89d58a19388fde
4 708df8246c55325167bd95b76599a606a6391c67
46 70b731e6be92504973d509476b2046b0422560c4
126 7446ac56fe3d76d4881015db1b5267f62c6f7f10
7 74e5ccf05bb544ea145983849cbd54e12e953189
4 75b5ae0d7cb3ba95cebd7fed1a8e1b38b1b65a50
45 776d88bc07362a3aa9f84564b400e7d6be136443
2 77e9af6d82f29482ffb4f7b575aa5b4184861991
52 7abf82acef4889e4d7e62d82deda5a9b088b12f9
77 7bd5b092e877340ce2cd185ebefef6e33087cda1
28 7dd5caf2942f52e7eec6b45b25ee10b78a3ac3d8
89 7fe86fc5940045db6767c08cd8c34280f497a5ca
25 80445f5422213d73b2a2fa11a6cbd29daf86e91c
17 8260345d73af2a73dca08cadceadc5d9c408b3d3
14 82edbfbdd1261022fe8ba95bc0c04de3c61aacc9
223 8383958051b6faabc0f9f0a2d92dac4a3cc69072
75 8521546c19886e04ec9d1e8632c3497fb5d49b29
61 869025a2c19c667adc13741ac7473b7908243949
3 8930943884683040f7667f898b849863ca1be8e5
17 8c1ff804e608078c9a25ce14e29d7ee939afb903
33 8c6d5f0693527dfd02b54c1045516acebf375fe3
78 8e826ac4f0a334c72d4bf3975597d26870d0ab7c
39 8e9faf8703aac0cd5389a58cc6c602e782b9fef0
11 8edf32dda0a5b709fe2461417cd2f188db62f690
78 8f01122ecd5a0e117d4d674f5fda569b0d707653
62 8f60a25be7c248b020e29b161db683ceb4a21940
27 923af22e148d271a4853f16b3198c79448d2a1c9
19 92b4d3ce10d0993cc78a407d2faa3e942d4429ba
56 9311d1f677a4eeb5904e815e260a8e3d3a1ba5c8
16 96e1e3092903d1fe6d11f2af144007e126be308b
7 96e96184beab482bfff1560fad2c32f2b663572f
23 9a6fac6e984b6d08639d76ca2210200031688dc9
73 9ba28c18efa5a672b1aa9c220f64a4f47cf14e90
17 9c985cc1e2ef8c97047c1f63bf1f17799b632dd5
100 a0eb1c0ab37be8eb010fb8eb334e017ea0d62202
35 a18e0f6ac5904fafd2a68e7f05bf8b5412d745dd
52 a22ecfa5251dc6c3060e0affc7b4e68c75a985f4
12 a5b8b3af70623465b7bc045795d6848f36bfeb77
92 a7534e8279615cb6d298df79899fa5601f86f282
16 a7e7d3507e14c770d3b4dd4c7c0e399d2ee5427d
30 a880d6bf6defbc977d46931a77409f8852721cec
11 a9f4706cb0887eb5c88a1e3503eb42b2d7895be4
30 ab1844b11dbb112d3ed5fda161c20cc366d0a66b
109 ab917c96eccbe4f951841c6ffa99a34095feebdb
13 ac225eab0f4e66674a28dd94ca7eacd99645230a
94 ad86bef696d498f6c91514fe997968df0d0c8fc1
25 ad9d0e00a6bbee6f14e141d265080a6baa18e118
7 af115d0a5e04080abca42c3484c696fe8baef6c9
20 afc215a54d32372a5cd35b3effafa8b2d4d76a95
29 b0f82d3de7f0d6e51e9b3f1f7bce53af8883a438
5 b1fc7d8ae7a962fbbbbab3c6c8ed9f9f349a219e
15 b24f026f21ea9af17c68b1cee1a6cf5e5c36831d
39 b2c94877301c36dd76d81c1f683458b1250ff9cb
31 b408aeb67291e4d1994a95394e674c09eb683d3c
31 b74c5d3513dc60f92f01d9da063b7f962ae23b87
52 b797bf502d3b8077f6a4a2db1a56011466ad7d68
4 b7a4d85fb7b4a1d3627f15504550193e0921a86e
15 b8dea102d16d41ba1c20f4bf23908c528aa8b0fc
11 b9ff536a3a750c9ae6c30b5ea3c5c9d70915ef80
26 bae7509bf9c338ceb771be714cded8e3a576a039
42 baf47f2bf2198c35c221aa61b6aacd440e220bfb
5 bb5e6f37f4d3345978634bf521c4284bbee42233
23 bc5854804c4a28395abe5fb6248ec8571e2d65f2
57 bcf52c85c9c41e37e833615a4a5b28f7eefb4cd7
17 be987c0149f89b21b57c66293bb6bfca695ded32
458 c08fd9d8659c8b9db7d589b1b8dca546259fcd52
3961 c260cbad55e30aaa5583c06b567a0deefc9e9e03
3083 c340d4d6491b67d3699661c8a4ac3234d253081a
1423 c345f159f410ff137deacd53e0b734d6edee8677
1208 c502365175cd94af3079f6852f73c8b2abc95dcd
2818 c6447d2aac3ad5fcc0977f46f302e21c93dcf341
1563 c6c6292880ad49fe52e049293f45f27245bb77d9
4247 c70103efd636ee59cceb8b701114a9172cea58f0
4444 c774873edb6d98d6672d6e87bb3b2b12fb162945
4409 c7e8e45d741092ff84b73178d13a4e6a5b68349d
1860 c88a06df286651b18efd2e67496d31eca6d7f419
1579 ca1f258e640be38459aa3082273d43befb839808
357 ca96a70d929f269233b37c1dbdd9891a510d548c
279 cb514e281febdd17016681b49086a03a6b1a5cdb
11 cc2890e00bd123e3b78bab52bae6464d7a285e42
3276 ccd14fb218fd645ca22c1272d2a054c306b05f43
71 ce18954e27b7b746330c6f3d7ca5ae64c7a2ef1c
3 ce86f5c56c4892ec9d8cdd6955079e10aa639ff3
82 cf5a9c4aa7a72ecb57cf3cb15244d1b18d5fad87
3093 d0656e0cce1cf5f489362be46c3eb778e76210e8
45 d110403785ff3096db9547e2360552270781e2b0
85 d12fc5387f2b93e133581c00992ade19f7ea5e74
4 d252ec580497cf3649c767463429693a11fb9eb4
3 d518b7a5a1de5db1b934ee810af102dc458aec8a
3 d589017bde2bc5bb2978d3407bcd4eaaebbeab12
26 d7a1147c7398413bed31ab9a3145ac842e1b50b2
12 da2e0fd632eed06894e025f3ec72e14e8d2ed0dc
10 db460235141cb54e5630c23a2483e9c617e9434a
29 db9a8f708a68dd224789824dac988b233b7e22ea
3 de2c331c77a39f0512a782543cd2740bd2f8d01a
119 de900ca48453858913b3b368d7d685fc8143e910
13 de9d0a6d17dd847ea40108b349be39dcc683fd65
9 dea332db096c3c7cb87755e5b5a1aec23916c465
6 df0473d0121711364391fea48842bc1301a3f6be
7 df67d87663c72b3e56403c0dc83182e07e988a01
28 dfbd227a5af9a100b00e958a5f74dc2d7ad6e958
515 e27db9baaa0842756a142e85607d316293f9115c
58 e2e6393c96ac725de1f328534c377b9e6c92f174
13 e4e8b91b80993d006a12030312afcf6b3ed32454
12 e4f8962be5443c06198d2621d95afc239a852d7b
10 e6df4c0e897dfabe63ea2236b9e9be32141f55bc
1142 e91ab770182c3bdb329a569205a13d144e90c187
23 ea44b38cfd7767e2bbc4578891d2d3b0e49bad5c
15 ecf2caa039020005e8a4d0d8b880c2c96ddef0e6
464 eec18b1c29cf7f6d4948e3edce69ecb97dddb678
28 ef28179632549c53122c1c5c29a19bbc20d7e2ed
34 f21c5c830b401e217a0b43bd91ca85ce5417753f
97 f4a2759835e7d5b6787709ba16011c6d3c5d4ba1
79 f5bcd3040322241be9f0f395ef21eb97d34d930d
14 f6546cfa6018a2b68a820cb0587438c05b71c47e
32 f74f3eaae7874e483175a407ce5c7704136e74cb
6 f86b6df3c0fa2b9dde7f5c9248a57ec0003858b7
23 f8ee04fe093e6cd4c84e4a3e20d5b064c67efd31
271 f979d027ceaacd5e7e2969471b93cd6b5cb40953
6 f9e352ca9401f3c7e394bd6e12fc3743ef0c2728
129 fb6bb723ed3c33d32bfc1f05e0e19c7d79019d62
34 fc31c60b7502d16e5535cd9d4267e003ef3ea310
83 fcfb509255ff1ce36ba4b98b6e4c28a4c83c903c
11 ff4cf2865a925a2d9b9c3b679f9a174a02204090
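To quantify how concentrated the traffic is, a quick tally over the nodeids.txt produced above (assuming the "count nodeid" format that uniq -c emits; the cutoff of 12 is arbitrary):
total=$(awk '{ s += $1 } END { print s }' nodeids.txt)
top12=$(sort -k1,1nr nodeids.txt | head -12 | awk '{ s += $1 } END { print s }')
echo "top 12 of $(wc -l < nodeids.txt) node IDs received $top12 of $total PUBLISH messages"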
Nodes close to the bridge look like this (the 64618 SUBSCRIBE messages still follow this rule):
1 8383958051b6faabc0f9f0a2d92dac4a3cc69072
1 a0eb1c0ab37be8eb010fb8eb334e017ea0d62202
4616 c08fd9d8659c8b9db7d589b1b8dca546259fcd52
4806 c260cbad55e30aaa5583c06b567a0deefc9e9e03
5726 c340d4d6491b67d3699661c8a4ac3234d253081a
2241 c345f159f410ff137deacd53e0b734d6edee8677
1679 c502365175cd94af3079f6852f73c8b2abc95dcd
4607 c6447d2aac3ad5fcc0977f46f302e21c93dcf341
2448 c6c6292880ad49fe52e049293f45f27245bb77d9
6691 c70103efd636ee59cceb8b701114a9172cea58f0
5718 c774873edb6d98d6672d6e87bb3b2b12fb162945
6129 c7e8e45d741092ff84b73178d13a4e6a5b68349d
2015 c88a06df286651b18efd2e67496d31eca6d7f419
3114 ca1f258e640be38459aa3082273d43befb839808
623 ca96a70d929f269233b37c1dbdd9891a510d548c
318 cb514e281febdd17016681b49086a03a6b1a5cdb
3825 ccd14fb218fd645ca22c1272d2a054c306b05f43
80 ce18954e27b7b746330c6f3d7ca5ae64c7a2ef1c
2365 cf5a9c4aa7a72ecb57cf3cb15244d1b18d5fad87
4739 d0656e0cce1cf5f489362be46c3eb778e76210e8
83 d12fc5387f2b93e133581c00992ade19f7ea5e74
24 de2c331c77a39f0512a782543cd2740bd2f8d01a
17 de900ca48453858913b3b368d7d685fc8143e910
10 dea332db096c3c7cb87755e5b5a1aec23916c465
899 e27db9baaa0842756a142e85607d316293f9115c
1270 e91ab770182c3bdb329a569205a13d144e90c187
413 eec18b1c29cf7f6d4948e3edce69ecb97dddb678
7 f4a2759835e7d5b6787709ba16011c6d3c5d4ba1
150 f979d027ceaacd5e7e2969471b93cd6b5cb40953
3 fb6bb723ed3c33d32bfc1f05e0e19c7d79019d62
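To connect this back to the routing-table discussion, a rough split of the PUBLISH counts by XOR closeness to the Server012 nodeID c0792c4c... (nodeids.txt and the prefix test are assumptions based on the pipeline above; IDs whose first hex digit is 8-f share its most significant bit and therefore sit in its nearer buckets, while IDs starting 0-7 fall into its single furthest bucket):
near=$(awk '$2 ~ /^[89a-f]/ { s += $1 } END { print s }' nodeids.txt)
far=$(awk '$2 ~ /^[0-7]/ { s += $1 } END { print s }' nodeids.txt)
echo "PUBLISH to the nearer half of the ID space: $near; to the furthest bucket: $far"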
The difference between the first and the second log was the nodejs version of the bridge: the first log was taken with nodejs6 and the second with nodejs4. This, or some other update, changed the PUBLISH nodeID distribution. It looks better now.
@bookchin The contact list looks ok. The bridge is holding ~20 nodes in each bucket. Too many SUBSCRIBE messages (https://github.com/Storj/core/issues/257), but they have the expected distribution.
I can now calculate the number of tunnel PUBLISH vs. contract PUBLISH messages. SUBSCRIBE distribution:
4616 c08fd9d8659c8b9db7d589b1b8dca546259fcd52
4806 c260cbad55e30aaa5583c06b567a0deefc9e9e03
5726 c340d4d6491b67d3699661c8a4ac3234d253081a
2241 c345f159f410ff137deacd53e0b734d6edee8677
1679 c502365175cd94af3079f6852f73c8b2abc95dcd
4607 c6447d2aac3ad5fcc0977f46f302e21c93dcf341
2448 c6c6292880ad49fe52e049293f45f27245bb77d9
6691 c70103efd636ee59cceb8b701114a9172cea58f0
5718 c774873edb6d98d6672d6e87bb3b2b12fb162945
6129 c7e8e45d741092ff84b73178d13a4e6a5b68349d
2015 c88a06df286651b18efd2e67496d31eca6d7f419
3114 ca1f258e640be38459aa3082273d43befb839808
Too many tunnel PUBLISH messages (https://github.com/Storj/core/issues/258), but they have the same distribution:
458 c08fd9d8659c8b9db7d589b1b8dca546259fcd52
3961 c260cbad55e30aaa5583c06b567a0deefc9e9e03
3083 c340d4d6491b67d3699661c8a4ac3234d253081a
1423 c345f159f410ff137deacd53e0b734d6edee8677
1208 c502365175cd94af3079f6852f73c8b2abc95dcd
2818 c6447d2aac3ad5fcc0977f46f302e21c93dcf341
1563 c6c6292880ad49fe52e049293f45f27245bb77d9
4247 c70103efd636ee59cceb8b701114a9172cea58f0
4444 c774873edb6d98d6672d6e87bb3b2b12fb162945
4409 c7e8e45d741092ff84b73178d13a4e6a5b68349d
1860 c88a06df286651b18efd2e67496d31eca6d7f419
1579 ca1f258e640be38459aa3082273d43befb839808
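For anyone who wants to reproduce that comparison, a sketch of a per-node join of the two lists; subscribe.txt and publish.txt are hypothetical files holding the two "count nodeid" lists above:
# Print SUBSCRIBE count, tunnel PUBLISH count, and their ratio per node ID.
join -1 2 -2 2 <(sort -k2,2 subscribe.txt) <(sort -k2,2 publish.txt) \
  | awk '{ printf "%s  SUBSCRIBE=%d  PUBLISH=%d  ratio=%.2f\n", $1, $2, $3, $3 / $2 }'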
At this point I would say that something between the first and the second log fixed this issue, most likely the correct nodejs version. Now we have a perfect distribution for contract PUBLISH messages. Shall we close this issue and go on with https://github.com/Storj/core/issues/258 and https://github.com/Storj/core/issues/257?
It looks like the bridge is sending 30000-42000 tunnel PUBLISH messages. The best case would be ~42000 tunnel PUBLISH and ~5844 contract PUBLISH messages for 1838 shards.
New issue for the 42000 tunnel PUBLISH messages https://github.com/Storj/core/issues/283
-.-
Package Versions
Newest as of 2016-07-09; storjshare -V reports:
StorjShare: 4.0.5 Core: 1.4.0 Protocol: 0.7.1
Expected Behavior
The bridge node does not spam my node to the point of a DoS.
Actual Behavior
server013 (actually only this one) makes more than a few connections per second to my node, driving the main process to 100% CPU and in practice blocking it from getting any shards - practically confirmed. Blocking it with iptables instantly lets shards come in, and CPU usage drops sharply.
Log included; this is happening all the time, and the log covers only a few seconds.
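For reference, the block described above is a single iptables rule; the address below is only a placeholder, substitute server013's actual IP:
iptables -A INPUT -s 37.35.96.13 -j DROP    # drop all inbound traffic from that host
iptables -D INPUT -s 37.35.96.13 -j DROP    # remove the rule again after testing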
Steps to Reproduce
Get connected to server013 and wait some time :-)
s13.txt