Closed. Diniboy1123 closed this issue 2 years ago.
Have you tried building from head instead of 0.7.0?
No, I haven't yet. Unsure whether master is ready for production yet...
What speeds were you expecting to get? Have you tested speed in each direction separately? Downstream to the client should be faster.
No powerful machines should be needed. I would expect the big resolvers like 1.1.1.1 to throttle more, so I would suggest trying your ISP's DNS as well. If the traffic isn't restricted, you can also give the iodine server's IP directly as the DNS server for a speed comparison.
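If it helps, here is a sketch of the two client invocations to compare. All values are hypothetical placeholders: t.example.com as the tunnel domain, 203.0.113.10 as the iodine server's public IP, secretpassword as the -P password. Both commands need root to create the tun device:

```shell
# Hypothetical values; substitute your own tunnel domain and server IP.
TOPDOMAIN="t.example.com"
SERVER_IP="203.0.113.10"

# Normal path: queries travel through whatever recursive resolver
# the system is configured to use.
VIA_RESOLVER="iodine -f -P secretpassword $TOPDOMAIN"

# Comparison: give the iodine server itself as the nameserver argument,
# bypassing the recursor. This only works when outbound UDP/53 to that
# host is not intercepted by the ISP.
DIRECT="iodine -f -P secretpassword $SERVER_IP $TOPDOMAIN"

echo "via system resolver: $VIA_RESOLVER"
echo "direct to server:    $DIRECT"
```

The direct run takes the resolver out of the picture, so a large speed difference between the two points at the recursor as the bottleneck.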
Can you post the logs from the client when the tunnel is set up?
I was expecting a bit more downstream speed than 20 kbit/s. Around 100 kbit/s would be absolutely enough.
I tried with my ISP's DNS as well and experienced the same speed there. The idea of using the server IP directly is really smart, though; I will definitely try it.
Gonna post logs in a bit too, just need to replicate my setup first, thanks for the ideas.
Alright, so I did some testing. Using the remote server that runs iodine directly as the DNS server yields the best results so far, around 170 kbit/s. I tested from an unmetered gigabit home network that, per the ISP's promise, is not throttled. However, the traffic came in spikes: the stream often hung for a few seconds at 0 kbit/s, then worked fine again, then hung again, and so on. I am starting to think the remote server itself is being throttled, perhaps its DDoS protection kicks in, though I don't see any errors or warnings in the iodine logs. I will try a different VPS provider.
I also did some testing with my ISP's DNS. Here is the first output: dns log #1. Max speed was around 17 kbit/s, again with occasional 0 kbit/s hangs, but otherwise the speed stayed consistently around that value.
Then I tried reducing the -m flag to 240: dns log #2. Surprisingly the speed was only around 6-7 kbit/s here, and very often 0 kbit/s. I don't see why, since I didn't decrease the -m value that drastically.
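One possible explanation for the -m effect, as a rough back-of-envelope model (assumed, not measured): if roughly one downstream fragment is in flight per round trip, throughput is capped near fragment size × 8 / RTT. Shrinking -m therefore shrinks the ceiling directly. With the 48 ms ping mentioned in the issue, and 1130 as a commonly seen auto-probed fragment size:

```shell
# Ceiling model: one downstream fragment delivered per round trip.
#   throughput (kbit/s) = fragment bytes * 8 / RTT ms
rtt_ms=48
for frag in 1130 768 240; do
    awk -v f="$frag" -v r="$rtt_ms" \
        'BEGIN { printf "-m %4d -> ~%.0f kbit/s ceiling\n", f, f * 8 / r }'
done
```

Lazy mode keeps several queries pending, so the real ceiling can be higher than this lockstep model; the point is only that 240-byte fragments cannot sustain much even in the best case.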
Finally I tried using TXT records: dns log #3. That was definitely the worst of all the tries, 1-2 kbit/s at best. :thinking:
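For reference, the three variants above differ only in client flags; a sketch with a hypothetical t.example.com tunnel domain and secretpassword password (-m caps the downstream fragment size, -T forces the DNS record type used for downstream data):

```shell
# The three client variants tried above (all need root):
BASELINE="iodine -f -P secretpassword t.example.com"        # auto-probed fragment size
SMALL_FRAG="iodine -f -m 240 -P secretpassword t.example.com" # cap fragments at 240 bytes
FORCE_TXT="iodine -f -T TXT -P secretpassword t.example.com"  # force TXT records

printf '%s\n' "$BASELINE" "$SMALL_FRAG" "$FORCE_TXT"
```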
Closing since this is not a bug. As mentioned earlier, I suggest you try the latest code.
Hi,
I am using iodine 0.7.0. The host is a fairly powerful aarch64 machine running Fedora; it's not on the same network, not even close (within EU borders, but in two different countries), yet I get an average ping of 48 ms without the tunnel, which is quite decent. Raw mode is disabled and lazy mode is enabled.
I am getting speeds of ~20 kbit/s and that's where it caps out. I tried lowering the -m value, which didn't help; it made things even slower. I thought the bottleneck could be the DNS resolver, but I can reproduce the same issue using 1.1.1.1 or any other public resolver from my unrestricted gigabit home network.
I also tried running dnsperf and hammered my DNS resolver with 10k requests over 10 seconds; all but about 16 went through. I was requesting the same thing iodine does, a single TXT query. I can see the network interface is actually under good use, with around 4 Mb/s of traffic on it. If I run iodine and the benchmark at the same time, the benchmark still performs fine with few failed requests, yet iodine is as slow as usual.
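A dnsperf run like the one described can be sketched as follows. The query name is hypothetical, dnsperf's input format is one "name type" pair per line, and the invocation is guarded so the sketch is a no-op on machines where dnsperf is not installed:

```shell
# Mimic iodine's single TXT query in dnsperf's input file format
# (hypothetical query name under the tunnel domain).
cat > queries.txt <<'EOF'
z123abc.t.example.com TXT
EOF

# Roughly 10k queries over 10 seconds: cap at 1000 qps, run for 10 s.
if command -v dnsperf >/dev/null 2>&1; then
    dnsperf -s 1.1.1.1 -d queries.txt -l 10 -Q 1000
fi
```

A high success rate here while iodine stays slow suggests the resolver handles the raw query volume fine, which matches the observation above.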
In the stdout output I am seeing some "SERVFAIL: server failed or recursion timeout" and "NXDOMAIN: domain doesn't exist" errors from time to time.
Can I do something about these? Seemingly I get fewer errors from iodine in legacy mode, but then it's even slower.
Thank you so much