Closed MarkoZabcic closed 5 days ago
@MarkoZabcic it looks like you have a lot of non-TLS traffic reaching your proxy instance on the TLS port. Do you know what the source of that traffic is?
The proxy will be rejecting those invalid requests (hence no requests to your application). But depending on the volume of traffic it's dealing with, it will still consume some CPU. What's the rate of non-TLS requests that you're receiving?
@kevinmcconnell you are right. The VPS is brand new (only 2 days old), so I wasn't expecting any external requests yet.
> What's the rate of non-TLS requests that you're receiving?
Can you advise me on the best way to extract this?
Here is network info:
root@kamal2:~# ifconfig
br-4062c4accee9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:f6ff:fe4a:a7 prefixlen 64 scopeid 0x20<link>
ether 02:42:f6:4a:00:a7 txqueuelen 0 (Ethernet)
RX packets 96517158 bytes 5132155300 (5.1 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 102237492 bytes 15636841029 (15.6 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:89ff:fec2:74f6 prefixlen 64 scopeid 0x20<link>
ether 02:42:89:c2:74:f6 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9 bytes 966 (966.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 159.69.55.179 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 2a01:4f8:c013:61bf::1 prefixlen 64 scopeid 0x0<global>
inet6 fe80::9400:3ff:febf:645 prefixlen 64 scopeid 0x20<link>
ether 96:00:03:bf:06:45 txqueuelen 1000 (Ethernet)
RX packets 104789183 bytes 18002856710 (18.0 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 99087864 bytes 6811039359 (6.8 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 8436 bytes 1012426 (1.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8436 bytes 1012426 (1.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth2eee821: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::98ba:c2ff:fe15:8bd6 prefixlen 64 scopeid 0x20<link>
ether 9a:ba:c2:15:8b:d6 txqueuelen 0 (Ethernet)
RX packets 34250313 bytes 2283689563 (2.2 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 36258790 bytes 5538481371 (5.5 GB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethdb1ef6e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::1080:4eff:fe66:a073 prefixlen 64 scopeid 0x20<link>
ether 12:80:4e:66:a0:73 txqueuelen 0 (Ethernet)
RX packets 452 bytes 337733 (337.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 486 bytes 102368 (102.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@kamal2:~# docker network ls
NETWORK ID NAME DRIVER SCOPE
ff1e05bb3f48 bridge bridge local
3e0dcf4c816c host host local
4062c4accee9 kamal bridge local
25e4984570d1 none null local
root@kamal2:~# docker network inspect kamal
[
{
"Name": "kamal",
"Id": "4062c4accee9ba00ac2a166986c171892cc55189c5caa4cb41379fd8614dd822",
"Created": "2024-10-02T19:41:41.112935774Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"77e6ef13c478e3d96077f26a2987e804a464da164a242cc5dd0e521bfb9a5965": {
"Name": "r8-web-d17e883235c7d01ef68fc15f90fa73a379f98c6c",
"EndpointID": "80aad3c19b42076cb7abd26cfeaee97d76c0893d63a7204694ef07bd19d9041d",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"ce53432810a7054d48c6cfa81bc12bf976014de34bd23fee6020d8d08ff3c347": {
"Name": "kamal-proxy",
"EndpointID": "c70dad43300fed677d5f29baed8b38a2ca23552016cdef67f53f567c3b1919f8",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Here are graphs from Hetzner:
For something that's supposed to be idle, the network load does seem high. For comparison, here's one of my instances that is running the proxy but is currently mostly idle:
You could try sampling a section of the logs: count how many "handshake error" lines you have between two timestamps, and use that as an estimate of the rate. I'm not sure if there's anything wrong with the proxy here, but it would be good to sanity check that the CPU load seems reasonable for the rate of requests it's dealing with.
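The sampling approach described above can be sketched as a small helper. The container name kamal-proxy and the log phrase are taken from this thread; the `rate_per_sec` function name is purely illustrative:

```shell
# rate_per_sec FILE PATTERN SECONDS
# Count the lines in FILE matching PATTERN and print the average rate per second.
rate_per_sec() {
  awk -v secs="$3" -v pat="$2" \
    '$0 ~ pat { n++ } END { printf "%.2f\n", n / secs }' "$1"
}

# Example usage against a one-minute slice of proxy logs:
#   docker logs kamal-proxy --since 1m > slice.log 2>&1
#   rate_per_sec slice.log 'TLS handshake error' 60
```

Adjusting the `--since` window trades sampling noise against how quickly you get a number.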
To actually solve your problem though, I wonder if you'd be as well trying to switch to a new VPS. Maybe you were unlucky to inherit an IP that was previously getting lots of traffic from something? I haven't seen this pattern on my own Hetzner instances yet -- I usually do see bursts of bot traffic trying to break in, but it's usually not very much, and it only shows up periodically.
It appears that I've been unlucky with the IP.
Here is the data for a period of 1 minute:
root@kamal2:~# docker logs kamal-proxy --since 1m > slice.log
root@kamal2:~# echo -e "TOTAL: `wc -l slice.log`\nERRORS: `grep 'TLS handshake error' slice.log | wc -l`"
TOTAL: 13071 slice.log
ERRORS: 13071
That is around 217.85 requests/sec (13071 / 60).
And all of them resulted in a 'TLS handshake error'. So there are zero requests to port 80, and no other valid requests at all.
If I do:
# local machine
curl -I 159.69.55.179:443
HTTP/1.1 404 Not Found
I get the same output in the logs:
{"time":"2024-10-04T10:59:55.5971099Z","level":"INFO","msg":"http: TLS handshake error from 33.44.333.111:44776: client sent an HTTP request to an HTTPS server"}
So kamal-proxy responds to any request reaching the server, not just requests for the configured domain 'essential.effectiva.studio'. Could a solution be to quickly discard any requests that are not going to a configured domain, and skip logging them?
But if this was real production server what would be our options in this case?
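On a real production box, one generic option (a firewall-level approach, not a kamal-proxy feature) would be to rate-limit new connections per source IP in the kernel, so abusive clients are dropped before they cost the proxy any CPU. A sketch using iptables' hashlimit module; the thresholds are illustrative assumptions, not recommendations:

```shell
# Config fragment (requires root). Drop sources that open more than
# 10 new connections per second to port 443, allowing short bursts.
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
  -m hashlimit --hashlimit-name tls-flood --hashlimit-mode srcip \
  --hashlimit-above 10/second --hashlimit-burst 20 -j DROP
```

Something like fail2ban watching the proxy's log for repeated handshake errors would be another common variant of the same idea.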
Just out of curiosity, I ran kamal remove and installed nginx and certbot directly on the host to see how it behaves.
nginx CPU & memory utilization:
tail -f /var/log/nginx/access.log
93.110.206.213 - - [04/Oct/2024:13:45:34 +0000] "!\xDAkoWDq\xBC\xC6\xA1G\xD3\x87\xA8\x1C\x10\x13\xF3\xC6H\xEF\xC7ain\xB9\xC9j\xFA\xF7\xC2\xBD\xE3\xF2\xC8\xBC!\xD4\x9Fn\xC5\x1F\xC7\xF7\xAA\x1D\x8BYx\x94\xA3\xFB\xF8\xB8\xAE\xB9\x924Rl\xFB\x12\xFD}P\xD0ks\xFC+!\x16; \xEFg\x9BWz\xE7\x8D\xDDp\xD9\xF3\xF2\x8A/\xC5\xA1\xDA\x8B<m\xEE\xA6\xD0\x1A\xE4\x8C\xB40\x07\x912\xEB\x8Eb\x95\xB9\x22\x97#\xF1\x08\x90\x98j\xFD\x0E.k+\x04\xF3\xC9\xC2hu\x13\x8B9[\xAC'88\x9B4\xE5\xD6V\xDF!\x90\xF2-\xBF\x11\x5C\xA4\xA5\xD8%\xCB\xD38" 400 166 "-" "-"
164.215.218.245 - - [04/Oct/2024:13:45:34 +0000] "\x1D\x96\x8D\x8B\xAC\xFEz\x8E\xFAZN\xF9+$*\x015\x06\xCF\xC7\xEF9?re\xD4\xDA\xB0R\x8D\x84\xF8\xB7S\xED\xFF" 400 166 "-" "-"
5.117.105.234 - - [04/Oct/2024:13:45:34 +0000] ":,\xB6U\xC8\x1Db\x19V\xECJzL\x19\x86\xDB\x9B\xFF\xE4\x95\xA6F\xAF+a\x0C\xF9\x89\xDD+\xFE\xA4u+?\x9C\x1D\xED+bA\x97\xF8\x19\xCE7\xF2\xCA\xC7OMR\xEFSB\xA7u[\x1F\x7F\xE1\xF1" 400 166 "-" "-"
5.114.179.91 - - [04/Oct/2024:13:45:34 +0000] "\x07-\x02\xD6%c\x87\x8A\xCF\xC6D\x90\x87\x98tD\xBD@\xB5\xFDU\xAA\x9Ao\x93bX\xC8\x04\xAAQZV\x03\xC62\x8F\x8E\xE7^\x1A\xA1\x88\x1B\x95\xE2\x87\x81Z\xAA\x16\xCB4I\x97`8\xA7o\xC6H\x92+\x8A\x9E\x1F\x8B\xBA\xFC%\xC2l\xBE\xE6\xA8W[\xD6\xF7\xD7}\xD8" 400 166 "-" "-"
178.131.150.135 - - [04/Oct/2024:13:45:34 +0000] "\x07\xA6Ov\x14\xC8kg\xF5F\x81\x10\xA0se%\x00\x83U\x02\x84\xCD\x11\xD1\xC4Xm\xB0X\xDF\xE5\xBAc\xF9\xC1\x97UV\xB7d|\xD3R\x85\x15\xD11 \xAC]\xF6\xE7\xF2\xB0\x8A\x11\x8EJ\x88\xE2c4\x8E#\x87\xFB\xE0\x80\x16\xE8=\xC9\xFDP\xD8TB{g\xBAP\xDB6)\xED\xF1L\x90\x87\xEAsN\xC8^\x1B1\xA1\xE0b\x0B~\xA3z\x86N\x9A\xB8\xF7\xA9\xF9\xD4\xA5\xBD\xE8\x00*\xE4\xB0\x91" 400 166 "-" "-"
5.121.74.90 - - [04/Oct/2024:13:45:34 +0000] "\x18 \x1E\x1C\x1By\x83\xA2\xAA\xB6C\x03\xA6a/\xA2,;'x\xA6(\xB5w\xD4\xD8\xE6\xF5A\x0F\x0F" 400 166 "-" "-"
142.132.185.236 - - [04/Oct/2024:13:45:34 +0000] "$\xD4]\x0C\xD9\xD2\xE4\xF0\x92T\xA1\x9FC\xC46\x04\xCF\xF2\xE1\xAC-\xCA\x95]B\x1F\xA1\xD5\x98\x8BBQ\x9D \xE3" 400 166 "-" "-"
5.122.191.220 - - [04/Oct/2024:13:45:34 +0000] "R\x8A>\xF0\xB0\xB1\x83\xDE+\x13\xA4\xE0\x97,\xC1`\x87\xA8\x05\xF9h\xD9\xD4\x94\x17y\x87P\x81+|g\x1C \x09;\xDF\x19oU\x90\xCFW#9z/\xB3R]/\xFC\xED\x8C$u,\xE3\x1DYEk:\xEC\xB3\xC4\x97T\xAF\xB0S\x13\xB5_S\xD4hd`a\xD0p\x14\xE0\xD8\x81.b\xA55(T\xEE.D" 400 166 "-" "-"
2.147.101.211 - - [04/Oct/2024:13:45:34 +0000] "l?\xD9r\xFE-\xCE2R \xFCH_\x8E\xE4\xFAP\x1Ap\xC4\x00\xA6\xF4)\x02\x16)." 400 166 "-" "-"
157.173.127.229 - - [04/Oct/2024:13:45:34 +0000] "\x05\x10\xB5\xC6>P\xBC\x11\xBC\xF6\x86\xA5\xB6\x13\xB4\x92\x22K" 400 166 "-" "-"
50.114.208.46 - - [04/Oct/2024:13:45:34 +0000] "8@\x85\xA8\xD7\xB7\x03\xA7\xED\xFF\xE9\x00\xDCp\xCF:\x8D\xFC" 400 166 "-" "-"
86.57.42.219 - - [04/Oct/2024:13:45:34 +0000] "c\xBDj\xF7\xEFQ\x82Bz5\xE2Q'\xDEV\xB0\x05\xC9\x03Y\x1B\xEB\xD3\xBDP\xAF\x0C\x03d\x8C.I\x97\xB8\xC6\x88=\x99R\xBB\xB5\xE0\xF2\xDD\xDA\xCAC\xD8\xC4v:\x9F#BZ$RdY=\x8C\xB2J\xCB\x95\x90\x82\x9BH4\xEE(\xD4\xBA\xB8O\xB3\x07w|\x1E\xC5\xCC\x06\xC3\xD7\xDA\xA5\x14\xE6\xDB\x01\xD1\xC0\xC8\x08\xE1\x1CZ\xCD\xFA\x16\x8E\xC9\xB7c\xDE#bH\x8C}\xB0\x98\xF0y\xCB\xE0\x96ca\xF4\xD2_\xAE\xB7,2|\x9Am\x07\xA8E\xA3\x95" 400 166 "-" "-"
188.40.85.153 - - [04/Oct/2024:13:45:34 +0000] "S\xBD\xFCj-\xC9\x9F\xFF\xA0\x9Bw\xF7\xFA!\x0C\xAAc\x16\x22\x0E~\xCCPQ\xF7>\xEF5 N\xD6\x9C\x09C\x7F\xAF7\xBB_e(r\x1B\xA2bf\x8A/\xFE4z\xBC\xDD\x02\x1D\xBF\xB8\x8D\x9B\xE5&\x11\x9Br\x9A\x02c\x15\x06 !\x89\xDA\xB7h\x1A\x91\x96P\xB7\xFCtx\x82h\xB3g+\x80\xC1\xBA>\xB62\xA2" 400 166 "-" "-"
141.148.229.210 - - [04/Oct/2024:13:45:34 +0000] ";2\xF9TQf\xA0W\xB9r}eh\x04\xADA`X~\x10\xF1=|\xC7\xCA2\xFAk*wL\x19U \xDF\xFA\x1D\xF2X\xAE\xAE\x0C\xEC\xFC\x1A\xE5\x05\xFA\xE2F\xF8\x1D\x8CJ\x87\xB8v\x19\xD5\x1CG\x07\x84a\x9A_\xB1|\xFD\x13\xB3\xF5\x00\xD3\xCDQ v\xED\x87_\x18\x12\x06\x85\x8A\x9D\xD6Ig\x95\xA9\xD7\xE3\x11\xB89\xF5' \xED\x86F\xAC\xA6\x81b\x9C[\x8C\x14\x08\xDB\x09A\x0F2'" 400 166 "-" "-"
89.196.138.112 - - [04/Oct/2024:13:45:34 +0000] "g\x97A3\xE8\xAD{\x11\x85\xAA\xDA\xD5\xA06C\x039\xF58\xBC$\xE1\x065\xE0S\xF9Z>\xDA\xCB\x0E\xE6B\xE4\x10)\xAE\x0CLI\xA4\xE8\x88\xF5l\x87?\x8A\xD3\xBB\xCAI\x0E\xBC\x18T\x94\xC8\xF0\x9ER|\x9DJ\x97;\x172\x98O\x94\x8F\x18h\xFD\x9E\xAC+\xE1\xD7\xD2H\x94\x8E4\xDA\xF9\x1D\x22\xF1+\x0E\xEB\xDF\xBC\xEE\xC6[r\x134\xF5\x96s\xC9\x1D\x87u\xAE\x8E\x8Cp\xDD9|\x947.\x1A\xAD\x822S4\xE1ni\x0B\x82\xF5z\xFA\x9F\xB1\xD1\xA5\xF8\x01\x94\xA9\xD3\xAA*v\x0FJ\xE6I\x95\x1D\xA8\xBBU\xCAPk\x90,\xDFH\xCC\xCBT\x8A\xD9\xCA\x04\xA4\x8F^\xD4\xB9\xDB\xABH\x04\x1E^\x0E\x0E\x9Dl" 400 166 "-" "-"
5.125.222.101 - - [04/Oct/2024:13:45:34 +0000] "\x5C\xA7y\xCB-\xCEg\xA5\x12\x81\x22\x19\x1Cs\xBE{\x96\x82\x9E\x1C\x87\x02\xF08\x1D\x12\x8FnW\xA3\xA7!\xFA\xAA\xF6\xD9F\x89\x11\xA9\xCEB\xC2\x82\x05\x1B\xEC\xBBx^Q\xA5H\xE4\xE6.0U\xBC!l\xCE\xA0*" 400 166 "-" "-"
5.123.71.161 - - [04/Oct/2024:13:45:34 +0000] "zo\xD5\x14\x89\x0E\x22\xF6aW\xAB\x98|\x98\xC3\xDEX\xDCy\x02\xD3\xB1+\xA8\xCD\x09\xBAd\xA0\x91\x11s)&\xF1\xE7\xE6\xA8\xD3\x9F\xA1\xFAZ\x80\xE1\xFE\x89\xBE}FS" 400 166 "-" "-"
172.233.45.165 - - [04/Oct/2024:13:45:34 +0000] "\x7F\xF8\x9B<~\xD8\x95\x19;\xE1\xA7\x02\xA9\x8Eb\x91\x19\xC3\xCF\xC1j@\xEB\xB6\xE9\x95\x17\xD1\xC4\xEC\xE0R[E\x22(N\xDF\x97\x93p\x17<\xAB\xCEl\xB3\xC6|\xCC\xD5bd\x90o\x13&\xD5[\xCA\xAE\x0F\xC8\xB6\x01\xF0:\xF6\xBD7\x8BeU\xEE7\x7F\x8E\xF3yQ\x12\xC8Qcx\xC8\xC1M|<@\x9E\x94\x9A\x9E$v\xF5\xF7B\xEE\xA7\xAE\xD2\xDAc\x0C\x1C\xFC\x0B\x86\xE6\xD3\x82\xA4\x9CA9a\xE4Y\xCC\xABj\x91\xE3\x10h\xEB\xAD\xF7LUH" 400 166 "-" "-"
5.117.91.226 - - [04/Oct/2024:13:45:34 +0000] "1Q\xAE\xA6\xEA\x09\xA5\xE18\x02\x17R\xDF,\xCF\x86\xB6~n\xE6\xCF\xEB\xC8\xAB\xDD\x16\x0C\x146\x03\xBE!0\xD2\xDD/\xB2\xE7\x7F\x9D\xFF\xBF4\x9B\xC6\x0F\x04\x19Y`\xEDd\x0B\x8Eo\x1A\xCEve%8\xBCb\x17\x5C[k;X\xABWIv>\x9E08H\xDE\x18n\x10\xF0\x223Y\xFE\xF2\xD8;5\x9B\xDF%\xC2\xA2u\xE5\x03\xA8%\xBD\xA6x\x88$\xC2\xF5;\xBD\xFA\xEA\x82\xB58\x98*\xE8\xD0\xAC=6\x17\xAA'\xB6\x11\xD9\xE1\x98[\x9DGY\x83;V\x99v\xDC\xF0B\x1E\x94C\xE6p\x8B\x9Ff1\xB4\xF1\xA8\x8E\x07q\xD2\xC9\xCB\xBD9\xD1:~>a\xA9\xA5\x9C>\x11\xDB\xDD>\xD7D\xE0)\x81\x1A#\x1Du\x7F\x85q\xA3\xA0k\xB7\xBB~\x007\xF5\xA7\x91#\xF7\x14\xADdE\xF2\xD8\xD8\x0E\xB2\xD7\x88\xBB\xA9\xD8#\xC5y\xF1\x84:\xC1T\xEA\x8Bs\xE3\xA9[6\xFFUi\xC1\x0E\x19y\xD0&\xCF\xE1\xEC\xEF\xD2\x0F\xD1T\x11\x5C\xF0&\xE4X\xE7Og\xAEty%\xDCYf\x0FHqZ\x05j\xCC\xEAY\xA0F\xA8\x1C\x8D\xC5\xFD\x0F\xAD\x1C\xE8\xBB\xE7\xF3\x96D\xD1?\x89J\xC5\x0FT\xF6/sC){\x22\xB6\xCCr\xF8'$\xBFN\x88t\xD5\x22\xEE\x92\xB1T\xA5\x9E\xF2\x84\x10\xDDA\xA5E\xF6!\xAA\x98I\x0B\xF8,2\xC4\xF6\x86\xC1T/\xBD\x9B\xBB\xEAf\x09N\x82T\xDC\xEA\xE3\xDB\x93\x83\x8BD\xDEeG\xA2\xC2*\x92\x22Z\x9F\xED\xD7G\xADI\xBF\xFD\xA1\x98" 400 166 "-" "-"
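A quick way to see whether this traffic comes from many sources or a few repeat offenders is to tally the client IPs in the access log. `top_sources` is just an illustrative helper; field 1 is the client address in nginx's default log format:

```shell
# top_sources FILE [N] — print the N (default 10) most frequent client IPs
top_sources() {
  awk '{ print $1 }' "$1" | sort | uniq -c | sort -rn | head -n "${2:-10}"
}

# Example usage:
#   top_sources /var/log/nginx/access.log 20
```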
As I understand it, nginx does not have any issues with the TLS handshake traffic.
root@kamal2:/etc/nginx/sites-enabled# vnstat -tr 30
56802 packets sampled in 30 seconds
Traffic average for eth0
rx 1.11 Mbit/s 981 packets/s
tx 744.90 kbit/s 911 packets/s
I don't think much can be done here, so you can close this issue.
@kevinmcconnell If you see an opportunity to analyze or debug anything further, I'm glad to help, since things like this will happen in the real world.
I've noticed that kamal-proxy, while idle with 0 requests to the application, is using around 12% CPU. The logs are just an endless stream of
TLS handshake error
How to reproduce:
- OS Host: Ubuntu 24.04.1 LTS
- rails new r8 --master
- Deploy to app.example.com with a real host
- docker stats --no-stream
- docker logs --tail 10 -f kamal-proxy
Here are all the changes in deploy.yml: