haraldrudell opened this issue 9 months ago
Thanks for the report. The goal of that limiter is to set a limit on the number of concurrent connections from a single IP and from the surrounding subnets. They are indeed hard-coded, which isn't great.
But I'm mostly wondering why you're running into the limit with 10 concurrent connections. That should never trigger the limit of 30 concurrent connections from a single IP. This is the snippet that increases and checks the limiter, then schedules its decrease and handles commands:
[...]
	if !limiterConnections.Add(c.remoteIP, time.Now(), 1) {
		c.log.Debug("refusing connection due to many open connections", slog.Any("remoteip", c.remoteIP))
		c.writelinef("* BYE too many open connections from your ip or network")
		return
	}
	defer limiterConnections.Add(c.remoteIP, time.Now(), -1)

	// We register and unregister the original connection, in case c.conn is
	// replaced with a TLS connection later on.
	mox.Connections.Register(nc, "imap", listenerName)
	defer mox.Connections.Unregister(nc)

	c.writelinef("* OK [CAPABILITY %s] mox imap", c.capabilities())

	for {
		c.command()
		c.xflush() // For flushing errors, or possibly commands that did not flush explicitly.
	}
}
I can only guess that older connections (beyond the 10) are still stuck in the command loop.
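To make that mechanism concrete, here is a simplified stand-in for what the limiter does (an illustration only, not mox's actual ratelimit package): the count for an IP only decreases when serve returns and the deferred Add(..., -1) runs, so a connection that never leaves the command loop occupies its slot indefinitely.

package main

import (
	"fmt"
	"sync"
)

// connLimiter is a simplified stand-in for the per-IP connection limiter:
// the server does Add(ip, +1) when a connection is accepted and a deferred
// Add(ip, -1) when its serve function returns.
type connLimiter struct {
	mu    sync.Mutex
	limit int
	count map[string]int // open connections per remote IP
}

// Add applies the delta n; a positive delta that would exceed the limit is
// rejected and not applied.
func (l *connLimiter) Add(ip string, n int) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if n > 0 && l.count[ip]+n > l.limit {
		return false
	}
	l.count[ip] += n
	return true
}

func main() {
	l := &connLimiter{limit: 2, count: map[string]int{}}
	fmt.Println(l.Add("203.0.113.7", 1)) // true: first open connection
	fmt.Println(l.Add("203.0.113.7", 1)) // true: second open connection
	// If those two connections are stuck in their command loop, their
	// deferred Add(ip, -1) never runs, so the next attempt is refused:
	fmt.Println(l.Add("203.0.113.7", 1)) // false
	l.Add("203.0.113.7", -1)             // one connection finally closes
	fmt.Println(l.Add("203.0.113.7", 1)) // true again
}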
Is STARTTLS used on the connection? In that case, perhaps the c.conn.Close at https://www.xmox.nl/xr/v0.0.9/imapserver/server.go.html#L687 is hanging for a while when c.conn is a TLS connection, https://www.xmox.nl/xr/v0.0.9/imapserver/server.go.html#L1477. I think that if the client and server of a crypto/tls connection both close at the same time, they will wait for each other for 5s before aborting. See this note in smtpserver/server.go: https://www.xmox.nl/xr/v0.0.9/smtpserver/server.go.html#L277
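If that mutual-close wait turns out to be the problem, the usual mitigation is to set a write deadline before closing, so the close_notify exchange is bounded. A sketch of the general technique (not necessarily what mox does; the package and function names are made up for illustration):

package connclose

import (
	"crypto/tls"
	"net"
	"time"
)

// closeBounded bounds how long closing a (possibly TLS) connection can block.
// conn is the original net.Conn; tlsConn is non-nil if STARTTLS upgraded it.
func closeBounded(conn net.Conn, tlsConn *tls.Conn) {
	// After this deadline, pending writes (including the TLS close_notify
	// alert) fail instead of hanging on a peer that is closing at the
	// same time.
	conn.SetWriteDeadline(time.Now().Add(time.Second))
	if tlsConn != nil {
		tlsConn.Close()
		return
	}
	conn.Close()
}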
I did some quick tests with a local mox and 30 netcats (plain text, no STARTTLS), and the behaviour was as intended: new connections were refused only while 30 connections were open. Once I closed a few down, I could make new connections again.
Could you check if some older connections are staying open? It should be possible to match "connection closed" log messages to the original "new connection" messages based on the cid.
The root cause of this was a bug in my client's LOGOUT code.
A large number of ESTABLISHED or TIME_WAIT sockets were listed on the mox host:
netstat -ntp | grep :993
However, it is likely that some yahoo will want to use more than 30 connections per IP address
— mox needs the ability to disable the connection limit
Maybe instead of disabling the connection limit for such cases, mox could support killing the oldest established (and probably stuck) connection to allow the new one to go through?
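Roughly this shape (purely hypothetical, none of these names exist in mox):

package connlimit

import (
	"net"
	"sync"
	"time"
)

// tracked is one open connection and the time it was accepted.
type tracked struct {
	conn    net.Conn
	started time.Time
}

// evictingLimiter is a hypothetical per-IP limiter that, at the limit, closes
// the oldest existing connection instead of refusing the new one.
type evictingLimiter struct {
	mu    sync.Mutex
	limit int
	open  map[string][]tracked // per remote IP, oldest first
}

// Admit registers conn for ip, evicting the oldest connection if needed.
// A real version would also remove entries when connections close normally.
func (l *evictingLimiter) Admit(ip string, conn net.Conn) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if len(l.open[ip]) >= l.limit {
		oldest := l.open[ip][0]
		l.open[ip] = l.open[ip][1:]
		// Closing the socket makes the stuck command loop's next read or
		// write fail, so its deferred cleanup finally runs.
		oldest.conn.Close()
	}
	l.open[ip] = append(l.open[ip], tracked{conn: conn, started: time.Now()})
}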
While that could be an option for a limit per exact IP address match, it wouldn't be great for the subnet matches. It may mean someone with another IP can get your connection dropped. Did you run into the limit without accidentally keeping too many connections open? I suppose we can make the connection limit configurable. But as an admin I wouldn't like to disable it.
— The connection limits are measured over all time, limiting the total number of connections per server lifetime that can be made from an IP and some CIDRs
— This can apparently not be configured but is hard-coded
— With 10 threads, each opening at most one connection at a time, a sequence of more than 30 connections over time triggers the per-address limit of 30, which likely means connection limiting by source address is not working as intended
— At no time are there more than 10 parallel IMAP connections, yet the limiter triggers, i.e. mox stops accepting any connections
Issues:
— the total allowed number of connections per server lifetime should not be limited
— the limit must be configurable and it must be possible to disable it
— the IP CIDRs are arbitrary
mox/imapserver/server.go:732
c.writelinef("* BYE too many open connections from your ip or network")
mox/imapserver/server.go:120
Connection limits are hard-coded per server process lifetime:
— per ipmasked1 (IPv4 /32 or IPv6 /64): 30
— per ipmasked2 (IPv4 /26 or IPv6 /48): 90
— per ipmasked3 (IPv4 /21 or IPv6 /32): 270
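For reference, the three granularities correspond roughly to these prefixes (a sketch using net/netip; this is not the code from mox's ratelimit package):

package main

import (
	"fmt"
	"net/netip"
)

// maskedPrefixes shows what the three granularities above correspond to:
// ipmasked1 = /32 (IPv4) or /64 (IPv6), ipmasked2 = /26 or /48,
// ipmasked3 = /21 or /32. The limits 30/90/270 apply per prefix.
func maskedPrefixes(addr netip.Addr) [3]netip.Prefix {
	addr = addr.Unmap()        // treat IPv4-mapped IPv6 addresses as IPv4
	bits := [3]int{32, 26, 21} // IPv4
	if addr.Is6() {
		bits = [3]int{64, 48, 32} // IPv6
	}
	var out [3]netip.Prefix
	for i, b := range bits {
		p, err := addr.Prefix(b)
		if err != nil {
			panic(err)
		}
		out[i] = p
	}
	return out
}

func main() {
	limits := [3]int{30, 90, 270}
	for i, p := range maskedPrefixes(netip.MustParseAddr("198.51.100.23")) {
		fmt.Printf("at most %d concurrent connections per %s\n", limits[i], p)
	}
}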