Closed: mkrecek234 closed this issue 2 months ago
Is this a permanent issue or does it occur after the mailcow stack is running for some time? I had a similar issue where logins were fast after a fresh start of the stack, but started to take several seconds after some time. In my case this was related to the innodb buffer pool of the mysql db. See details here:
https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html
After adjusting the values for innodb_buffer_pool_size, innodb_old_blocks_time, read_rnd_buffer_size, and some others, the issue went away.
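For reference, these variables are set in the MariaDB/MySQL configuration (in mailcow-dockerized typically a file under data/conf/mysql/). The values below are purely illustrative, not recommendations; size the buffer pool to the RAM actually available to the database container:

```ini
[mysqld]
# Illustrative values only; a common starting point is ~50-70% of the
# RAM available to the database container.
innodb_buffer_pool_size = 2G
# Milliseconds a page must stay in the "old" sublist before it can be
# promoted to the "new" sublist of the buffer pool.
innodb_old_blocks_time = 1000
read_rnd_buffer_size = 4M
```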
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Unfortunately, it is also extremely slow after a fresh restart, so it's probably not a buffering issue...
I have recently noticed a similar problem, ~but mainly when logging in with FIDO2 (fido2_login)~. Logins can take a good 20 to 30 seconds.
My problem is that I cannot find any logs in connection with the login. In the mailcow UI logs, I can only see that the login was ultimately successful, but not how long the login took.
Any help would be appreciated...
EDIT: Issue is not limited to FIDO2 login and happens without 2FA as well...
EDIT 2: My impression is rather that only the first login takes this long, and every subsequent attempt shortly afterwards works faster again. Exactly the opposite, so to speak, of what @beerlao described.
I can confirm your observations, I'm still experiencing the problem even after fine tuning the database. The symptoms:
First login takes ages and sometimes runs into the reverse proxy's timeout. After the first login, every subsequent login to the same account is fast, at least for some time. If there is no login for this particular account for a while, logging in is slow again. This problem seems to be related only to the web frontend, not to SOGo and not to IMAP. I suspect that retrieving the last logins shown in the interface becomes slow after some time.
My awful workaround is running the following cronjob every 5 minutes for the affected accounts:
```shell
curl -X GET "http://localhost:8088/api/v1/get/last-login/$USER/7" \
  -H 'Content-Type: application/json' \
  -H "X-API-Key: $API_KEY"
```

This call logs into the account and asks for the last 7 logins.
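The same workaround can also be scripted, e.g. to warm several affected accounts in one cron run. A minimal Python sketch, where the host, API key, and user list are placeholders you must replace with your own values:

```python
# Pre-warm the last-login lookup for several accounts via the mailcow
# API, mirroring the curl workaround above.
# API_BASE, API_KEY, and USERS are placeholders, not real values.
import urllib.request

API_BASE = "http://localhost:8088"
API_KEY = "REPLACE_ME"
USERS = ["user@example.com", "other@example.com"]

def last_login_url(base, user, days=7):
    """Build the get/last-login endpoint URL used by the workaround."""
    return f"{base}/api/v1/get/last-login/{user}/{days}"

def warm(user):
    """Fetch the last logins for one user; this is a real HTTP request."""
    req = urllib.request.Request(
        last_login_url(API_BASE, user),
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.status

# Usage (on the mailcow host; performs real HTTP requests):
#   for user in USERS:
#       warm(user)
```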
I can agree 100% with the symptoms described. What surprises me the most is that the problem only seems to exist for some users and not for everyone.
I also have this issue, but it only affects one user account. Deleting and recreating the user account does not fix it.
I have the same issue. I did some tracing, and it turns out that when requesting /user, 150 requests are made to https://dfdata.bella.network/lookup, with an average duration of 90 ms each (13.5 seconds in total). They seem to come from this line: https://github.com/mailcow/mailcow-dockerized/blob/36b5cccd186090d726de62b6b00d1e842e67aacd/data/web/inc/functions.inc.php#L287
I am not familiar with the mailcow codebase, but I assume this is used to get the country of origin of each IP shown in the login history. This would also explain why some users have this problem while others don't.
EDIT: I tried `DEL IP_SHORTCOUNTRY` in Redis in an attempt to reproduce the issue with a recently logged-in account, and it worked. This further indicates that this is where the problem comes from.
Deleting the login history fixes the issue (temporarily).
When deleting the history, connection time goes from 10-25s to 100-200ms.
What I don't understand yet is that the login history is supposed to be fetched by a separate call to the JSON API at /api/v1/get/last-login; it isn't part of what /user returns, so it shouldn't slow that request down.
So, on my side the issue comes from:
https://github.com/mailcow/mailcow-dockerized/blob/36b5cccd186090d726de62b6b00d1e842e67aacd/data/web/inc/header.inc.php#L52
which declares the last login as a global variable. This in turn calls:
https://github.com/mailcow/mailcow-dockerized/blob/36b5cccd186090d726de62b6b00d1e842e67aacd/data/web/inc/functions.inc.php#L254
which, for each IP used to log in during the last 7 days, tries to get the cached location from Redis and calls https://dfdata.bella.network/lookup if the value isn't cached.
At first glance, removing that line in header.inc.php seems to solve the issue without any obvious adverse effect. The corresponding variable:
https://github.com/mailcow/mailcow-dockerized/blob/36b5cccd186090d726de62b6b00d1e842e67aacd/data/web/templates/base.twig#L149
doesn't seem to be used anywhere.
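The lookup pattern described above can be sketched as follows (the names are illustrative, not mailcow's actual code). It shows why a login history full of uncached IPs makes the page load scale with the number of IPs: every cache miss triggers one blocking network lookup.

```python
# Sketch of a per-IP country cache with a network fallback: N uncached
# IPs cost roughly N * lookup_latency, which matches the ~150 * 90 ms
# measured above. Names are hypothetical, not mailcow's implementation.

def resolve_countries(ips, cache, lookup):
    """Return {ip: country}; 'lookup' is called only on cache misses."""
    result = {}
    for ip in ips:
        country = cache.get(ip)
        if country is None:
            country = lookup(ip)  # blocking network call per miss
            cache[ip] = country
        result[ip] = country
    return result

if __name__ == "__main__":
    calls = []
    def fake_lookup(ip):
        calls.append(ip)          # stand-in for the HTTP request
        return "DE"
    cache = {"203.0.113.1": "FR"}  # one IP already cached
    ips = ["203.0.113.1", "203.0.113.2", "203.0.113.2"]
    resolve_countries(ips, cache, fake_lookup)
    print(len(calls))  # only the single uncached IP hits the network -> 1
```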
A double-check would be welcome; I'd like to know if last_login can be deleted in header.inc.php (and in the twig template), or if I'm missing something.
While I don't know if I should remove last_login, I went ahead and removed it. The single user account I had issues with now logs in instantly. I did notice that going to the 'App Passwords' tab or the 'Temporary email aliases' tab now shows a loading screen for about 30 seconds. But thank you @PierrePlt for the tip. I'm leaving it removed for now unless I see any adverse side effects.
I guess the delay when clicking on another tab comes from the call to api/v1/get/last-login, which still takes a while even though it no longer blocks the loading of the main user page.
If you don't need to see the flag next to each IP address, I guess you could remove lines 285-311 of functions.inc.php:
https://github.com/mailcow/mailcow-dockerized/blob/36b5cccd186090d726de62b6b00d1e842e67aacd/data/web/inc/functions.inc.php#L285-L311
That's not an acceptable long-term solution, but if the loading time is too big an issue, it should fix things for now.
My issue is that I move through many different Wi-Fi networks throughout the day at university, and my device gets a different IPv6 address every time. The same happens when moving through different APs at home. Because of this, the login history is flooded with unique IPv6 addresses, and logging in, loading the history, and actually being able to use the page and its other tabs sometimes takes several minutes, which is really annoying.
Would it be possible to look up and cache at least only the /64 IPv6 network? Most likely, all IPs in one /64 come from the same country, or even the same customer. I also agree with #5888, not only because it would solve the loading issue, but also for privacy reasons.
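The proposed /64 aggregation can be sketched with Python's ipaddress module (illustrative, not mailcow code): the cache key becomes the /64 network for IPv6, so addresses that only differ in the rotating interface identifier share a single cached lookup.

```python
# Derive a lookup-cache key that aggregates IPv6 addresses by /64
# network while keeping IPv4 addresses as-is. Hypothetical helper,
# not part of mailcow.
import ipaddress

def cache_key(ip_str):
    """Return the /64 network for IPv6 addresses, the address itself for IPv4."""
    ip = ipaddress.ip_address(ip_str)
    if ip.version == 6:
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return str(ip)

print(cache_key("2001:db8::1"))          # 2001:db8::/64
print(cache_key("2001:db8::dead:beef"))  # same /64 -> same cache entry
print(cache_key("198.51.100.7"))         # 198.51.100.7
```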
Contribution guidelines
I've found a bug and checked that ...
Description
Logs:
You can see that there are 8 seconds between awaiting_tfa_confirmation and verified_totp_login. The issue also exists without two-factor login, as I can see from other logins.
Steps to reproduce:
Which branch are you using?
master
Operating System:
Ubuntu 22.04
Server/VM specifications:
AMD EPYC 7282 16-Core Processor, 6 cores, 16 GB
Is Apparmor, SELinux or similar active?
no
Virtualization technology:
None
Docker version:
24.0.7, build afdd53b
docker-compose version or docker compose version:
1.29.2, build unknown
mailcow version:
2024-01
Reverse proxy:
Apache
Logs of git diff:
Logs of iptables -L -vn:
Logs of ip6tables -L -vn:
Logs of iptables -L -vn -t nat:
Logs of ip6tables -L -vn -t nat:
DNS check: