Closed tuminoid closed 1 year ago
I get a similar thing every so often. In each case it has resolved itself by the following day. This is on a 1GB Linode.
For example:
From: elm.dhpiggott.net <administrator@elm.dhpiggott.net>
To: administrator@elm.dhpiggott.net
Subject: [elm.dhpiggott.net] Status Checks Change Notice
Date: Wed, 16 Dec 2015 06:37:56 +0000 (UTC)
elm.dhpiggott.net -- Previously:
================================
✓ Reverse DNS is set correctly at ISP. [139.162.194.112 ↦
elm.dhpiggott.net]
elm.dhpiggott.net -- Currently:
===============================
✖ Your box's reverse DNS is currently [Not Set], but it should be
elm.dhpiggott.net. Your ISP or cloud provider will have
instructions on setting up reverse DNS for your box at
139.162.194.112.
And then:
From: elm.dhpiggott.net <administrator@elm.dhpiggott.net>
To: administrator@elm.dhpiggott.net
Subject: [elm.dhpiggott.net] Status Checks Change Notice
Date: Thu, 17 Dec 2015 07:04:01 +0000 (UTC)
elm.dhpiggott.net -- Previously:
================================
✖ Your box's reverse DNS is currently [Not Set], but it should be
elm.dhpiggott.net. Your ISP or cloud provider will have
instructions on setting up reverse DNS for your box at
139.162.194.112.
elm.dhpiggott.net -- Currently:
===============================
✓ Reverse DNS is set correctly at ISP. [139.162.194.112 ↦
elm.dhpiggott.net]
:+1: can confirm this problem
+1 Same here. I was thinking it had to do with Digital Ocean.
I have a forward DNS problem like this. One of my mail-in-a-box domains has a homepage on Google Sites, and MiaB is informing me that Google is changing its A-record every day. Every day.
Using external DNS for now, haven't looked into it yet, not sure if it's related.
Since a lot of people here seem to use DigitalOcean, I'll note that the VPS where I see the problem is hosted by Vultr.
I have the same issue on AWS
Same issue on Hetzner
I'm also seeing this issue on Vultr. I've got both IPv4 and IPv6 reverse DNS set, and the message varies: sometimes the v4, sometimes the v6, and sometimes both reverse addresses claim to be [Not Set].
I'm getting these emails constantly on DigitalOcean with Ubuntu 14; is there any setting to turn these off?
I haven't gotten this in a while. Anyone else?
It appears to have stabilized, I never see it anymore.
Brett Elliff
Written in smoke, translated by Warlock, and sent by carrier pigeon.
Haven't seen it for a while either.
Neither have I. I last got one on January 3rd, followed by a correction on January 4th.
Aaaand I spoke too soon. My DO box just did it again.
I had this for a while, but it hasn't happened for a couple of months now. Weird that it started happening again for you @JoshData
This happens to me nearly daily, any way of turning the emails off?
I have created a PR to test a possible solution to this issue: https://github.com/mail-in-a-box/mailinabox/pull/739
This version will try to find the authoritative server for the address first and use that server to do the PTR lookup. After doing some reading on the matter, I think this might solve the problem. However, it doesn't happen all the time.
I would appreciate it if somebody could give this a try.
The PR doesn't solve the issue. See PR for details, I will try and add some logging since I can now somewhat "reproduce" it.
I am also getting this on both my personal vultr vps and on a VM hosted on a dedicated server I rent with So You Start (a branch of OVH). Both have the ipv[4,6] reverse set correctly so I have to think that something is just timing out in the world that is the internet and the status checks just happen to poll at the wrong time.
I am also using Hetzner, but I don't have this issue. Both IPv4 and IPv6 reverse DNS are set, and status checks haven't reported any issues for the last month.
That's probably because it's a smaller company, but I need something stateside.
I'm having this issue with HostEurope every few days - it looks like the DNS request runs into a timeout or something else, because I also got this for the secondary DNS server.
I am experiencing this issue currently. Does anyone know of a fix?
Same issue on DigitalOcean. Sometimes I get the emails every day, sometimes with weeks in between. I remember when refreshing the admin console, it would flip back and forth a lot; could it be just chance that the email happens to be sent when it's in the wrong state? Would it be possible to check multiple times before sending?
This is checked multiple times. It's a problem with the reliability of the reverse DNS servers.
This thread has been surprisingly silent during the last few months, although I've been receiving these emails almost on a daily basis. Is this issue fixed for everyone else? Could #743 be considered for merging?
For me the issues went away. I started hosting in a second data center with a different DNS server; those boxes have the same problem. I reopened the PR.
I just set up a box to test, and was very surprised to start hearing that the reverse DNS was not correct. I'm running on DO myself, and running management/status_checks.py --show-changes seems to be having problems 1 in 5 times or so. Meanwhile, I have yet to see dig in ptr 190.31.xx.yy.in-addr.arpa @ns1.digitalocean.com. +short fail.
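For reference, the name a PTR query like the dig command above actually asks for is the address with its octets reversed under in-addr.arpa. Python's standard library can compute that name; a tiny illustration (not MiaB code), using the IP from the reports above:

```python
import ipaddress

def ptr_name(ip):
    """Return the name a PTR lookup queries for this address.

    IPv4 octets are reversed under in-addr.arpa; IPv6 nibbles are
    reversed under ip6.arpa.
    """
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_name("139.162.194.112"))  # 112.194.162.139.in-addr.arpa
```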
I don't know if it's relevant, but I'm not serving DNS from the box—I'm using an external provider.
I get these mails every other day on Vultr too. The box seems unaffected, but the constant status emails get annoying after a while.
Just to keep this thread going: I am hosting on AWS and receive these reverse DNS email notifications often. As with others in this thread, the situation reports a correction the next day, although no changes to the DNS have been made.
I have recently migrated from a Leaseweb physical server to a VULTR VPS. Prior to the move I had never seen this warning. After the move I see it every few days.
I keep having this problem ever since I moved some of the services behind some of my domains to AWS. There, the load balancer works based on a DNS name, and the backing IP keeps changing every so often:
https://stackoverflow.com/questions/3821333/amazon-ec2-elastic-load-balancer-does-its-ip-ever-change
I can see two ways out of this:
1. Let me mark domains that I do not want to host on Mail-in-a-Box as "mail only". This has the added advantage of cleaning up the status checks within the web UI as well, which currently makes it hard to find the status information that is actually relevant to me, in between all the superfluous warnings.
2. Optimize the diffing algorithm to not care if the IP changed, as long as the status of the domain remains the same ("invalid"). I.e., don't just do a simple text diff.
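The second option could be as simple as masking IP literals before diffing, so a changing load-balancer address doesn't register as a status change. A rough sketch of the idea (the normalize helper is hypothetical, not MiaB code):

```python
import re

# Matches IPv4 literals; IPv6 would need a second, messier pattern.
IPV4 = re.compile(r'\b(?:\d{1,3}\.){3}\d{1,3}\b')

def normalize(status_line):
    """Mask addresses so the text diff keys on the status, not the IP."""
    return IPV4.sub('<ip>', status_line)

a = normalize("✖ domain.tld does not point to 52.1.2.3.")
b = normalize("✖ domain.tld does not point to 52.9.8.7.")
print(a == b)  # True: same status, different IP, no notification
```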
Here it is, the end of 2019 and I've been using MIAB for years. It's always sent me these emails then corrected itself and sent another email a day later. It's the most annoying thing about having a MIAB box but I've never found a way to stop it from happening. Honestly, I'd even take some way to stop it from ever sending me those emails again - I know what my DNS records are set to, I set them!
For me these reverse DNS changes stopped a while ago... can't really tell when, but thinking about it now, it's been a whiiiiiile 😅
Oh, it's not just reverse DNS though... it's "this domain not pointing to the IP" - even though it DOES point to the IP - then a day later it's fixed, so it sends another email that it points to the IP, but hey look! This other domain now doesn't point to the IP, but the next day - it's a miracle! It points to the IP! It's amazingly frustrating.
Maybe we should change the checking code in status_checks.py:

L9:
import dns.reversename, dns.resolver, socket

L406-L407:
existing_rdns_v4 = socket.gethostbyaddr(env['PUBLIC_IP'])[0]
existing_rdns_v6 = socket.gethostbyaddr(env['PUBLIC_IPV6'])[0] if env.get("PUBLIC_IPV6") else None

Source: http://searchsignals.com/tutorials/reverse-dns-lookup/#pythonic-reverse-ip-lookups

Or with external DNS (not our bind9/named) - the failure still exists in focal/20.04 (same lines; no need to import socket in L9):

existing_rdns_v4 = query_dns(dns.reversename.from_address(env['PUBLIC_IP']), "PTR", '[Not Set]', at="8.8.8.8")
existing_rdns_v6 = query_dns(dns.reversename.from_address(env['PUBLIC_IPV6']), "PTR", '[Not Set]', at="8.8.8.8") if env.get("PUBLIC_IPV6") else None
I'm actually testing these two different methods on two different servers. But because it's a really rare misbehaviour and reproducing it is almost impossible, it could take some time..
What do you think about that @JoshData and @yodax - maybe the native/builtin socket is the better approach here?
best regards realizelol
I got an error while I was using python socket:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.8/multiprocessing/pool.py", line 51, in starmapstar
return list(itertools.starmap(args[0], args[1]))
File "management/status_checks.py", line 367, in run_domain_checks_on_domain
check_primary_hostname_dns(domain, env, output, dns_domains, dns_zonefiles)
File "management/status_checks.py", line 442, in check_primary_hostname_dns
existing_rdns_v6 = socket.gethostbyaddr(env['PUBLIC_IPV6'])[0] if env.get("PUBLIC_IPV6") else None
socket.herror: [Errno 2] Host name lookup failure
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "management/status_checks.py", line 1016, in <module>
run_and_output_changes(env, pool)
File "management/status_checks.py", line 862, in run_and_output_changes
run_checks(True, env, cur, pool)
File "management/status_checks.py", line 77, in run_checks
run_domain_checks(rounded_values, env, output, pool)
File "management/status_checks.py", line 346, in run_domain_checks
ret = pool.starmap(run_domain_checks_on_domain, args, chunksize=1)
File "/usr/lib/python3.8/multiprocessing/pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "/usr/lib/python3.8/multiprocessing/pool.py", line 768, in get
raise self._value
socket.herror: [Errno 2] Host name lookup failure
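If the socket-based approach were kept, that socket.herror would need to be caught and mapped to the same '[Not Set]' sentinel the dnspython path returns, instead of crashing the whole check. A minimal sketch (safe_rdns is a hypothetical helper, not MiaB code):

```python
import socket

def safe_rdns(ip):
    """Reverse-resolve ip, returning the MiaB sentinel on lookup failure."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror, OSError):
        # No PTR record, NXDOMAIN, or a resolver timeout all end up here.
        return '[Not Set]'
```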
But while I was adding mta-sts A and AAAA entries, they wouldn't be recognized by the local server, while "8.8.8.8" had already picked them up a long time ago.
(host -t a mta-sts.domain.tld
vs. host -t a mta-sts.domain.tld 8.8.8.8)
I also previously added DNS forwarders to bind9 (I thought it should forward unknown domain names and ask "8.8.8.8" for them), but this wasn't working at all?!
/etc/bind/named.conf.options
(inside the options block)
forwarders {
8.8.8.8;
2001:4860:4860::8888;
};
After that, I ran a DNS update (tools/dns_update) and the mta-sts entries were recognized.
So I'm actually doing this in daily_tasks.sh at L11:
# Do a dns + web update
[ -f /etc/cron.daily/mailinabox-dnssec ] && rm -f /etc/cron.daily/mailinabox-dnssec
tools/dns_update &>/dev/null
tools/web_update &>/dev/null
Note: This will delete /etc/cron.daily/mailinabox-dnssec (which was running at 6:25am):
cat /etc/crontab | grep "cron\.daily"
25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
Another method would be to move it to cron.d with specific times. But I think running it in daily_tasks.sh should be fine. :p
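The cron.d alternative would look something like this (the file path, install location, and run time below are placeholders, not MiaB defaults):

```
# /etc/cron.d/mailinabox-dns-web -- hypothetical drop-in, runs daily at 03:30
# Adjust the checkout path to where Mail-in-a-Box is installed.
30 3 * * * root cd /root/mailinabox && tools/dns_update >/dev/null 2>&1 && tools/web_update >/dev/null 2>&1
```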
best regards realizelol
This "error" started occurring regularly on my MiaB installation about a month ago, so I thought I'd invest some time in this rather annoying but innocuous issue. After a lot of small-scale experimentation in my own modified version of what status_checks.py is doing, I discovered that only by running the rndc flush command (https://github.com/mail-in-a-box/mailinabox/blob/master/management/status_checks.py#L59), followed by the reverse lookups, was I able to reproduce the error.
Sometimes it works fine with the flush and sometimes it does not. But when it failed for me, it was throwing the dns.resolver.NoNameservers exception. Usually calling it a second time worked fine. It's possible (but I haven't tested it thoroughly) that increasing the resolver.timeout to something greater than 5 seconds would fix this. I went a different route in my testing, which was to modify the two reverse DNS calls to override the resolver that's being used by adding at='8.8.8.8' to each of those two calls:
https://github.com/mail-in-a-box/mailinabox/blob/master/management/status_checks.py#L444-L445
This seems to work, or at least I have seen no failures since I made this change.
Someone, please correct me if I'm wrong, but I believe this is still a valid test, because ultimately we only want to know "is reverse DNS set up correctly?", and using Google's public DNS for that should suffice.
I'd really like to know, like everyone else, why flushing the cache triggers this error "sometimes". Busy network? Busy DNS servers? Busy MiaB server?
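On the "busy server" theory, one mitigation would be to retry the lookup a couple of times before trusting a failure. A generic sketch (check_with_retry and its callable argument are hypothetical, not MiaB code):

```python
import time

def check_with_retry(lookup, attempts=3, delay=2.0):
    """Call a flaky DNS lookup up to `attempts` times before giving up.

    `lookup` is any zero-argument callable returning the PTR name,
    or None on failure.
    """
    for i in range(attempts):
        result = lookup()
        if result is not None:
            return result
        if i + 1 < attempts:
            time.sleep(delay)  # give bind/nsd a moment after the cache flush
    return '[Not Set]'
```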
BTW, I'm running on Linode, Ubuntu 18.04, v0.46 (although I've been seeing the issue since v0.45).
It may not have anything to do with rndc flush directly. If the DNS result is already cached, then the status check isn't necessarily checking anything. So we clear the cache to make sure we're getting current DNS info. The DNS error could be coming from any point on the path from the root nameservers back down to the box.
I agree that using a public DNS server for this test would be fine (in part because reverse DNS is not protected by DNSSEC, so we don't need to use a local DNSSEC-capable resolver as we do for other tests). But we don't seem to need it for any other test.
I think your idea that the box itself might be busy is an interesting one that we may not have considered before. Maybe right after clearing the cache, either bind isn't ready or nsd receives too many requests at once.
I'm no longer using Mail-in-a-Box. I can see that it was probably fixed 2 years ago, and the issue has gathered no more comments. I'm closing it. Feel free to reopen if it still happens.
Reverse DNS status varies on a daily basis, even though the DNS records have not changed. This causes unnecessary status emails. Box functionality itself seems to be just fine.
This is using DigitalOcean DNS. MIAB is running on an Ubuntu 14.04 LTS box.