I am not sure about the issue here. We create a CRC-specific bridge that should have no influence on your environment, so can you explain more what 'Boom!' actually is?
Can you provide some output from the docker build?
Also, what is the platform you run this on and where does 'docker' come from (centos 7)?
I am not sure about the issue here. We create a CRC-specific bridge that should have no influence on your environment, so can you explain more what 'Boom!' actually is?
I think I described the issue, please let me know what you are missing:
Expected behavior: docker build has network access.
Actual behavior: docker build fails due to network issues.
An output from the build would be e.g. curl failing to download a file.
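For reference, a minimal reproduction looks roughly like this (the base image and URL are only examples, not the exact build that failed):

FROM centos:7
RUN curl -fsSL https://quay.io -o /dev/null

$ docker build .
...
curl: (6) Could not resolve host: quay.io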
Also, what is the platform you run this on and where does 'docker' come from (centos 7)?
Again, I think I posted that already. Please let me know what you are missing:
System:
Linux XXX 3.10.0-957.10.1.el7.x86_64 #1 SMP Thu Feb 7 07:12:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Docker:
$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-96.gitb2f74b2.el7.x86_64
 Go version:      go1.10.8
 Git commit:      b2f74b2/1.13.1
 Built:           Tue Apr 2 21:01:07 2019
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-96.gitb2f74b2.el7.x86_64
 Go version:      go1.10.8
 Git commit:      b2f74b2/1.13.1
 Built:           Tue Apr 2 21:01:07 2019
 OS/Arch:         linux/amd64
 Experimental:    false
Docker itself comes from RHEL:
docker-1.13.1-96.gitb2f74b2.el7.x86_64
docker-client-1.13.1-96.gitb2f74b2.el7.x86_64
docker-common-1.13.1-96.gitb2f74b2.el7.x86_64
Are you using this from the Red Hat network?
The problem is that when you use dnsmasq with NetworkManager, nameserver 127.0.0.1 becomes the only entry in /etc/resolv.conf. Now if you create any container on this host (docker or podman), it first checks what the entry in /etc/resolv.conf is; if it is 127.0.0.1, it puts 8.8.8.8 into the container's /etc/resolv.conf, which doesn't have network connectivity if the user is on a corporate network, because those networks often block other nameservers.
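To make this concrete, this is roughly what it looks like on such a host (busybox is only used here as a convenient throwaway image):

$ cat /etc/resolv.conf
nameserver 127.0.0.1
$ docker run --rm busybox cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4

Docker typically falls back to the Google public resolvers 8.8.8.8/8.8.4.4 when the host only lists a loopback nameserver.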
Are you using this from the Red Hat network?
Yes, I am running this while attached to a local Red Hat LAN. No VPN.
The problem is that when you use dnsmasq with NetworkManager, nameserver 127.0.0.1 becomes the only entry in /etc/resolv.conf. Now if you create any container on this host (docker or podman), it first checks what the entry in /etc/resolv.conf is; if it is 127.0.0.1, it puts 8.8.8.8 into the container's /etc/resolv.conf, which doesn't have network connectivity if the user is on a corporate network, because those networks often block other nameservers.
I am not sure what you mean by the last paragraph. My expectation towards CRC would be that it works inside the Red Hat network, just like minikube and minishift work right now inside the Red Hat network.
CRC would work, but you filed an issue related to docker build failing.
And this is due to how 'docker' interprets a 127.0.0.1 entry in /etc/resolv.conf. This is not something we can handle here... it would have to be filed against the upstream.
Note: since NetworkManager with the installer and CRC uses dnsmasq=true, it will use 127.0.0.1 as the first entry in /etc/resolv.conf.
It is Docker that replaces this with 8.8.8.8. Since you are on the RH network, this nameserver is blocked. See: https://github.com/moby/moby/issues/6388
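A quick way to check whether that fallback nameserver is really the problem (dig comes from the bind-utils package on RHEL/CentOS; the hostname is just an example):

$ dig +short quay.io @8.8.8.8      # times out if the network blocks external resolvers
$ dig +short quay.io @127.0.0.1    # works, answered by the local dnsmasq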
A possible solution is to provide an override for the DNS: https://askubuntu.com/questions/475764/docker-io-dns-doesnt-work-its-trying-to-use-8-8-8-8
$ sudo vi /etc/docker/daemon.json
{
  "dns": ["10.0.0.1", "10.0.0.2"]
}
Note: replace the values with the nameservers as allowed by your network.
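Note that the daemon only re-reads /etc/docker/daemon.json on restart, so after editing it you still need:

$ sudo systemctl restart docker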
Closing this issue as this is unrelated to CRC.
docker and podman apparently have a --dns option which can be used as well: https://docs.docker.com/v17.09/engine/userguide/networking/default_network/configure-dns/
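For containers started directly, this can be passed per invocation; as far as I can tell the classic docker build has no --dns flag, so for builds the daemon.json setting above is the one that matters (the nameserver below is a placeholder for one allowed on your network):

$ docker run --dns 10.0.0.1 --rm busybox nslookup quay.io
$ podman run --dns 10.0.0.1 --rm busybox nslookup quay.io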
This is all fine, but I still think that CRC should not break a working docker installation. And while the problem manifests in Docker, installing CRC was the cause of this problem. So I would guess it is a CRC issue.
@ctron Should a warning at the end of crc start (with a link to a document) be good enough, since from crc we can't fix it and we are bound to use dnsmasq?
@praveenkumar That would at least be helpful. And maybe a crc stop should revert to the original state of the host system.
I faced this issue too. It would be nice for the development flow to have crc and docker both working at the same time on a work laptop.
@AndrienkoAleksandr yes, but it is not a limitation of CRC; if you run your own local DNS server you will face the same. Please check https://github.com/containers/libpod/issues/4508 and https://github.com/moby/moby/issues/23910. What we can do is document it and put a note for the user.
And maybe a crc stop should revert to the original state of the host system.
@ctron we are trying to implement a crc cleanup command to revert all the system changes.
we are trying to implement a crc cleanup command to revert all the system changes.
This is because the setting is changed using crc setup and not crc start (as mentioned above). Therefore removing this setting with crc stop would be wrong, as that would require a re-run of the setup.
I believe this should also be made clearer during the crc setup steps... and perhaps by detecting whether docker is installed.
Note: moving this to crc start and crc stop is a bad idea, as that would imply admin/root privileges would be needed to make this change.
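For reference, the intended split looks roughly like this (crc cleanup is the proposed command, not something that exists yet):

$ crc setup      # one-time host changes (NetworkManager/dnsmasq), needs elevated privileges
$ crc start      # create and start the VM as a regular user
$ crc stop       # stop the VM; the host changes from setup stay in place
$ crc cleanup    # proposed: revert the host changes made by crc setup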
@AndrienkoAleksandr yes, but it is not a limitation of CRC; if you run your own local DNS server you will face the same. Please check containers/libpod#4508 and moby/moby#23910. What we can do is document it and put a note for the user.
But why doesn't minishift suffer from this issue?
But why doesn't minishift suffer from this issue?
@ctron minishift never used dnsmasq. This is more of a requirement from the OpenShift 4 side, where we need to create the cluster ourselves before bundling it, and the cluster has to use a valid domain name instead of an IP, which can't be changed at a later stage.
Minishift relies on external DNS provided by a service called xip.io/nip.io that just resolves to the IP address that is part of the actual FQDN. This means 'any' DNS that can resolve this domain would work, with the caveat that the IP address can never change during the lifetime of the cluster (restarts included). However, for CRC we use a local DNS that resolves api.crc.testing to a local IP address. The IP address is allowed to change, as long as the local DNS has the right IP address. We use dnsmasq for this since others had issues binding on port 53 when libvirt was already using it (socket reuse does not work with other servers).
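To illustrate the difference (the IP addresses below are only examples): a nip.io name carries its own answer, while the CRC names are answered by the local dnsmasq:

$ dig +short 192.168.42.10.nip.io
192.168.42.10
$ dig +short api.crc.testing @127.0.0.1
192.168.130.11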
If this is just for a single DNS name, then why not use the "hosts" file?
@ctron it is not for a single DNS name, otherwise we could have used the hosts file. The hosts file doesn't provide wildcard entries for a domain name, which is required in our case.
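For illustration, a wildcard in a dnsmasq snippet covers a domain and all of its subdomains, which /etc/hosts cannot express (the IP address and exact contents here are examples, not necessarily what crc setup writes):

address=/apps-crc.testing/192.168.130.11
address=/crc.testing/192.168.130.11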
@praveenkumar Yes indeed. From @gbraad's comment it looked to me as if you only had an issue with a single hostname:
However, for CRC we use a local DNS that resolves api.crc.testing to a local IP address.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
The issue is obvious for this specific hostname, but all hostnames in crc.testing and apps-crc.testing are affected.
Closing with wontfix. crc cleanup is tracked in another issue.
Last steps:
crc (set up according to https://code-ready.github.io/crc/)
docker build
Expected behavior: docker build has network access.
Actual behavior: docker build fails due to network issues.
Workaround:
Remove the file with crc in the file name from /etc/NetworkManager
sudo systemctl reload NetworkManager
sudo systemctl restart docker