dougbtv / docker-asterisk

Some dockerfiles for whipping up an asterisk server

Dynamic IP address #35

Closed zburgermeiszter closed 8 years ago

zburgermeiszter commented 8 years ago

Hi Doug,

I'm trying to run it on Fedora 23, but the CoreOS boxes always get a random IP from the 192.168.122.x range.

My Ansible version is 1.9.4.

I hope you can help.

Kind regards, Zoltan

dougbtv commented 8 years ago

Sure thing -- so, that should be OK. Just make sure you update the inventory file so the host IP addresses match the DHCP-assigned addresses; in the clone it's high-availability/ansible/inventory/coreos, i.e. this file:

https://github.com/dougbtv/docker-asterisk/blob/master/high-availability/ansible/inventory/coreos
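For illustration, the inventory would look something along these lines (a hypothetical sketch -- the group and host names here are illustrative, not necessarily what the repo uses; the point is that each address must match what the box actually received):

```ini
; Hypothetical sketch of high-availability/ansible/inventory/coreos.
; Each ansible_ssh_host must match the IP the box actually has.
[coreos]
coreos1 ansible_ssh_host=192.168.122.101
coreos2 ansible_ssh_host=192.168.122.102
```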

And then... also update the variables to reflect that subnet, in high-availability/ansible/vars/coreos.yml -- which already uses a 192.168.122.0/24 subnet (I developed it on Fedora using libvirt, so I believe that's the default). It's this file:

https://github.com/dougbtv/docker-asterisk/blob/master/high-availability/ansible/vars/coreos.yml

But if you're using the cluster_creator.yml playbook, it should template the cloud-configs for you, using this file: high-availability/ansible/roles/libvirt-coreos/templates/libvirt-cloud-config.j2, i.e.

https://github.com/dougbtv/docker-asterisk/blob/master/high-availability/ansible/roles/libvirt-coreos/templates/libvirt-cloud-config.j2

That should set a static IP address in the subnet you set in the variables. It's set up to create 5 boxes, and boxes numbered 1-5 should be assigned IPs like 192.168.122.101 - 192.168.122.105.
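That addressing scheme can be sketched as a one-liner (SUBNET_PREFIX is an illustrative name, not necessarily the variable used in vars/coreos.yml):

```shell
# With a 192.168.122.0/24 subnet, box N of 5 gets the static address .10N,
# per the numbering described above.
SUBNET_PREFIX="192.168.122"
for boxnumber in 1 2 3 4 5; do
  echo "coreos${boxnumber} -> ${SUBNET_PREFIX}.10${boxnumber}"
done
```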

Are you using the cluster_creator.yml playbook?

zburgermeiszter commented 8 years ago

Yes, I'm using the cluster_creator.yml playbook with the laptop_coreos inventory file. I dug into the configs, and I can see the libvirt-cloud-config.j2 template, and it should configure a static IP, but for some reason it gets a dynamic IP.

I get a few SELinux AVC denial messages while running the cluster creator, but I'm not sure what's causing them, so I uploaded a log for you: cat /var/log/audit/audit.log | grep deni

Thanks

dougbtv commented 8 years ago

Hrmmm, yeah, I definitely developed it with SELinux disabled. I wonder if the two are related? It's possible.

Unfortunately, I'm not too interested in debugging SELinux compatibility at the moment. If you are, however -- I'd be more than happy to accept a PR that makes it SELinux-friendly.

Any chance you'd want to try it again with SELinux disabled? I'm wondering if it might be an issue with the VM not having enough privileges to set its IP statically? I'm unsure, tbh.

zburgermeiszter commented 8 years ago

As it is a VM, it should not be affected by any host networking settings, because the 192.168.122.x range only exists inside the VM environment.

I turned off SELinux for QEMU per https://fedoraproject.org/wiki/How_to_debug_Virtualization_problems :

sVirt can be disabled for the libvirt QEMU driver by editing /etc/libvirt/qemu.conf, uncommenting and setting security_driver='none', then restarting libvirtd with service libvirtd restart.
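The edit in question is a one-line change (the setting ships commented out):

```ini
# /etc/libvirt/qemu.conf -- uncomment and set to disable sVirt confinement
security_driver = "none"
```

Then restart libvirtd so the change takes effect: `service libvirtd restart` (or `systemctl restart libvirtd` on systemd hosts).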

Now I can't see SELinux errors, but my CoreOS boxes are still receiving dynamic IPs...

I don't need SELinux, as I just want to try it out.

dougbtv commented 8 years ago

OK, cool, that takes SELinux out of the equation. Interesting -- I wonder what's causing it to not assign the static IP...

For kicks, can you log into one of the CoreOS machines and cat the cloud-init file and see that it looks ok?

Good chance I can try it on Fedora 23 this weekend and see, but on two machines with F21 I get static IPs.


zburgermeiszter commented 8 years ago

I can connect to the dynamic IPs of the boxes with SSH, but cannot log in: it asks for a password, which does not exist, as it should authenticate with the SSH key. I configured my SSH key in coreos.yml.

If it works for you on F21, then I'll try it with that. The Fedora host runs in a VMware environment, but I don't think that affects this IP issue.

Thanks, Zoltan

dougbtv commented 8 years ago

Should be totally OK within VMware, I think. Hrmmm, as for SSHing in: you should be able to do "ssh core@192.168.122.X" and it should use your local keys; otherwise, specify the key with "ssh -i .ssh/path/to/ssh/key core@192.168.122.X"


zburgermeiszter commented 8 years ago

OK, I think I have found something.

I managed to SSH in to the boxes with core@192.168.122.x. I was a bit confused, because when I opened the box from the QEMU manager it showed the dynamic IP above the login prompt. I think the CoreOS guest first boots up with a dynamic IP and then configures the static IP for itself.

So in the following file: docker-asterisk/high-availability/ansible/roles/libvirt-coreos/tasks/libvirt_loaduserdata.yml:13 (https://github.com/dougbtv/docker-asterisk/blob/master/high-availability/ansible/roles/libvirt-coreos/tasks/libvirt_loaduserdata.yml#L13)

libvirt_userdatadir: "/var/lib/libvirt/images/coreos/coreos{{ boxnumber }}/openstack/latest"

On the Fedora host: cat /var/lib/libvirt/images/coreos/coreos0/openstack/latest/user_data (attached: https://github.com/dougbtv/docker-asterisk/files/83216/coreos0-user_data.txt)

The correct 192.168.122.200 is configured everywhere.
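For reference, a cloud-config that pins a static address on CoreOS typically carries a systemd-networkd unit along these lines (a hedged sketch, not the repo's actual template -- that is libvirt-cloud-config.j2; the unit name and addresses here are illustrative):

```yaml
#cloud-config
coreos:
  units:
    - name: 00-eth0.network
      runtime: true
      content: |
        [Match]
        Name=eth0

        [Network]
        Address=192.168.122.200/24
        Gateway=192.168.122.1
        DNS=192.168.122.1
```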

And now I can SSH to the correct IPs. Aaaaand, the original dynamic IP also works.

So I did some more investigation. On the Fedora host: nmap -sP 192.168.122.1/24 (output attached: https://github.com/dougbtv/docker-asterisk/files/83230/fedora-nmap.txt)

This shows both the random IPs and the correct static IPs in the 200-205 range.

So now I can SSH into, for example, the core0 box via both 192.168.122.106 and 192.168.122.200. But even when I SSH into the .200 IP, running ifconfig reports: eth0: inet 192.168.122.106 netmask 255.255.255.0 broadcast 192.168.122.255

Weird.

So maybe I was misinformed. Initially, when I got the SELinux errors, I was unable to SSH into the boxes via the 200-205 range, and when I opened the box from the QEMU Manager it showed the dynamic IP above the login prompt; after I disabled SELinux it still showed a dynamic IP. But now the 200-205 range seems to be fine. :)


Now bootstrap_ansible_coreos.yml ran without any issues :) (However, it would be good to remove the core boxes from the known_hosts file on cluster destroy.)
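A hypothetical teardown helper for that known_hosts point: drop the cluster's static IPs (the 200-205 range seen in this thread) from a known_hosts file so re-created boxes don't trigger host-key-changed warnings. This is a sketch, demonstrated against a throwaway file; the key material is placeholder text.

```shell
# Build a throwaway known_hosts with two cluster entries and one unrelated one.
KH=$(mktemp)
printf '%s\n' \
  '192.168.122.200 ssh-ed25519 AAAA-placeholder' \
  '192.168.122.205 ssh-ed25519 AAAA-placeholder' \
  'example.com ssh-rsa AAAA-placeholder' > "$KH"

# Keep only entries whose host field is NOT in 192.168.122.200-205.
grep -Ev '^192\.168\.122\.20[0-5] ' "$KH" > "$KH.new" && mv "$KH.new" "$KH"
cat "$KH"   # only the example.com entry remains
```

In practice, `ssh-keygen -R 192.168.122.200` removes a single host's keys from ~/.ssh/known_hosts (and keeps a .old backup), which a cluster-destroy play could loop over.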

Thanks for your help so far. The source of the issues was SELinux being enabled (or the known_hosts file). Thanks anyway.

dougbtv commented 8 years ago

Nice find!! Cool, happy to help. Getting CoreOS up and running and etcd synchronized are the hardest parts, to me... I'm hopeful the next plays will be smoother.


zburgermeiszter commented 8 years ago

Thanks for all your help.

dougbtv commented 8 years ago

No prob, Zoltan, thanks for closing er out.
