bilelmsekni / OpenStack-Folsom-Install-guide

A full installation guide for OpenStack Folsom with Quantum

no IP on VM, Can't reach the metadata server, can't ping & ssh #32

Closed enarciso closed 11 years ago

enarciso commented 11 years ago

Great document! But I'm having trouble towards the end and would appreciate it if you could provide some insights.

I followed the VLAN/2NICs branch, but I'm unable to get an IP allocated to my VM (50.50.1.0/24). The VM's console shows that the instance cannot reach the metadata server. I followed your recommendations from #22 and confirmed that I can reach the metadata server from my network node.

Below are a few configs/outputs from my nodes; I'd appreciate any help. Thank you.

Controller Node

root@controller:~# route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 0.0.0.0         10.46.200.1     0.0.0.0         UG    100    0        0 eth0
 10.46.200.0     0.0.0.0         255.255.254.0   U     0      0        0 eth0
 50.50.1.0       10.46.200.1     255.255.255.0   UG    0      0        0 eth0
 100.10.10.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
root@controller:~# ovs-vsctl show
304d97ba-a9b2-499e-b9f0-afefa1eb17a5
    ovs_version: "1.4.0+build0"
root@controller:~# grep -v "#" /etc/quantum/api-paste.ini
[composite:quantum]
/: quantumversions
/v2.0: quantumapi_v2_0

[composite:quantumapi_v2_0]
use = call:quantum.auth:pipeline_factory
noauth = extensions quantumapiapp_v2_0
keystone = authtoken keystonecontext extensions quantumapiapp_v2_0

[filter:keystonecontext]
paste.filter_factory = quantum.auth:QuantumKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 100.10.10.51
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = openstack

[filter:extensions]
paste.filter_factory = quantum.extensions.extensions:plugin_aware_extension_middleware_factory

[app:quantumversions]
paste.app_factory = quantum.api.versions:Versions.factory

[app:quantumapiapp_v2_0]
paste.app_factory = quantum.api.v2.router:APIRouter.factory

Network Node

root@Network:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.46.200.1     0.0.0.0         UG    100    0        0 br-ex
10.46.200.0     0.0.0.0         255.255.254.0   U     0      0        0 br-ex
100.10.10.0     0.0.0.0         255.255.255.0   U     0      0        0 br-eth1
root@Network:~# ovs-vsctl show
bc8dc4db-e84d-4df9-885d-66547c9411f8
    Bridge br-ex
        Port "qg-9791a7f9-cd"
            Interface "qg-9791a7f9-cd"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge br-int
        Port "tap0bbc49ed-d6"
            tag: 1
            Interface "tap0bbc49ed-d6"
                type: internal
        Port "qr-e736f376-63"
            tag: 1
            Interface "qr-e736f376-63"
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
    ovs_version: "1.4.0+build0"
root@Network:~# grep -v "#" /etc/quantum/l3_agent.ini
[DEFAULT]

interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver

auth_url = http://100.10.10.51:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = openstack
metadata_ip = 10.46.200.193
metadata_port = 8775

root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
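For what it's worth, in Folsom the L3 agent implements metadata access by installing a DNAT rule inside the router namespace that redirects 169.254.169.254:80 to `metadata_ip:metadata_port`. A quick sanity check that the rule actually exists (a sketch, not from the original post; the router UUID is the one quoted later in this thread, substitute your own):

```shell
# Run on the network node. The namespace name is an example from this
# thread; list yours with `ip netns list`.
ROUTER_NS=qrouter-91ad119b-83ad-41b9-91ac-04248f943fb6

# Look for a DNAT rule rewriting 169.254.169.254:80
# to metadata_ip:metadata_port (here 10.46.200.193:8775).
ip netns exec "$ROUTER_NS" iptables -t nat -L -n | grep 169.254.169.254
```

If no rule shows up, the L3 agent likely never processed the router; check /var/log/quantum/l3-agent.log for errors.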

Compute Node

root@compute:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.46.200.1     0.0.0.0         UG    100    0        0 eth0
10.46.200.0     0.0.0.0         255.255.254.0   U     0      0        0 eth0
100.10.10.0     0.0.0.0         255.255.255.0   U     0      0        0 br-eth1
root@compute:~# ovs-vsctl show
fb3124bd-56fb-46ea-ae56-60d3f7af112a
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvo77fc2e6f-35"
            tag: 1
            Interface "qvo77fc2e6f-35"
        Port "int-br-eth1"
            Interface "int-br-eth1"
        Port "qvo6e4b4d01-18"
            tag: 1
            Interface "qvo6e4b4d01-18"
        Port "qvo3650512e-f8"
            tag: 1
            Interface "qvo3650512e-f8"
    Bridge "br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
    ovs_version: "1.4.0+build0"

Checking metadata support:

root@network:/etc/quantum# ping 10.46.200.193
PING 10.46.200.193 (10.46.200.193) 56(84) bytes of data.
64 bytes from 10.46.200.193: icmp_req=1 ttl=64 time=0.263 ms
^C
--- 10.46.200.193 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms

root@network:/etc/quantum# ip netns exec qrouter-91ad119b-83ad-41b9-91ac-04248f943fb6 ping 10.46.200.193
PING 10.46.200.193 (10.46.200.193) 56(84) bytes of data.
64 bytes from 10.46.200.193: icmp_req=1 ttl=64 time=11.3 ms
64 bytes from 10.46.200.193: icmp_req=2 ttl=64 time=0.244 ms
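ICMP reachability alone doesn't prove the Nova metadata API is actually answering; a TCP request against port 8775 from the same namespace is more conclusive. A sketch, reusing the IDs from the outputs above:

```shell
# Run on the network node: fetch the metadata index over TCP from
# inside the router namespace. UUID and IP are examples from this thread.
ip netns exec qrouter-91ad119b-83ad-41b9-91ac-04248f943fb6 \
    wget -qO- http://10.46.200.193:8775/2009-04-04/meta-data/ \
    || echo "metadata API unreachable"
```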
pythoner commented 11 years ago

I encountered the same issue. The following is the routing table on my controller node.

root@controller:/var/log/quantum# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.35.1     0.0.0.0         UG    100    0        0 eth1
50.50.1.0       172.16.35.100   255.255.255.0   UG    0      0        0 eth1
100.10.10.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth1
172.16.35.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

pythoner commented 11 years ago

I can ping the VM now, but still can't SSH. @enarciso, what's the IP of your VM?
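A common cause of "ping works but SSH doesn't" at this stage is that the default security group only allows ICMP. A hedged example with the Folsom-era nova client (skip if your guide's security-group step already added these rules):

```shell
# Allow inbound SSH (TCP/22) from anywhere in the default security group.
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# And ICMP, if it isn't already open.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
```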

pythoner commented 11 years ago

The following is the console log of the instance.

[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 3.0.0-12-virtual (buildd@crested) (gcc version 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) ) #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 (Ubuntu 3.0.0-12.20-virtual 3.0.4) [ 0.000000] Command line: LABEL=cirros-rootfs ro console=tty0 console=ttyS0 console=hvc0 [ 0.000000] KERNEL supported cpus: [ 0.000000] Intel GenuineIntel [ 0.000000] AMD AuthenticAMD [ 0.000000] Centaur CentaurHauls [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009dc00 (usable) [ 0.000000] BIOS-e820: 000000000009dc00 - 00000000000a0000 (reserved) [ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) [ 0.000000] BIOS-e820: 0000000000100000 - 000000001fffd000 (usable) [ 0.000000] BIOS-e820: 000000001fffd000 - 0000000020000000 (reserved) [ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved) [ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved) [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] DMI 2.4 present. 
[ 0.000000] No AGP bridge found [ 0.000000] last_pfn = 0x1fffd max_arch_pfn = 0x400000000 [ 0.000000] x86 PAT enabled: cpu 0, old 0x70406, new 0x7010600070106 [ 0.000000] found SMP MP-table at [ffff8800000fdae0] fdae0 [ 0.000000] init_memory_mapping: 0000000000000000-000000001fffd000 [ 0.000000] RAMDISK: 1fdf9000 - 1ffed000 [ 0.000000] ACPI: RSDP 00000000000fd980 00014 (v00 BOCHS ) [ 0.000000] ACPI: RSDT 000000001fffd7b0 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001) [ 0.000000] ACPI: FACP 000000001fffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001) [ 0.000000] ACPI: DSDT 000000001fffd9b0 02589 (v01 BXPC BXDSDT 00000001 INTL 20100528) [ 0.000000] ACPI: FACS 000000001fffff40 00040 [ 0.000000] ACPI: SSDT 000000001fffd910 0009E (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001) [ 0.000000] ACPI: APIC 000000001fffd830 00072 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001) [ 0.000000] ACPI: HPET 000000001fffd7f0 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001) [ 0.000000] No NUMA configuration found [ 0.000000] Faking a node at 0000000000000000-000000001fffd000 [ 0.000000] Initmem setup node 0 0000000000000000-000000001fffd000 [ 0.000000] NODE_DATA [000000001fff5000 - 000000001fff9fff] [ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 [ 0.000000] kvm-clock: cpu 0, msr 0:1ce28c1, boot clock [ 0.000000] Zone PFN ranges: [ 0.000000] DMA 0x00000010 -> 0x00001000 [ 0.000000] DMA32 0x00001000 -> 0x00100000 [ 0.000000] Normal empty [ 0.000000] Movable zone start PFN for each node [ 0.000000] early_node_map[2] active PFN ranges [ 0.000000] 0: 0x00000010 -> 0x0000009d [ 0.000000] 0: 0x00000100 -> 0x0001fffd [ 0.000000] ACPI: PM-Timer IO Port: 0xb008 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 
global_irq 5 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] PM: Registered nosave memory: 000000000009d000 - 000000000009e000 [ 0.000000] PM: Registered nosave memory: 000000000009e000 - 00000000000a0000 [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000 [ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000 [ 0.000000] Allocating PCI resources starting at 20000000 (gap: 20000000:deffc000) [ 0.000000] Booting paravirtualized kernel on KVM [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 27 pages/cpu @ffff88001fa00000 s79296 r8192 d23104 u2097152 [ 0.000000] kvm-clock: cpu 0, msr 0:1fa128c1, primary cpu clock [ 0.000000] KVM setup async PF for cpu 0 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 129157 [ 0.000000] Policy zone: DMA32 [ 0.000000] Kernel command line: LABEL=cirros-rootfs ro console=tty0 console=ttyS0 console=hvc0 [ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes) [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Memory: 497852k/524276k available (6206k kernel code, 460k absent, 25964k reserved, 6907k data, 900k init) [ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. [ 0.000000] RCU dyntick-idle grace-period acceleration is enabled. 
[ 0.000000] NR_IRQS:4352 nr_irqs:256 16 [ 0.000000] Console: colour VGA+ 80x25 [ 0.000000] console [tty0] enabled [ 0.000000] console [ttyS0] enabled [ 0.000000] allocated 4194304 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] Detected 2992.480 MHz processor. [ 0.008000] Calibrating delay loop (skipped) preset value.. 5984.96 BogoMIPS (lpj=11969920) [ 0.008000] pid_max: default: 32768 minimum: 301 [ 0.008060] Security Framework initialized [ 0.009058] AppArmor: AppArmor initialized [ 0.009988] Yama: becoming mindful. [ 0.010932] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes) [ 0.012585] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes) [ 0.014253] Mount-cache hash table entries: 256 [ 0.016021] Initializing cgroup subsys cpuacct [ 0.017050] Initializing cgroup subsys memory [ 0.018048] Initializing cgroup subsys devices [ 0.019043] Initializing cgroup subsys freezer [ 0.020006] Initializing cgroup subsys net_cls [ 0.021020] Initializing cgroup subsys blkio [ 0.021992] Initializing cgroup subsys perf_event [ 0.023108] CPU: CPU feature xsave disabled, no CPUID level 0xd [ 0.024012] mce: CPU supports 10 MCE banks [ 0.025168] SMP alternatives: switching to UP code [ 0.108140] Freeing SMP alternatives: 24k freed [ 0.109281] ACPI: Core revision 20110413 [ 0.111125] ftrace: allocating 26075 entries in 103 pages [ 0.117285] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.120009] CPU0: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz stepping 0b [ 0.124007] Performance Events: unsupported p6 CPU model 15 no PMU driver, software events only. [ 0.124007] Brought up 1 CPUs [ 0.124007] Total of 1 processors activated (5984.96 BogoMIPS). 
[ 0.124458] devtmpfs: initialized [ 0.127128] print_constraints: dummy: [ 0.128078] Time: 5:43:17 Date: 01/07/13 [ 0.129054] NET: Registered protocol family 16 [ 0.130147] ACPI: bus type pci registered [ 0.132020] PCI: Using configuration type 1 for base access [ 0.133884] bio: create slab at 0 [ 0.136754] ACPI: Interpreter enabled [ 0.137620] ACPI: (supports S0 S3 S4 S5) [ 0.138859] ACPI: Using IOAPIC for interrupt routing [ 0.143576] ACPI: No dock devices found. [ 0.144015] HEST: Table not found. [ 0.144817] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug [ 0.146811] ACPI: PCI Root Bridge [PCI0](domain 0000 [bus 00-ff]) [ 0.150634] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI [ 0.152024] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB [ 0.165243] pci0000:00: Unable to request _OSC control (_OSC support mask: 0x1e) [ 0.170261] ACPI: PCI Interrupt Link [LNKA](IRQs 5 10 11) [ 0.171934] ACPI: PCI Interrupt Link [LNKB](IRQs 5 10 11) [ 0.172856] ACPI: PCI Interrupt Link [LNKC](IRQs 5 10 11) [ 0.174509] ACPI: PCI Interrupt Link [LNKD](IRQs 5 10 11) [ 0.176869] ACPI: PCI Interrupt Link [LNKS](IRQs 9) *0 [ 0.178612] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none [ 0.180017] vgaarb: loaded [ 0.180710] vgaarb: bridge control possible 0000:00:02.0 [ 0.182119] SCSI subsystem initialized [ 0.183271] usbcore: registered new interface driver usbfs [ 0.184035] usbcore: registered new interface driver hub [ 0.185207] usbcore: registered new device driver usb [ 0.186473] PCI: Using ACPI for IRQ routing [ 0.188401] NetLabel: Initializing [ 0.189209] NetLabel: domain hash size = 128 [ 0.190162] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.191221] NetLabel: unlabeled traffic allowed by default [ 0.192097] HPET: 3 timers in total, 0 timers will be used for per-cpu timer [ 0.193524] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.194894] hpet0: 3 comparators, 64-bit 100.000000 
MHz counter [ 0.200132] Switching to clocksource kvm-clock [ 0.202939] Switched to NOHz mode on CPU #0 [ 0.210310] AppArmor: AppArmor Filesystem Enabled [ 0.211398] pnp: PnP ACPI init [ 0.212194] ACPI: bus type pnp registered [ 0.213867] pnp: PnP ACPI: found 8 devices [ 0.214789] ACPI: ACPI bus type pnp unregistered [ 0.221946] NET: Registered protocol family 2 [ 0.223144] IP route cache hash table entries: 4096 (order: 3, 32768 bytes) [ 0.224822] TCP established hash table entries: 16384 (order: 6, 262144 bytes) [ 0.226692] TCP bind hash table entries: 16384 (order: 6, 262144 bytes) [ 0.228320] TCP: Hash tables configured (established 16384 bind 16384) [ 0.229614] TCP reno registered [ 0.230378] UDP hash table entries: 256 (order: 1, 8192 bytes) [ 0.231597] UDP-Lite hash table entries: 256 (order: 1, 8192 bytes) [ 0.232995] NET: Registered protocol family 1 [ 0.233963] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 0.235189] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 0.236396] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 0.238040] audit: initializing netlink socket (disabled) [ 0.239180] type=2000 audit(1357537398.236:1): initialized [ 0.256691] Trying to unpack rootfs image as initramfs... 
[ 0.269397] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.277484] VFS: Disk quotas dquot_6.5.2 [ 0.278468] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.280312] fuse init (API version 7.16) [ 0.281277] msgmni has been set to 972 [ 0.296168] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.297871] io scheduler noop registered [ 0.298753] io scheduler deadline registered (default) [ 0.300658] io scheduler cfq registered [ 0.301637] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.302812] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.304304] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 0.305930] ACPI: Power Button [PWRF] [ 0.312896] ERST: Table is not found! [ 0.313943] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 0.315154] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11 [ 0.324331] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 0.325535] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10 [ 0.327609] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10 [ 0.328814] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10 [ 0.332570] Freeing initrd memory: 2000k freed [ 0.334271] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11 [ 0.335501] virtio-pci 0000:00:06.0: PCI INT A -> Link[LNKB] -> GSI 11 (level, high) -> IRQ 11 [ 0.337590] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.360842] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.383742] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 0.407454] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.430325] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 0.431756] Linux agpgart interface v0.103 [ 0.433766] brd: module loaded [ 0.435018] loop: module loaded [ 0.464702] vda: vda1 [ 0.466131] scsi0 : ata_piix [ 0.466963] scsi1 : ata_piix [ 0.467731] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 
bmdma 0xc0c0 irq 14 [ 0.469096] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 [ 0.470660] Fixed MDIO Bus: probed [ 0.471490] PPP generic driver version 2.4.2 [ 0.472501] tun: Universal TUN/TAP device driver, 1.6 [ 0.473560] tun: (C) 1999-2004 Max Krasnyansky maxk@qualcomm.com [ 0.583574] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.584945] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.586190] uhci_hcd: USB Universal Host Controller Interface driver [ 0.587504] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10 [ 0.589334] uhci_hcd 0000:00:01.2: UHCI Host Controller [ 0.590479] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 [ 0.593234] uhci_hcd 0000:00:01.2: irq 10, io base 0x0000c040 [ 0.594617] hub 1-0:1.0: USB hub found [ 0.595550] hub 1-0:1.0: 2 ports detected [ 0.596607] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [ 0.599083] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.600212] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.601402] mousedev: PS/2 mouse device common for all mice [ 0.602738] rtc_cmos 00:01: RTC can wake from S4 [ 0.604050] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0 [ 0.605425] rtc0: alarms up to one day, 114 bytes nvram, hpet irqs [ 0.606875] device-mapper: uevent: version 1.0.3 [ 0.608040] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02) initialised: dm-devel@redhat.com [ 0.610364] cpuidle: using governor ladder [ 0.611381] cpuidle: using governor menu [ 0.612344] EFI Variables Facility v0.08 2004-May-17 [ 0.613796] TCP cubic registered [ 0.614725] NET: Registered protocol family 10 [ 0.616424] NET: Registered protocol family 17 [ 0.617529] Registering the dns_resolver key type [ 0.618738] registered taskstats version 1 [ 0.620058] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [ 0.632593] Magic number: 13:704:718 [ 0.633646] rtc_cmos 00:01: setting system clock to 
2013-01-07 05:43:17 UTC (1357537397) [ 0.635465] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.636804] EDD information not available. [ 0.777695] Freeing unused kernel memory: 900k freed [ 0.779097] Write protecting the kernel read-only data: 12288k [ 0.785924] Freeing unused kernel memory: 1968k freed [ 0.791628] Freeing unused kernel memory: 1368k freed

info: initramfs: up at 0.80
NOCHANGE: partition 1 is size 64260. it cannot be grown
info: initramfs loading root from /dev/vda1
info: /etc/init.d/rc.sysinit: up at 1.08
[    1.083332] EXT3-fs (vda1): warning: checktime reached, running e2fsck is recommended
Starting logging: OK
Initializing random number generator... done.
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.47. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 2/30: up 11.48. request failed
[... the same wget failure repeats for attempts 3/30 through 29/30 ...]
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 30/30: up 39.70. request failed
cloud-setup: after 30 fails, debugging
cloud-setup: running debug (30 tries reached)
############ debug start ##############

/etc/rc.d/init.d/sshd start

/etc/rc3.d/S45-cloud-setup: line 66: /etc/rc.d/init.d/sshd: not found
route: fscanf

ifconfig -a

eth0      Link encap:Ethernet  HWaddr FA:16:3E:EA:D0:B1
          inet6 addr: fe80::f816:3eff:feea:d0b1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6638 (6.4 KiB)  TX bytes:1364 (1.3 KiB)

eth1      Link encap:Ethernet  HWaddr FA:16:3E:67:A7:A3
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
route: fscanf

cat /etc/resolv.conf

cat: can't open '/etc/resolv.conf': No such file or directory

gateway not found

/etc/rc3.d/S45-cloud-setup: line 66: can't open /etc/resolv.conf: no such file

pinging nameservers

uname -a

Linux cirros 3.0.0-12-virtual #20-Ubuntu SMP Fri Oct 7 18:19:02 UTC 2011 x86_64 GNU/Linux

lsmod

Module                  Size  Used by    Not tainted
vfat                   17585  0
fat                    61475  1 vfat
isofs                  40253  0
ip_tables              27473  0
x_tables               29846  1 ip_tables
pcnet32                42078  0
8139cp                 27412  0
ne2k_pci               13691  0
8390                   18856  1 ne2k_pci
e1000                 108573  0
acpiphp                24080  0

dmesg | tail

[    1.303859] acpiphp: Slot [29] registered
[    1.303872] acpiphp: Slot [30] registered
[    1.303884] acpiphp: Slot [31] registered
[    1.313098] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
[    1.313101] e1000: Copyright (c) 1999-2006 Intel Corporation.
[    1.319561] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
[    1.324686] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
[    1.329925] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de
[    1.336594] ip_tables: (C) 2000-2006 Netfilter Core Team
[   11.816040] eth0: no IPv6 routers present

tail -n 25 /var/log/messages

Jan  6 22:43:18 cirros kern.info kernel: [    0.908058] usb 1-1: new full speed USB device number 2 using uhci_hcd
Jan  6 22:43:18 cirros kern.info kernel: [    0.939665] EXT3-fs: barriers not enabled
Jan  6 22:43:18 cirros kern.info kernel: [    0.947280] kjournald starting. Commit interval 5 seconds
Jan  6 22:43:18 cirros kern.info kernel: [    0.947298] EXT3-fs (vda1): mounted filesystem with ordered data mode
Jan  6 22:43:18 cirros kern.warn kernel: [    1.083332] EXT3-fs (vda1): warning: checktime reached, running e2fsck is recommended
Jan  6 22:43:18 cirros kern.info kernel: [    1.210829] EXT3-fs (vda1): using internal journal
Jan  6 22:43:18 cirros kern.info kernel: [    1.256073] Refined TSC clocksource calibration: 2992.464 MHz.
Jan  6 22:43:18 cirros kern.info kernel: [    1.303329] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan  6 22:43:18 cirros kern.info kernel: [    1.303450] acpiphp: Slot [1] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303481] acpiphp: Slot [2] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303522] acpiphp: Slot [3] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303535] acpiphp: Slot [4] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303548] acpiphp: Slot [5] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303559] acpiphp: Slot [6] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303571] acpiphp: Slot [7] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303583] acpiphp: Slot [8] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303593] acpiphp: Slot [9] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.303604] acpiphp: Slot [10] registered
Jan  6 22:43:18 cirros kern.info kernel: [    1.313098] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
Jan  6 22:43:18 cirros kern.info kernel: [    1.313101] e1000: Copyright (c) 1999-2006 Intel Corporation.
Jan  6 22:43:18 cirros kern.info kernel: [    1.319561] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker
Jan  6 22:43:18 cirros kern.info kernel: [    1.324686] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
Jan  6 22:43:18 cirros kern.info kernel: [    1.329925] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de
Jan  6 22:43:18 cirros kern.info kernel: [    1.336594] ip_tables: (C) 2000-2006 Netfilter Core Team
Jan  6 22:43:28 cirros kern.debug kernel: [   11.816040] eth0: no IPv6 routers present
############ debug end ##############
cloud-setup: failed to read iid from metadata. tried 30
WARN: /etc/rc3.d/S45-cloud-setup failed
Starting dropbear sshd: generating rsa key... generating dsa key... OK
===== cloud-final: system completely up in 40.92 seconds ====
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
instance-id:
public-ipv4:
local-ipv4 :
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-userdata: failed to read instance id
WARN: /etc/rc3.d/S99-cloud-userdata failed


(CirrOS ASCII-art banner)  http://launchpad.net/cirros

login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.

cirros login:
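The "Sending discover... No lease, failing" lines in the console log above mean the DHCP offer never reached the VM, which in this VLAN setup usually points at the DHCP agent or the VLAN path between compute and network node. A rough way to narrow it down (a sketch; interface and subnet values are the ones used in this thread):

```shell
# On the network node: is dnsmasq actually serving the tenant network?
ip netns list            # expect a qdhcp-<network-uuid> namespace
ps aux | grep dnsmasq    # expect a dnsmasq bound to the 50.50.1.0/24 subnet

# On the compute node: do DHCP requests leave the VM's port at all?
# Watch for BOOTP/DHCP frames carrying the expected VLAN tag on the
# data interface (eth1 in this guide's layout).
tcpdump -n -e -i eth1 port 67 or port 68
```

If requests show up on the compute node's eth1 but never in the qdhcp namespace, suspect the VLAN trunking between the nodes or the OVS agent's bridge mappings.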

bilelmsekni commented 11 years ago

Hello enarciso,

Your VM can't reach the metadata server because it doesn't have an IP address! Please make sure that all Quantum components are working correctly so the VM can get its IP address.

regards, Bilel
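A quick checklist for "make sure all Quantum components are working", sketched for this guide's layout (the service names below are the Ubuntu/Folsom package names; adjust if yours differ):

```shell
# Network node: the agents that hand out IPs and wire up the router.
service quantum-plugin-openvswitch-agent status
service quantum-dhcp-agent status
service quantum-l3-agent status
tail -n 50 /var/log/quantum/dhcp-agent.log   # look for RPC/auth errors

# Compute node: only the OVS agent runs here.
service quantum-plugin-openvswitch-agent status
```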


mav910623 commented 11 years ago

@mseknibilel, I get an IP assigned, but I have to manually configure the VM with that IP in the CirrOS interfaces file; I'm still not reaching the metadata server.

enarciso commented 11 years ago

@mseknibilel: it seems the VM instance was unable to reach the dnsmasq instance running on the network node, even though I verified that this process is running. The VM console output is similar to what pythoner provided.

Specifically this...

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.47. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 2/30: up 11.48. request failed

This is on my network node:

root@network:~# ps -ef |grep -i dnsm
root      1026 19411  0 15:05 pts/0    00:00:00 grep --color=auto -i dnsm
nobody    1962     1  0 Jan04 ?        00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap0bbc49ed-d6 --except-interface=lo --domain=openstacklocal --pid-file=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/host --dhcp-optsfile=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,50.50.1.0,static,120s
root      1963  1962  0 Jan04 ?        00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap0bbc49ed-d6 --except-interface=lo --domain=openstacklocal --pid-file=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/host --dhcp-optsfile=/var/lib/quantum/dhcp/a04a7ebc-53d5-4285-add0-5a007fbe042b/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,50.50.1.0,static,120s

and I associated my floating IP with the correct fixed IP:

root@controller:~# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 114b0ec3-5ac6-4a48-a100-a9340fdfced2 | 50.50.1.3        | 10.46.200.243       | e265ffc3-46e1-4379-b31b-adb2e7dde619 |
| 273971ba-9e55-4d6b-8025-1523bb7ebc24 |                  | 10.46.200.242       |                                      |
| 389d7756-4d4a-4705-8f07-904ae997f749 |                  | 10.46.200.245       |                                      |
| cd7d5b5e-16ad-4443-90c5-b62b61122493 |                  | 10.46.200.246       |                                      |
| d0098794-b2bd-4ca9-8ce7-515993e0e114 |                  | 10.46.200.247       |                                      |
| f893e4cc-81c2-4809-bd38-17c5587a0b22 |                  | 10.46.200.244       |                                      |
+--------------------------------------+------------------+---------------------+--------------------------------------+
root@controller:~# ping 10.46.200.243
PING 10.46.200.243 (10.46.200.243) 56(84) bytes of data.
From 10.46.200.243 icmp_seq=1 Destination Host Unreachable
From 10.46.200.243 icmp_seq=2 Destination Host Unreachable
From 10.46.200.243 icmp_seq=3 Destination Host Unreachable

I'll double-check all the quantum components on all 3 servers and get back to you.

Thank you.

PS: in Section 3.0 (Network Node), the numbering skips 3.3.

Also, in 3.4, am I supposed to lose network connectivity when "ovs-vsctl add-port br-ex eth2" is run? To solve this I had to bring eth2 down, remove the eth2 config, add br-ex to /etc/network/interfaces, then run the command above.

auto eth2
iface eth2 inet manual

auto br-ex
iface br-ex inet static
        address 10.46.200.142
        ...
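For reference, the resulting /etc/network/interfaces would look roughly like the sketch below. The address and the /23 netmask are taken from the route tables shown earlier in this thread; the key point is that eth2 carries no IP of its own once it is enslaved to br-ex.

```
auto eth2
iface eth2 inet manual
        up ip link set eth2 up

auto br-ex
iface br-ex inet static
        address 10.46.200.142
        netmask 255.255.254.0
        gateway 10.46.200.1
```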
pythoner commented 11 years ago

@enarciso, can you ping 50.50.1.3 after associating the floating IP with the VM port?

enarciso commented 11 years ago

sorry; accidentally closed the issue.

@pythoner: when the VM is initialized for the first time, it's unable to reach the DHCP server to get an IP. However, even if I configure the VM instance with a static IP (50.50.1.3), I am still unable to reach the metadata server. I'll have to run through the Quantum components to see if I missed anything. Also, no, I can't reach the VM even after I set a static IP address and associate the floating IP with the fixed IP. Logs from the network node (/var/log/quantum/l3-agent.log) show the correct allocation:

root@network:/var/log/quantum# grep -i 'e265ffc3-46e1-4379-b31b-adb2e7dde619' *.log
l3-agent.log:2013-01-07 15:26:13    DEBUG [quantumclient.client] RESP BODY:{"floatingips": [{"router_id": "91ad119b-83ad-41b9-91ac-04248f943fb6", "tenant_id": "5e79d964259d4ae0a443af5b83a8c7e5", "floating_network_id": "b00c9dac-dc45-4168-8454-704e9ace73c6", "fixed_ip_address": "50.50.1.3", "floating_ip_address": "10.46.200.243", "port_id": "e265ffc3-46e1-4379-b31b-adb2e7dde619", "id": "114b0ec3-5ac6-4a48-a100-a9340fdfced2"}]}
bilelmsekni commented 11 years ago

50.50.1.3 must be pinged from the controller node!

ping 50.50.1.3 (if you are using the GRE mode)

Set the namespace and then ping 50.50.1.3 if you are using the VLAN mode.
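With namespaces enabled in VLAN mode, that looks like the following on the network node. This is a sketch: the router UUID is the one quoted earlier in this thread, so substitute your own.

```shell
# List the namespaces, then ping the VM's fixed IP from inside the router namespace.
ip netns list
ip netns exec qrouter-91ad119b-83ad-41b9-91ac-04248f943fb6 ping -c 3 50.50.1.3
```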


pythoner commented 11 years ago

@mseknibilel, I can ping 50.50.1.2 without updating the route table. After executing the command "route add -net 50.50.1.0/24 gw 172.35.16.35.122" I can no longer ping 50.50.1.2. 172.16.35.0/24 is used for the external network.

pythoner commented 11 years ago

@mseknibilel, I can't ping the floating IP. When I try to run the command 'nova ssh --login cirros vm_id', it shows "ERROR: No public addresses found for XXXXX". The following are the steps I used to create the floating IP and associate it with the instance. Is there anything wrong?

root@openstack-OptiPlex-780:/home/openstack# nova list --all-tenant 1
+--------------------------------------+------+--------+------------------------+
| ID                                   | Name | Status | Networks               |
+--------------------------------------+------+--------+------------------------+
| c3f818c0-19ce-47e8-abb2-4bf51ef63868 | tt   | ACTIVE | net_proj_one=50.50.1.2 |
+--------------------------------------+------+--------+------------------------+

root@openstack-OptiPlex-780:/home/openstack# ping 50.50.1.2
PING 50.50.1.2 (50.50.1.2) 56(84) bytes of data.
From 172.16.35.15: icmp_seq=1 Redirect Host(New nexthop: 172.16.35.1)
64 bytes from 50.50.1.2: icmp_req=1 ttl=107 time=416 ms
From 172.16.35.15: icmp_seq=2 Redirect Host(New nexthop: 172.16.35.1)
64 bytes from 50.50.1.2: icmp_req=3 ttl=107 time=393 ms

root@openstack-OptiPlex-780:/home/openstack# quantum floatingip-create --tenant-id 9be5421ac04f4aa9a7a93f3cb072e2a5 ext_net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 172.16.35.17                         |
| floating_network_id | d85e9921-c9ac-4eec-97eb-ef6b6a9830e7 |
| id                  | adf52651-5972-442e-9ca9-1ab80ecabe08 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 9be5421ac04f4aa9a7a93f3cb072e2a5     |
+---------------------+--------------------------------------+

root@openstack-OptiPlex-780:/home/openstack# quantum port-list -- --device_id c3f818c0-19ce-47e8-abb2-4bf51ef63868
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 62c20f8e-ae2a-4358-8143-d454a6cae22b |      | fa:16:3e:03:23:1f | {"subnet_id": "d4513fdf-e3db-4871-88d3-a97a8289f92b", "ip_address": "50.50.1.2"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

root@openstack-OptiPlex-780:/home/openstack# quantum floatingip-associate adf52651-5972-442e-9ca9-1ab80ecabe08 62c20f8e-ae2a-4358-8143-d454a6cae22b
Associated floatingip adf52651-5972-442e-9ca9-1ab80ecabe08

root@openstack-OptiPlex-780:/home/openstack# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| adf52651-5972-442e-9ca9-1ab80ecabe08 | 50.50.1.2        | 172.16.35.17        | 62c20f8e-ae2a-4358-8143-d454a6cae22b |
+--------------------------------------+------------------+---------------------+--------------------------------------+

root@openstack-OptiPlex-780:/home/openstack# nova ssh --login cirros c3f818c0-19ce-47e8-abb2-4bf51ef63868
ERROR: No public addresses found for 'c3f818c0-19ce-47e8-abb2-4bf51ef63868'.
pythoner commented 11 years ago

@enarciso, did you set use_namespaces=True in l3_agent.ini? Try changing it to False, then restart the quantum-l3-agent service.
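For context, that switch lives in /etc/quantum/l3_agent.ini; a minimal sketch is below. As far as I recall, on Folsom an l3 agent running with namespaces disabled can serve only a single router, which must then be named explicitly via router_id.

```
[DEFAULT]
use_namespaces = True
# To disable namespaces instead:
# use_namespaces = False
# router_id = <uuid of the single router this agent manages>
```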

enarciso commented 11 years ago

@pythoner, that did not solve it. I'll have to redo this in VMware Fusion to see if there's anything I missed from the guide or if it's the underlying network of my physical dev machines. Thanks

bilelmsekni commented 11 years ago

Try to ping 172.16.35.17. If it responds, do: ssh cirros@172.16.35.17. If it doesn't respond, check that you have added the route on the controller node.

regards,


enarciso commented 11 years ago

So on VMware Fusion the VLAN/2NICs guide seems to have worked; by that I mean I was able to ping a floating IP associated with an instance, but I was unable to ssh in (VM related; outside the scope of this guide). But I did notice that Quantum's modules don't get compiled properly without build-essential and linux-headers-`uname -r` installed first. I had to apt-get remove the quantum packages, install build-essential and linux-headers-`uname -r`, then reinstall them. As far as I know these are dependencies of the Quantum DKMS package.
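The fix described above can be sketched as follows. The DKMS package name (openvswitch-datapath-dkms) is my assumption about which module is meant; check `dkms status` on your system first.

```shell
# Install the build prerequisites, then rebuild the Open vSwitch DKMS module.
apt-get install -y build-essential linux-headers-$(uname -r)
apt-get install --reinstall openvswitch-datapath-dkms   # assumed package name
dkms status                      # the module should now show as installed
service openvswitch-switch restart
```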

I'll re-do my physical dev systems and get back to everyone.

Thank you

pythoner commented 11 years ago

@mseknibilel, there is no response when I ping 172.16.35.17. I was confused about ssh: is ssh supported by the metadata service? In other words, I cannot ssh until I assign a floating IP to the instance. Is that right?

bilelmsekni commented 11 years ago

When the VM is launched, it gets a fixed IP address. Thanks to that address, it can talk to the metadata server to get the information it needs about the user, network, etc. This will allow it to be pingable from the internet as well as to accept SSH. Fix the metadata server problem and you will fix all the related problems.
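One way to probe that path on a namespaced Folsom setup is sketched below. The router UUID and the metadata_ip/metadata_port values are the ones quoted earlier in this thread; this is a diagnostic sketch, not guide material.

```shell
NS=qrouter-91ad119b-83ad-41b9-91ac-04248f943fb6
# The l3 agent should have installed a NAT rule for 169.254.169.254:
ip netns exec $NS iptables -t nat -S | grep 169.254.169.254
# And the nova metadata API should answer directly:
ip netns exec $NS wget -qO- http://10.46.200.193:8775/2009-04-04/meta-data/instance-id
```

If the NAT rule is missing, the l3 agent's metadata_ip/metadata_port settings are the place to look.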

regards,


enarciso commented 11 years ago

I have gone up, down, left & right on this guide and compared it with my VMware Fusion equivalent, without resolution. While my VMware Fusion setup works correctly, I don't have the slightest idea why my dev (physical) machines do not. The only attributes that differ are the floating IP allocation and that VMware Fusion uses qemu rather than kvm. Both the VMware setup and the physical dev machines are behind a NAT subnet. However, my dev machines are in a /23 network and I was allocated only 9 IPs. So my "quantum subnet-create" looks like this: quantum subnet-create --tenant-id $service_tenant_uuid --allocation-pool start=10.46.200.241,end=10.46.200.250 --gateway 10.46.200.1 ext_net 10.46.200.0/24 --enable_dhcp=False

I do notice that it pre-allocates .241, and that address replies to ping!

Am I supposed to allocate the entire subnet (/23) for this? And is Quantum assuming .241 as the gateway even though "quantum subnet-show" states that it has the correct GW?

Thank you, Bilel and pythoner, for your patience.

root@controller:~# quantum port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 0a5d4249-7cf3-4fb7-ae16-55b9a31e9847 |      | fa:16:3e:76:e4:b1 | {"subnet_id": "73f91cb9-663b-4c3f-bfba-8655aa204337", "ip_address": "10.46.200.241"} |
| 61effcc3-39d7-4d25-b3f0-bba75a38ec94 |      | fa:16:3e:f5:eb:83 | {"subnet_id": "cdb195a8-c1e2-4b81-b1ba-87487b7c5846", "ip_address": "50.50.1.1"}     |
| b1036760-893b-4172-91b6-9346bb0845bd |      | fa:16:3e:e5:73:ca | {"subnet_id": "cdb195a8-c1e2-4b81-b1ba-87487b7c5846", "ip_address": "50.50.1.2"}     |
| eeaa628c-344f-4416-bf0b-7ffed7efdb60 |      | fa:16:3e:31:df:bf | {"subnet_id": "cdb195a8-c1e2-4b81-b1ba-87487b7c5846", "ip_address": "50.50.1.3"}     |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
root@controller:~# quantum subnet-list
+--------------------------------------+------+----------------+----------------------------------------------------+
| id                                   | name | cidr           | allocation_pools                                   |
+--------------------------------------+------+----------------+----------------------------------------------------+
| 73f91cb9-663b-4c3f-bfba-8655aa204337 |      | 10.46.200.0/24 | {"start": "10.46.200.241", "end": "10.46.200.250"} |
| cdb195a8-c1e2-4b81-b1ba-87487b7c5846 |      | 50.50.1.0/24   | {"start": "50.50.1.2", "end": "50.50.1.254"}       |
+--------------------------------------+------+----------------+----------------------------------------------------+
root@controller:~# quantum subnet-show 73f91cb9-663b-4c3f-bfba-8655aa204337
+------------------+----------------------------------------------------+
| Field            | Value                                              |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "10.46.200.241", "end": "10.46.200.250"} |
| cidr             | 10.46.200.0/24                                     |
| dns_nameservers  |                                                    |
| enable_dhcp      | False                                              |
| gateway_ip       | 10.46.200.1                                        |
| host_routes      |                                                    |
| id               | 73f91cb9-663b-4c3f-bfba-8655aa204337               |
| ip_version       | 4                                                  |
| name             |                                                    |
| network_id       | 0f18428d-ad95-43f1-b83b-51170eaa679a               |
| tenant_id        | fd0e24fd1d5d40baaa1c8a7f85e27d5a                   |
+------------------+----------------------------------------------------+
bilelmsekni commented 11 years ago

What you have done is correct. If you have 9 floating IPs, it doesn't matter whether you use /24 or /23 (honestly, I prefer /24). As for the first IP address, 10.46.200.241: it goes to the virtual gateway port (the qg- device name refers to Quantum Gateway), and the remaining 8 IP addresses can be given to VMs.
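For the curious, that address accounting can be sketched with a little arithmetic on the pool bounds from the subnet-create above (the range .241-.250 actually spans 10 addresses, the first of which goes to the qg- gateway port):

```shell
# Pool 10.46.200.241 - 10.46.200.250, as created earlier in the thread.
start=241
end=250
pool=$(( end - start + 1 ))      # total addresses in the allocation pool
for_vms=$(( pool - 1 ))          # the qg- gateway port consumes the first one
echo "pool=$pool available_for_vms=$for_vms"
# -> pool=10 available_for_vms=9
```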

regards,

enarciso commented 11 years ago

Closing this issue; trying out the master branch and using nova-network instead of Quantum. Thank you Bilel and pythoner for all the help.