nii-cloud / dodai-deploy

Deployment Tool for OpenStack (Nova, Glance and Swift) and Hadoop using Puppet
https://github.com/nii-cloud/dodai-deploy/wiki

dodai-deploy on 2 nodes #20

Closed guanxiaohua2k6 closed 12 years ago

guanxiaohua2k6 commented 12 years ago

Hi,

As you suggested, I'm contacting you by email about a problem I'm facing with my dodai-deploy installation: http://www.guanxiaohua2k6.com/2012/04/install-openstack-nova-essexmultiple.html?showComment=1342077925664#c6214097199447870768

After I checked that both servers were running, I was able to install keystone on the localhost, but when I tried to install glance it showed the proposal as installed, yet glance is nonexistent on the server. The job_server.log is:

{:operation=>"install", :params=>{:proposal_id=>"2"}} Start install[proposal - 2] install components Determining the amount of hosts matching filter for 5 seconds .... 1

1 / 1

[#<MCollective::RPC::Result:0x7f94d0cb51f8 @results={:data=>{:output=>"\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate for ca\e[0m\n\e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n\e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n\e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">] --- !ruby/object:MCollective::RPC::Result action: runonce agent: puppetd results: :data: :output: "\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Caching certificate for ca\e[0m\n\ \e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n\ \e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n\ \e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\ \e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n\ \e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n" :statuscode: 0 :statusmsg: OK :sender: Server2.gemalto.com install[proposal - 2] finished {:operation=>"test", :params=>{:proposal_id=>"2"}} Start test[proposal - 2] Determining the amount of hosts matching filter for 5 seconds .... 1

1 / 1

[#<MCollective::RPC::Result:0x7f94d0c0e3f8 @results={:data=>{:output=>"\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">] --- !ruby/object:MCollective::RPC::Result action: runonce agent: puppetd results: :data: :output: "\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\ \e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n" :statuscode: 0 :statusmsg: OK :sender: Server2.gemalto.com test[proposal - 2] finished

And the yaml file on the second node:


"File[/var/lib/puppet/facts]": !ruby/sym checked: 2012-07-12 09:51:45.268143 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.137156 +02:00 "Class[Main]": !ruby/sym checked: 2012-07-12 10:07:00.142167 +02:00 "File[/var/lib/puppet/ssl/private_keys]": !ruby/sym checked: 2012-07-12 09:51:45.271698 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.142976 +02:00 "File[/etc/puppet/puppet.conf]": !ruby/sym checked: 2012-07-12 09:51:46.763554 +02:00 !ruby/sym configuration: {} "File[/var/lib/puppet/ssl/certificate_requests]": !ruby/sym checked: 2012-07-12 09:51:45.279479 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.159266 +02:00 "Stage[main]": !ruby/sym checked: 2012-07-12 10:07:00.143555 +02:00 "Class[Glance_e]": !ruby/sym checked: 2012-07-12 10:07:00.140399 +02:00 "File[/var/lib/puppet/client_data]": !ruby/sym checked: 2012-07-12 09:51:46.764492 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.149191 +02:00 "File[/var/lib/puppet/ssl/private]": !ruby/sym checked: 2012-07-12 09:51:45.277418 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.154989 +02:00 "File[/var/lib/puppet/clientbucket]": !ruby/sym checked: 2012-07-12 09:51:46.767397 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.157621 +02:00 "File[/var/lib/puppet/client_yaml]": !ruby/sym checked: 2012-07-12 09:51:46.766446 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.153167 +02:00 "File[/var/lib/puppet/lib]": !ruby/sym checked: 2012-07-12 09:51:45.278580 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.156239 +02:00 "File[/var/log/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.269008 +02:00 "File[/var/lib/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.267087 +02:00 "File[/var/lib/puppet/state]": !ruby/sym checked: 2012-07-12 09:51:45.276414 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.150469 +02:00 "File[/etc/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.266200 +02:00 "File[/var/lib/puppet/ssl/public_keys]": !ruby/sym checked: 2012-07-12 09:51:45.272900 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.144669 +02:00 "Class[Settings]": !ruby/sym checked: 2012-07-12 10:07:00.135979 +02:00 "File[/var/run/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.275386 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.147741 +02:00 "File[/var/lib/puppet/ssl]": !ruby/sym checked: 2012-07-12 09:51:45.270091 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.140926 +02:00 "File[/var/lib/puppet/state/graphs]": !ruby/sym checked: 2012-07-12 09:51:46.765384 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.151799 +02:00 "Filebucket[puppet]": !ruby/sym checked: 2012-07-12 10:07:00.137387 +02:00 "File[/var/lib/puppet/ssl/certs]": !ruby/sym checked: 2012-07-12 09:51:45.274179 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.146368 +02:00

Is there another step I need to follow to completely install glance, like running a "puppet apply", or is it something else?

Thanks a lot for your time, Narjisse

guanxiaohua2k6 commented 12 years ago

Hi,

Could you confirm the puppet manifests on the dodai-deploy server with the following command?

ls -la /etc/puppet/modules

The glance folder should be there. Also, please confirm the contents of /var/log/puppet/*.yml.
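For example (just a sketch; the yml file names under /var/log/puppet/ depend on your node FQDNs):

ls -la /etc/puppet/modules | grep -i glance
cat /var/log/puppet/*.yml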

Xiaohua.

guanxiaohua2k6 commented 12 years ago

The glance modules are present, and in the /var/log/puppet/ folder I can see 2 yml files named after my nodes.

Narjisse

guanxiaohua2k6 commented 12 years ago

Could you paste the contents of the yml files in the /var/log/puppet/ folder?

Xiaohua

guanxiaohua2k6 commented 12 years ago

In /var/log/puppet/puppet_node1.example.com:

{ "parameters": {
"nova_api_fqdn": [
  "node1.example.com"
],
"nova_network_fqdn": [
  "node1.example.com"
],
"libvirt_type": "kvm",
"dashboard": [
  "127.0.1.1"
],
"nova_network": [
  "127.0.1.1"
],
"admin_user": "admin",
"self_host_fqdn": "node1.example.com",
"admin_password": "admin",
"nova_scheduler": [
  "127.0.1.1"
],
"nova_scheduler_fqdn": [
  "node1.example.com"
],
"self_host": "127.0.1.1",
"nova_objectstore_fqdn": [
  "node1.example.com"
],
"admin_tenant_name": "admin",
"nova_compute": [
  "127.0.1.1"
],
"mysql_fqdn": [
  "node1.example.com"
],
"nova_volume_fqdn": [
  "node1.example.com"
],
"glance": "node1.example.com",  # this is because I reinstalled glance on node1 to test nova, but before this it was set to node2
"nova_objectstore": [
  "127.0.1.1"
],
"dashboard_fqdn": [
  "node1.example.com"
],
"novnc": [
  "127.0.1.1"
],
"mysql": [
  "127.0.1.1"
],
"nova_cert_fqdn": [
  "node1.example.com"
],
"nova_cert": [
  "127.0.1.1"
],
"rabbitmq_fqdn": [
  "node1.example.com"
],
"nova_volume": [
  "127.0.1.1"
],
"rabbitmq": [
  "127.0.1.1"
],
"network_ip_range": "192.168.10.22/24",
"nova_compute_fqdn": [
  "node1.example.com"
],
"novnc_fqdn": [
  "node1.example.com"
],
"nova_api": [
  "127.0.1.1"
],
"proposal_id": "3",
"keystone": "node1.example.com"

}, "classes": [ "nova_e", "nova_e::nova_api::test" ] }

In puppet_node2.example.com.yml:

{ "classes": [ "nova_e" ], "parameters": { "proposal_id": "3", "nova_objectstore_fqdn": [ "node1.example.com" ], "rabbitmq": [ "127.0.1.1" ], "libvirt_type": "kvm", "nova_scheduler": [ "127.0.1.1" ], "nova_compute_fqdn": [ "node1.example.com" ], "nova_api": [ "node1.example.com_ip" ], "admin_tenant_name": "admin", "nova_cert_fqdn": [ "node1.example.com" ], "novnc_fqdn": [ "node1.example.com" ], "novnc": [ "127.0.1.1" ], "glance": "node2.example.com", "network_ip_range": "10.0.0.3/28", "nova_api_fqdn": [ "node2.example.com" ], "nova_compute": [ "127.0.1.1" ], "dashboard_fqdn": [ "node2.example.com" ], "nova_scheduler_fqdn": [ "node1.example.com" ], "nova_volume_fqdn": [ "node1.example.com" ], "dashboard": [ "node1.example.com_ip" ], "mysql": [ "node1.example.com_ip" ], "rabbitmq_fqdn": [ "node1.example.com" ], "keystone": "node1.example.com", "nova_network": [ "127.0.1.1" ], "self_host_fqdn": "node2.example.com", "nova_volume": [ "127.0.1.1" ], "mysql_fqdn": [ "node2.example.com" ], "admin_password": "admin", "self_host": "node1.example.com_ip", "nova_cert": [ "127.0.1.1" ], "nova_network_fqdn": [ "node1.example.com" ], "admin_user": "admin", "nova_objectstore": [ "127.0.1.1" ] } }

Narjisse

guanxiaohua2k6 commented 12 years ago

Hi,

I can't be sure whether it is the cause, but could you modify your /etc/hosts to change 127.0.1.1 to the actual IP address? After that, please try again.
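For example, the entries should look something like this (a sketch only; node1_ip and node2_ip stand for the real addresses of your nodes):

node1_ip node1.example.com node1
node2_ip node2.example.com node2

rather than mapping each node's own FQDN to 127.0.1.1.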

BTW, the timezone here is UTC+9, so I may not be able to reply during your daytime.

Moreover, because I want to manage dodai-deploy issues via GitHub, I will copy the mail contents to GitHub.

Xiaohua.

guanxiaohua2k6 commented 12 years ago

You were right; this configuration worked for me:

node2:

127.0.0.1 localhost
127.0.1.1 node2.example.com node2
node1_ip node1.example.com node1 puppet
node2_ip node2.example.com node2

node1:

127.0.0.1 localhost
node1_ip node1.example.com node1 puppet puppet.example.com
node2_ip node2.example.com node2

Now I can install proposals on both nodes.

Thanks a lot for your help.

Narjisse

guanxiaohua2k6 commented 12 years ago

Up to now, the install is as follows:

node1: keystone, nova-* (including nova-compute)
node2: glance, nova-compute

My goal is to have a 2-node deployment of Essex with 2 compute nodes and a swift implementation.

For now the install is good: nova, keystone and glance respond well on the corresponding servers, but I can't access the instances via SSH or VNC. This might be more of a network problem, but I still wanted to check whether the architecture I'm using is causing it.

1. Do you have any idea what might cause this network issue?
2. Can I use your loopback swift implementation script with a partition on the same disk, say /dev/sda instead of /dev/sdb? (I only have one accessible disk on my machines due to the RAID configuration.)

Thanks for your time and your patience.

Narjisse

guanxiaohua2k6 commented 12 years ago

Hi,

As to problem 1, have you confirmed that the status of the instance became running? If it did, please try to access the instance from the nova-network node. BTW, could you ping the instance? And what image were you using?
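For example, something like this (a rough sketch; it assumes the nova credentials are exported in your shell, and the fixed IP is whatever nova list reports):

nova list
ping -c 3 <instance_fixed_ip>    # run this on the nova-network node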

As to problem 2, it's ok to use /dev/sda.
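For reference, a loopback-backed swift device doesn't depend on which physical disk holds the backing file; a minimal sketch (paths and size are only examples, not necessarily what the dodai-deploy script uses):

dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=1000000
mkfs.xfs -i size=1024 /srv/swift-disk
mkdir -p /srv/node/sdb1
mount -o loop,noatime /srv/swift-disk /srv/node/sdb1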

Xiaohua.

guanxiaohua2k6 commented 12 years ago

As for the image, I used an official Ubuntu Precise image on one instance and the provided template "mybucket" on another, and on both instances I cannot access the network.

Narjisse

guanxiaohua2k6 commented 12 years ago

Could you ping the instance?

Xiaohua

guanxiaohua2k6 commented 12 years ago

The VNC problem was due to Firefox. On Chrome I can access the instances from the web VNC, but I can't ping or SSH to them.

Narjisse

guanxiaohua2k6 commented 12 years ago

Could you get the console log of the instance? You can get it from the OpenStack dashboard. Could you paste it?

Xiaohua

guanxiaohua2k6 commented 12 years ago

cloud-setup: after 30 fails, debugging cloud-setup: running debug (30 tries reached) ############ debug start ##############

/etc/rc.d/init.d/sshd start

stty: /dev/console startup dropbear [ [1;32mOK[0;39m ] route: fscanf

ifconfig -a

eth0 Link encap:Ethernet HWaddr FA:16:3E:69:7A:6B
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5040 (4.9 KiB) TX bytes:2190 (2.1 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

route -n

Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface route: fscanf

cat /etc/resolv.conf

cat: can't open '/etc/resolv.conf': No such file or directory

gateway not found

/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory

pinging nameservers

uname -a

Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux cloud-setup: after 30 fails, debugging cloud-setup: running debug (30 tries reached) ############ debug start ##############

/etc/rc.d/init.d/sshd start

stty: /dev/console startup dropbear [ [1;32mOK[0;39m ] route: fscanf

ifconfig -a

eth0 Link encap:Ethernet HWaddr FA:16:3E:69:7A:6B
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:18 errors:0 dropped:0 overruns:0 frame:0 TX packets:9 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5040 (4.9 KiB) TX bytes:2190 (2.1 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

route -n

Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface route: fscanf

cat /etc/resolv.conf

cat: can't open '/etc/resolv.conf': No such file or directory

gateway not found

/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory

pinging nameservers

uname -a

Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux

lsmod

Module Size Used by ip_tables 18737 0 x_tables 24391 1 ip_tables pcnet32 36585 0 8139cp 20333 0 mii 5261 2 pcnet32,8139cp ne2k_pci 7802 0 8390 9897 1 ne2k_pci e1000 110274 0 acpiphp 18752 0

dmesg | tail

<6>[ 2.039440] acpiphp: Slot [29] registered <6>[ 2.039614] acpiphp: Slot [30] registered <6>[ 2.039775] acpiphp: Slot [31] registered <6>[ 2.109847] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k6-NAPI <6>[ 2.109875] e1000: Copyright (c) 1999-2006 Intel Corporation. <6>[ 2.163789] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker <6>[ 2.215734] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004) <6>[ 2.260330] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de <6>[ 2.391537] ip_tables: (C) 2000-2006 Netfilter Core Team <6>[ 5.524331] eth0: IPv6 duplicate address fe80::f816:3eff:fe69:7a6b detected! ### tail -n 25 /var/log/messages Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037088] acpiphp: Slot [15] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037244] acpiphp: Slot [16] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037424] acpiphp: Slot [17] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037615] acpiphp: Slot [18] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037778] acpiphp: Slot [19] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.037927] acpiphp: Slot [20] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038087] acpiphp: Slot [21] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038246] acpiphp: Slot [22] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038433] acpiphp: Slot [23] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038618] acpiphp: Slot [24] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038780] acpiphp: Slot [25] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.038929] acpiphp: Slot [26] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.039078] acpiphp: Slot [27] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.039236] acpiphp: Slot [28] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.039440] acpiphp: Slot [29] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.039614] acpiphp: Slot [30] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.039775] acpiphp: Slot [31] registered Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.109847] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k6-NAPI Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.109875] e1000: Copyright (c) 1999-2006 Intel Corporation. Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.163789] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.215734] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004) Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.260330] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de Jul 18 08:24:25 ttylinux_host user.info kernel: [ 2.391537] ip_tables: (C) 2000-2006 Netfilter Core Team Jul 18 08:24:26 ttylinux_host user.info kernel: [ 5.524331] eth0: IPv6 duplicate address fe80::f816:3eff:fe69:7a6b detected! Jul 18 08:25:08 ttylinux_host authpriv.info dropbear[243]: Running in background ############ debug end ############## cloud-setup: failed to read iid from metadata. 
tried 30 stty: /dev/console [1;33msshd is already running.[0;39m stty: /dev/console startup inetd [ [1;32mOK[0;39m ] stty: /dev/console startup crond [ [1;32mOK[0;39m ] wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-userdata: failed to read instance id ===== cloud-final: system completely up in 49.45 seconds ==== wget: can't connect to remote host (169.254.169.254): Network is unreachable wget: can't connect to remote host (169.254.169.254): Network is unreachable wget: can't connect to remote host (169.254.169.254): Network is unreachable instance-id: public-ipv4: local-ipv4 : Narjisse
guanxiaohua2k6 commented 12 years ago

Seems like the instance is not getting any IP address.

Narjisse

guanxiaohua2k6 commented 12 years ago

Yes. Could you show me your proposal, as a picture or as text?

Xiaohua

guanxiaohua2k6 commented 12 years ago

The only change I made was in nova-init.sh: I removed the range size so I don't get an error at install. Here's the proposal:

Name: nova essex sever
Software: openstack essex nova
State: tested

Config items:
network_ip_range 192.168.22.0/29
libvirt_type qemu
admin_tenant_name admin
admin_user admin
admin_password admin
glance node2.example.com
keystone node1.example.com

Node configs:
node2.example.com: nova_compute
node1.example.com: dashboard mysql nova_api nova_cert nova_compute nova_network nova_objectstore nova_scheduler nova_volume novnc rabbitmq

Component configs:

dashboard /etc/apache2/conf.d/openstack-dashboard.conf
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10

<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
Order allow,deny
Allow from all
</Directory>

/etc/openstack-dashboard/local_settings.py

import os

from django.utils.translation import ugettext_lazy as _

DEBUG = True
TEMPLATE_DEBUG = DEBUG
PROD = False
USE_SSL = False

# Note: You should change this value
SECRET_KEY = 'elj1IWiLoWHgcyYxFVLj7cM5rGOOxWl0'

# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG = {
#     "password_validator": {
#         "regex": '.*',
#         "help_text": _("Your password does not meet the requirements.")
#     }
# }

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

# We recommend you use memcached for development; otherwise after every reload
# of the django development server, you will have to login again. To use
# memcached set CACHE_BACKED to something like 'memcached://127.0.0.1:11211/'
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'

# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]

OPENSTACK_HOST = "<%= keystone %>"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'native',
    'can_edit_user': True
}

# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
# OPENSTACK_ENDPOINT_TYPE = "publicURL"

# The number of Swift containers and objects to display on a single page before
# providing a paging element (a "more" link) to paginate results.
API_RESULT_LIMIT = 1000

# If you have external monitoring links, eg:
# EXTERNAL_MONITORING = [
#     ['Nagios','http://foo.com'],
#     ['Ganglia','http://bar.com'],
# ]

LOGGING = {
    'version': 1,
    # When set to True this will disable all logging except
    # for loggers specified in this configuration dictionary. Note that
    # if nothing is specified here and disable_existing_loggers is True,
    # django.db.backends will still log unless it is disabled explicitly.
    'disable_existing_loggers': False,
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'django.utils.log.NullHandler',
            },
        'console': {
            # Set the level to "DEBUG" for verbose output logging.
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            },
        },
    'loggers': {
        # Logging from django.db.backends is VERY verbose, send to null
        # by default.
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
            },
        'horizon': {
            'handlers': ['console'],
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'propagate': False,
        },
        'nose.plugins.manager': {
            'handlers': ['console'],
            'propagate': False,
        }
    }

}

nova_compute /etc/nova/nova-compute.conf
--libvirt_type=<%= libvirt_type %>
--vncserver_proxyclient_address=<%= self_host %>
--vncserver_listen=<%= self_host %>

Software configs:

/etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=<%= nova_objectstore %>
--ec2_host=<%= nova_api %>
--rabbit_host=<%= rabbitmq %>
--cc_host=<%= nova_api %>
--nova_url=http://<%= nova_api %>:8774/v1.1/
--routing_source_ip=<%= nova_api %>
--glance_api_servers=<%= glance %>:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.22
--sql_connection=mysql://root:nova@<%= mysql %>/nova
--ec2_url=http://<%= nova_api %>:8773/services/Cloud
--keystone_ec2_url=http://<%= keystone %>:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
--libvirt_type=<%= libvirt_type %>
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
--vnc_enabled=true
--novncproxy_base_url=http://<%= nova_api %>:6080/vnc_auto.html

# network specific settings
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--flat_interface=eth0
--vlan_interface=eth0
--flat_network_bridge=br100
--fixed_range=192.168.22.32/27
--floating_range=10.35.17.240/30
--network_size=8

--flat_network_dhcp_start=192.168.22.33

--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose

/etc/nova/api-paste.ini

############
# Metadata
############
[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15: meta
/2008-02-01: meta
/2008-09-01: meta
/2009-04-04: meta

[pipeline:metaversions] pipeline = ec2faultwrap logrequest metaverapp

[pipeline:meta] pipeline = ec2faultwrap logrequest metaapp

[app:metaverapp] paste.app_factory = nova.api.metadata.handler:Versions.factory

[app:metaapp] paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######

# EC2

#######

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
deprecated = ec2faultwrap logrequest authenticate cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap] paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest] paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout] paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:totoken] paste.filter_factory = nova.api.ec2:EC2Token.factory

[filter:ec2keystoneauth] paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth] paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate] paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest] controller = nova.api.ec2.cloud.CloudController paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer] paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator] paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor] paste.app_factory = nova.api.ec2:Executor.factory

#############

# Openstack

#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_compute_app_v2
deprecated = faultwrap auth ratelimit osapi_compute_app_v2
keystone = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_volume_app_v1
deprecated = faultwrap auth ratelimit osapi_volume_app_v1
keystone = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
keystone_nolimit = faultwrap authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap] paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth] paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth] paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit] paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[app:osapi_compute_app_v2] paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions] pipeline = faultwrap oscomputeversionapp

[app:osapi_volume_app_v1] paste.app_factory = nova.api.openstack.volume:APIRouter.factory

[app:oscomputeversionapp] paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

[pipeline:osvolumeversions] pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp] paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

##########

# Shared

##########

[filter:keystonecontext] paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = <%= keystone %> service_port = 5000 auth_host = <%= keystone %> auth_port = 35357 auth_protocol = http auth_uri = http://<%= keystone %>:5000/ admin_tenant_name = <%= admin_tenant_name %> admin_user = <%= admin_user %> admin_password = <%= admin_password %>

Narjisse

guanxiaohua2k6 commented 12 years ago

Could you paste the full contents of the console log for the instance? The initial part was omitted.

Xiaohua

guanxiaohua2k6 commented 12 years ago

[ 0.000000] Initializing cgroup subsys cpuset [ 0.000000] Initializing cgroup subsys cpu [ 0.000000] Linux version 2.6.35-22-virtual (buildd@yellow) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 (Ubuntu 2.6.35-22.35-virtual 2.6.35.4) [ 0.000000] Command line: root=/dev/vda console=ttyS0 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: 0000000000000000 - 000000000009bc00 (usable) [ 0.000000] BIOS-e820: 000000000009bc00 - 00000000000a0000 (reserved) [ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) [ 0.000000] BIOS-e820: 0000000000100000 - 000000007fffd000 (usable) [ 0.000000] BIOS-e820: 000000007fffd000 - 0000000080000000 (reserved) [ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved) [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] DMI 2.4 present. [ 0.000000] No AGP bridge found [ 0.000000] last_pfn = 0x7fffd max_arch_pfn = 0x400000000 [ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106 [ 0.000000] Scanning 1 areas for low memory corruption [ 0.000000] modified physical RAM map: [ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved) [ 0.000000] modified: 0000000000010000 - 000000000009bc00 (usable) [ 0.000000] modified: 000000000009bc00 - 00000000000a0000 (reserved) [ 0.000000] modified: 00000000000f0000 - 0000000000100000 (reserved) [ 0.000000] modified: 0000000000100000 - 000000007fffd000 (usable) [ 0.000000] modified: 000000007fffd000 - 0000000080000000 (reserved) [ 0.000000] modified: 00000000fffc0000 - 0000000100000000 (reserved) [ 0.000000] found SMP MP-table at [ffff8800000fdae0] fdae0 [ 0.000000] init_memory_mapping: 0000000000000000-000000007fffd000 [ 0.000000] RAMDISK: 7ffd8000 - 7fff0000 [ 0.000000] ACPI: RSDP 00000000000fd980 00014 (v00 BOCHS ) [ 0.000000] ACPI: RSDT 000000007fffd7b0 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001) [ 0.000000] ACPI: FACP 000000007fffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001) [ 0.000000] ACPI: DSDT 000000007fffd9b0 02589 (v01 BXPC BXDSDT 00000001 INTL 20100528) [ 0.000000] ACPI: FACS 000000007fffff40 00040 [ 0.000000] ACPI: SSDT 000000007fffd910 0009E (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001) [ 0.000000] ACPI: APIC 000000007fffd830 00072 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001) [ 0.000000] ACPI: HPET 000000007fffd7f0 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001) [ 0.000000] No NUMA configuration found [ 0.000000] Faking a node at 0000000000000000-000000007fffd000 [ 0.000000] Initmem setup node 0 0000000000000000-000000007fffd000 [ 0.000000] NODE_DATA [0000000001d1b080 - 0000000001d2007f] [ 0.000000] Zone PFN ranges: [ 0.000000] DMA 0x00000010 -> 0x00001000 [ 0.000000] DMA32 0x00001000 -> 0x00100000 [ 0.000000] Normal empty [ 0.000000] Movable zone start PFN for each node [ 0.000000] early_node_map[2] active PFN ranges [ 0.000000] 0: 0x00000010 -> 0x0000009b [ 0.000000] 0: 0x00000100 -> 0x0007fffd [ 0.000000] ACPI: PM-Timer IO Port: 0xb008 [ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) [ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0]) [ 0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 
11 global_irq 11 high level) [ 0.000000] Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs [ 0.000000] PM: Registered nosave memory: 000000000009b000 - 000000000009c000 [ 0.000000] PM: Registered nosave memory: 000000000009c000 - 00000000000a0000 [ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000 [ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000 [ 0.000000] Allocating PCI resources starting at 80000000 (gap: 80000000:7ffc0000) [ 0.000000] Booting paravirtualized kernel on bare hardware [ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1 [ 0.000000] PERCPU: Embedded 30 pages/cpu @ffff880001e00000 s91520 r8192 d23168 u2097152 [ 0.000000] pcpu-alloc: s91520 r8192 d23168 u2097152 alloc=12097152 [ 0.000000] pcpu-alloc: [0] 0 [ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 517000 [ 0.000000] Policy zone: DMA32 [ 0.000000] Kernel command line: root=/dev/vda console=ttyS0 [ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) [ 0.000000] Checking aperture... [ 0.000000] No AGP bridge found [ 0.000000] Subtract (41 early reservations) [ 0.000000] #1 [0001000000 - 0001d1a954] TEXT DATA BSS [ 0.000000] #2 [007ffd8000 - 007fff0000] RAMDISK [ 0.000000] #3 [0001d1b000 - 0001d1b071] BRK [ 0.000000] #4 [000009bc00 - 00000fdae0] BIOS reserved [ 0.000000] #5 [00000fdae0 - 00000fdaf0] MP-table mpf [ 0.000000] #6 [00000fdbe8 - 0000100000] BIOS reserved [ 0.000000] #7 [00000fdaf0 - 00000fdbe8] MP-table mpc [ 0.000000] #8 [0000010000 - 0000012000] TRAMPOLINE [ 0.000000] #9 [0000012000 - 0000016000] ACPI WAKEUP [ 0.000000] #10 [0000016000 - 0000018000] PGTABLE [ 0.000000] #11 [0001d1b080 - 0001d20080] NODE_DATA [ 0.000000] #12 [0001d20080 - 0001d21080] BOOTMEM [ 0.000000] #13 [0000018000 - 0000018180] BOOTMEM [ 0.000000] #14 [0002522000 - 0002523000] BOOTMEM [ 0.000000] #15 [0002523000 - 0002524000] BOOTMEM [ 0.000000] #16 [0002600000 - 0004200000] MEMMAP 0 [ 0.000000] #17 [0001d21080 - 0001d39080] BOOTMEM [ 0.000000] #18 [0001d39080 - 0001d51080] BOOTMEM [ 0.000000] #19 [0001d52000 - 0001d53000] BOOTMEM [ 0.000000] #20 [0001d1a980 - 0001d1a9c1] BOOTMEM [ 0.000000] #21 [0001d1aa00 - 0001d1aa43] BOOTMEM [ 0.000000] #22 [0001d1aa80 - 0001d1ac08] BOOTMEM [ 0.000000] #23 [0001d1ac40 - 0001d1aca8] BOOTMEM [ 0.000000] #24 [0001d1acc0 - 0001d1ad28] BOOTMEM [ 0.000000] #25 [0001d1ad40 - 0001d1ada8] BOOTMEM [ 0.000000] #26 [0001d1adc0 - 0001d1ae28] BOOTMEM [ 0.000000] #27 [0001d1ae40 - 0001d1aea8] BOOTMEM [ 0.000000] #28 [0001d1aec0 - 0001d1af28] BOOTMEM [ 0.000000] #29 [0001d1af40 - 0001d1af60] BOOTMEM [ 0.000000] #30 [0001d1af80 - 0001d1af9c] BOOTMEM [ 0.000000] #31 [0001d1afc0 - 0001d1afdc] BOOTMEM [ 0.000000] #32 [0001e00000 - 0001e1e000] BOOTMEM [ 0.000000] #33 [0001d51080 - 0001d51088] BOOTMEM [ 0.000000] #34 [0001d510c0 - 0001d510c8] BOOTMEM [ 0.000000] #35 [0001d51100 - 0001d51104] BOOTMEM [ 0.000000] #36 [0001d51140 - 0001d51148] BOOTMEM [ 0.000000] #37 [0001d51180 - 0001d512d0] BOOTMEM [ 0.000000] #38 [0001d51300 - 0001d51380] BOOTMEM [ 0.000000] #39 [0001d51380 - 0001d51400] BOOTMEM [ 0.000000] #40 [0001d53000 - 0001d5b000] BOOTMEM [ 0.000000] Memory: 2054064k/2097140k available (5816k kernel code, 468k absent, 42608k reserved, 5366k data, 828k init) [ 0.000000] SLUB: Genslabs=14, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.000000] Hierarchical RCU implementation. 
[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled. [ 0.000000] RCU-based detection of stalled CPUs is disabled. [ 0.000000] Verbose stalled-CPUs detection is disabled. [ 0.000000] NR_IRQS:4352 nr_irqs:256 [ 0.000000] Console: colour VGA+ 80x25 [ 0.000000] console [ttyS0] enabled [ 0.000000] allocated 20971520 bytes of page_cgroup [ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups [ 0.000000] Fast TSC calibration using PIT [ 0.000000] Detected 3095.325 MHz processor. [ 0.000412] Calibrating delay loop (skipped), value calculated using timer frequency.. 6190.65 BogoMIPS (lpj=30953250) [ 0.000845] pid_max: default: 32768 minimum: 301 [ 0.001487] Security Framework initialized [ 0.002885] AppArmor: AppArmor initialized [ 0.003018] Yama: becoming mindful. [ 0.014872] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes) [ 0.022687] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.024678] Mount-cache hash table entries: 256 [ 0.031127] Initializing cgroup subsys ns [ 0.031417] Initializing cgroup subsys cpuacct [ 0.031613] Initializing cgroup subsys memory [ 0.032120] Initializing cgroup subsys devices [ 0.032310] Initializing cgroup subsys freezer [ 0.032471] Initializing cgroup subsys net_cls [ 0.033644] mce: CPU supports 10 MCE banks [ 0.034418] Performance Events: AMD PMU driver. [ 0.034828] ... version: 0 [ 0.034980] ... bit width: 48 [ 0.035121] ... generic registers: 4 [ 0.035292] ... value mask: 0000ffffffffffff [ 0.035452] ... max period: 00007fffffffffff [ 0.035619] ... fixed-purpose events: 0 [ 0.035744] ... event mask: 000000000000000f [ 0.036343] SMP alternatives: switching to UP code [ 0.177611] Freeing SMP alternatives: 24k freed [ 0.178327] ACPI: Core revision 20100428 [ 0.202928] ftrace: converting mcount calls to 0f 1f 44 00 00 [ 0.203156] ftrace: allocating 23035 entries in 91 pages [ 0.224733] Setting APIC routing to flat [ 0.226910] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 0.327548] CPU0: AMD QEMU Virtual CPU version 1.0 stepping 03 [ 0.330000] Brought up 1 CPUs [ 0.330000] Total of 1 processors activated (6190.65 BogoMIPS). [ 0.330000] devtmpfs: initialized [ 0.368652] regulator: core version 0.5 [ 0.369026] Time: 8:47:15 Date: 07/18/12 [ 0.369934] NET: Registered protocol family 16 [ 0.373786] ACPI: bus type pci registered [ 0.375198] PCI: Using configuration type 1 for base access [ 0.384489] bio: create slab at 0 [ 0.416619] ACPI: Interpreter enabled [ 0.416776] ACPI: (supports S0 S3 S4 S5) [ 0.417457] ACPI: Using IOAPIC for interrupt routing [ 0.458819] ACPI: No dock devices found. [ 0.459015] PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug [ 0.460603] ACPI: PCI Root Bridge [PCI0](domain 0000 [bus 00-ff]) [ 0.463762] pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI [ 0.464023] pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB [ 0.527947] ACPI: PCI Interrupt Link [LNKA](IRQs 5 10 11) [ 0.528991] ACPI: PCI Interrupt Link [LNKB](IRQs 5 10 11) [ 0.529821] ACPI: PCI Interrupt Link [LNKC](IRQs 5 10 11) [ 0.530714] ACPI: PCI Interrupt Link [LNKD](IRQs 5 10 11) [ 0.531541] ACPI: PCI Interrupt Link [LNKS](IRQs 9) 0 [ 0.532005] HEST: Table is not found! 
[ 0.534166] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none [ 0.534462] vgaarb: loaded [ 0.536145] SCSI subsystem initialized [ 0.537342] usbcore: registered new interface driver usbfs [ 0.537722] usbcore: registered new interface driver hub [ 0.538154] usbcore: registered new device driver usb [ 0.539841] ACPI: WMI: Mapper loaded [ 0.540039] PCI: Using ACPI for IRQ routing [ 0.544095] NetLabel: Initializing [ 0.544226] NetLabel: domain hash size = 128 [ 0.544371] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.545212] NetLabel: unlabeled traffic allowed by default [ 0.546056] Switching to clocksource tsc [ 0.615235] AppArmor: AppArmor Filesystem Enabled [ 0.615778] pnp: PnP ACPI init [ 0.616015] ACPI: bus type pnp registered [ 0.624584] pnp: PnP ACPI: found 8 devices [ 0.624776] ACPI: ACPI bus type pnp unregistered [ 0.643552] NET: Registered protocol family 2 [ 0.645367] IP route cache hash table entries: 65536 (order: 7, 524288 bytes) [ 0.651411] TCP established hash table entries: 262144 (order: 10, 4194304 bytes) [ 0.656450] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.657766] TCP: Hash tables configured (established 262144 bind 65536) [ 0.658027] TCP reno registered [ 0.658224] UDP hash table entries: 1024 (order: 3, 32768 bytes) [ 0.658587] UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes) [ 0.659661] NET: Registered protocol family 1 [ 0.660019] pci 0000:00:00.0: Limiting direct PCI/PCI transfers [ 0.660265] pci 0000:00:01.0: PIIX3: Enabling Passive Release [ 0.660600] pci 0000:00:01.0: Activating ISA DMA hang workarounds [ 0.663226] Scanning for low memory corruption every 60 seconds [ 0.665516] audit: initializing netlink socket (disabled) [ 0.666070] type=2000 audit(1342601235.660:1): initialized [ 0.701407] Trying to unpack rootfs image as initramfs... [ 0.704679] rootfs image is not initramfs (junk in compressed archive); looks like an initrd [ 0.707404] Freeing initrd memory: 96k freed [ 0.709590] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.720406] VFS: Disk quotas dquot_6.5.2 [ 0.720917] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.726650] fuse init (API version 7.14) [ 0.727572] msgmni has been set to 4012 [ 0.733279] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.733747] io scheduler noop registered [ 0.733915] io scheduler deadline registered (default) [ 0.734367] io scheduler cfq registered [ 0.735450] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.736195] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.739199] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 0.739783] ACPI: Power Button [PWRF] [ 0.758245] ERST: Table is not found! 
[ 0.760471] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 [ 0.760808] virtio-pci 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11 [ 0.762552] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 10 [ 0.762782] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10 [ 0.764163] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10 [ 0.764365] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10 [ 0.765615] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11 [ 0.765819] virtio-pci 0000:00:06.0: PCI INT A -> Link[LNKB] -> GSI 11 (level, high) -> IRQ 11 [ 0.767417] hpet_acpi_add: no address or irqs in _CRS [ 0.767683] Linux agpgart interface v0.103 [ 0.768083] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 0.769314] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.769947] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 0.772200] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.772851] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 0.782263] brd: module loaded [ 0.786463] loop: module loaded [ 0.790835] vda: unknown partition table [ 0.799364] vdb: unknown partition table [ 0.805512] scsi0 : ata_piix [ 0.806627] scsi1 : ata_piix [ 0.807128] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0e0 irq 14 [ 0.807371] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0e8 irq 15 [ 0.810475] Fixed MDIO Bus: probed [ 0.811020] PPP generic driver version 2.4.2 [ 0.811523] tun: Universal TUN/TAP device driver, 1.6 [ 0.811696] tun: (C) 1999-2004 Max Krasnyansky maxk@qualcomm.com [ 0.816094] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.816454] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.816765] uhci_hcd: USB Universal Host Controller Interface driver [ 0.817337] uhci_hcd 0000:00:01.2: PCI INT D -> Link[LNKD] -> GSI 10 (level, high) -> IRQ 10 [ 0.817772] uhci_hcd 0000:00:01.2: UHCI Host Controller [ 0.818470] uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 [ 0.819062] uhci_hcd 0000:00:01.2: irq 10, io base 0x0000c080 [ 0.823724] hub 1-0:1.0: USB hub found [ 0.824139] hub 1-0:1.0: 2 ports detected [ 0.826195] PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 [ 0.828010] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.828334] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.829189] mice: PS/2 mouse device common for all mice [ 0.830169] rtc_cmos 00:01: RTC can wake from S4 [ 0.830867] rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0 [ 0.831336] rtc0: alarms up to one day, 114 bytes nvram [ 0.832224] device-mapper: uevent: version 1.0.3 [ 0.834768] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 [ 0.838779] device-mapper: ioctl: 4.17.0-ioctl (2010-03-05) initialised: dm-devel@redhat.com [ 0.839480] device-mapper: multipath: version 1.1.1 loaded [ 0.839759] device-mapper: multipath round-robin: version 1.0.0 loaded [ 0.841386] cpuidle: using governor ladder [ 0.841583] cpuidle: using governor menu [ 0.843981] TCP cubic registered [ 0.845018] NET: Registered protocol family 10 [ 0.849484] lo: Disabled Privacy Extensions [ 0.852406] NET: Registered protocol family 17 [ 0.852962] powernow-k8: Processor cpuid 623 not supported [ 0.855334] registered taskstats version 1 [ 0.856545] Magic number: 4:441:776 [ 0.857331] rtc_cmos 00:01: setting system clock to 2012-07-18 08:47:16 UTC (1342601236) [ 0.857637] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.857876] EDD information not available. 
[ 0.995454] md: Waiting for all devices to be available before autodetect [ 0.995699] md: If you don't use raid, use raid=noautodetect [ 0.997441] md: Autodetecting RAID arrays. [ 0.997600] md: Scanned 0 and added 0 devices. [ 0.997762] md: autorun ... [ 0.997900] md: ... autorun DONE. [ 0.999263] RAMDISK: Couldn't find valid RAM disk image starting at 0. [ 1.028570] VFS: Mounted root (ext2 filesystem) readonly on device 252:0. [ 1.049498] devtmpfs: mounted [ 1.050054] Freeing unused kernel memory: 828k freed [ 1.068300] Write protecting the kernel read-only data: 10240k [ 1.069752] Freeing unused kernel memory: 308k freed [ 1.071071] Freeing unused kernel memory: 1620k freed [ 1.143697] usb 1-1: new full speed USB device using uhci_hcd and address 2

init started: BusyBox v1.17.2 (2010-10-17 16:10:18 MST) stty: /dev/console

[1;31mttylinux 12.1[0;39m [1;34m > [1;36mhttp://ttylinux.org/[0;39m [1;34m > [1;37mhostname: ttylinux_host[0;39m

load Kernel Module: acpiphp [ [1;32mOK[0;39m ] load Kernel Module: e1000 [ [1;32mOK[0;39m ] load Kernel Module: ne2k-pci [ [1;32mOK[0;39m ] load Kernel Module: 8139cp [ [1;32mOK[0;39m ] load Kernel Module: pcnet32 [ [1;32mOK[0;39m ] load Kernel Module: mii [ [1;32mOK[0;39m ] load Kernel Module: ip_tables [ [1;32mOK[0;39m ] file systems checked [ [1;32mOK[0;39m ] mounting local file systems [ [1;32mOK[0;39m ] setting up system clock [utc] Wed Jul 18 08:47:17 UTC 2012 [ [1;32mOK[0;39m ] stty: /dev/console stty: /dev/console initializing random number generator [[1;33mWATING[0;39m][-11G[1;34m..[0;39m [ [1;32mOK[0;39m ] stty: /dev/console startup klogd [ [1;32mOK[0;39m ] startup syslogd [ [1;32mOK[0;39m ] stty: /dev/console stty: /dev/console bringing up loopback interface lo [ [1;32mOK[0;39m ] stty: /dev/console udhcpc (v1.17.2) started Sending discover... Sending discover... Sending discover... No lease, forking to background starting DHCP forEthernet interface eth0 [ [1;32mOK[0;39m ] cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 1/30: up 14.49. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 2/30: up 15.58. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 3/30: up 16.66. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 4/30: up 17.75. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 5/30: up 18.83. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 6/30: up 19.92. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 7/30: up 21.01. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 8/30: up 22.10. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 9/30: up 23.20. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 10/30: up 24.29. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 11/30: up 25.38. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 12/30: up 26.47. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 13/30: up 27.57. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 14/30: up 28.66. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 15/30: up 29.76. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 16/30: up 30.85. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 17/30: up 31.95. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 18/30: up 33.05. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 19/30: up 34.15. 
request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 20/30: up 35.25. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 21/30: up 36.35. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 22/30: up 37.45. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 23/30: up 38.56. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 24/30: up 39.66. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 25/30: up 40.77. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 26/30: up 41.88. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 27/30: up 42.99. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 28/30: up 44.10. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 29/30: up 45.20. request failed wget: can't connect to remote host (169.254.169.254): Network is unreachable cloud-setup: failed 30/30: up 46.32. request failed cloud-setup: after 30 fails, debugging cloud-setup: running debug (30 tries reached) ############ debug start ##############

/etc/rc.d/init.d/sshd start

stty: /dev/console startup dropbear [ [1;32mOK[0;39m ] route: fscanf

ifconfig -a

eth0 Link encap:Ethernet HWaddr FA:16:3E:69:7A:6B
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:15 errors:0 dropped:0 overruns:0 frame:0 TX packets:12 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2342 (2.2 KiB) TX bytes:2164 (2.1 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

route -n

Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface route: fscanf

cat /etc/resolv.conf

cat: can't open '/etc/resolv.conf': No such file or directory

gateway not found

/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory

pinging nameservers

uname -a

Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux

lsmod

Module Size Used by ip_tables 18737 0 x_tables 24391 1 ip_tables pcnet32 36585 0 8139cp 20333 0 mii 5261 2 pcnet32,8139cp ne2k_pci 7802 0 8390 9897 1 ne2k_pci e1000 110274 0 acpiphp 18752 0

dmesg | tail

<6>[ 2.067158] acpiphp: Slot [29] registered <6>[ 2.067312] acpiphp: Slot [30] registered <6>[ 2.067488] acpiphp: Slot [31] registered <6>[ 2.129803] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k6-NAPI <6>[ 2.129832] e1000: Copyright (c) 1999-2006 Intel Corporation. <6>[ 2.183302] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker <6>[ 2.234001] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004) <6>[ 2.278105] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de <6>[ 2.403920] ip_tables: (C) 2000-2006 Netfilter Core Team <7>[ 15.930165] eth0: no IPv6 routers present ### tail -n 25 /var/log/messages Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.064850] acpiphp: Slot [15] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065001] acpiphp: Slot [16] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065174] acpiphp: Slot [17] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065329] acpiphp: Slot [18] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065505] acpiphp: Slot [19] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065658] acpiphp: Slot [20] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065812] acpiphp: Slot [21] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.065965] acpiphp: Slot [22] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066146] acpiphp: Slot [23] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066315] acpiphp: Slot [24] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066503] acpiphp: Slot [25] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066677] acpiphp: Slot [26] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066831] acpiphp: Slot [27] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.066984] acpiphp: Slot [28] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.067158] acpiphp: Slot [29] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.067312] acpiphp: Slot [30] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.067488] acpiphp: Slot [31] registered Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.129803] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k6-NAPI Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.129832] e1000: Copyright (c) 1999-2006 Intel Corporation. Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.183302] ne2k-pci.c:v1.03 9/22/2003 D. Becker/P. Gortmaker Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.234001] 8139cp: 8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004) Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.278105] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de Jul 18 08:47:19 ttylinux_host user.info kernel: [ 2.403920] ip_tables: (C) 2000-2006 Netfilter Core Team Jul 18 08:47:31 ttylinux_host user.debug kernel: [ 15.930165] eth0: no IPv6 routers present Jul 18 08:48:03 ttylinux_host authpriv.info dropbear[251]: Running in background ############ debug end ############## cloud-setup: failed to read iid from metadata. 
stty: /dev/console
sshd is already running.
stty: /dev/console
startup inetd [ OK ]
stty: /dev/console
startup crond [ OK ]
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-userdata: failed to read instance id
===== cloud-final: system completely up in 49.86 seconds ====
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
wget: can't connect to remote host (169.254.169.254): Network is unreachable
instance-id:
public-ipv4:
local-ipv4 :

Narjisse
guanxiaohua2k6 commented 12 years ago

OK. The important message is the one below.

udhcpc (v1.17.2) started
Sending discover...
Sending discover...
Sending discover...
No lease, forking to background

So the instance didn't get a private IP from the DHCP server on the nova-network node. First, could you confirm that the dnsmasq process is running on the nova-network node? Then please paste the output of ifconfig from both nodes.
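
For example (a minimal sketch; these are generic commands, and the exact interface names will differ on your machines):

# on the nova-network node: check the dnsmasq processes spawned for nova-br100
ps aux | grep dnsmasq
# on both nodes: capture the interface configuration to paste here
ifconfig -a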

Xiaohua

guanxiaohua2k6 commented 12 years ago

Maybe that's the problem:

service dnsmasq status

Narjisse

guanxiaohua2k6 commented 12 years ago

You can confirm the dnsmasq processes with the command "ps aux | grep dns".

Xiaohua

guanxiaohua2k6 commented 12 years ago

Here's the output:

ps aux | grep dns
root     17593  0.0  0.0   9376   940 pts/13   S+   11:48   0:00 grep --color=auto dns
nobody   27696  0.0  0.0  28812  1088 ?        S    Jul16   0:08 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.35.17.225 --except-interface=lo --dhcp-range=10.35.17.227,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root     27697  0.0  0.0  28784   436 ?        S    Jul16   0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.35.17.225 --except-interface=lo --dhcp-range=10.35.17.227,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro

Narjisse

guanxiaohua2k6 commented 12 years ago

Maybe you can kill all the dnsmasq processes, then restart nova-network, and then start a new instance.
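
A rough sketch of those steps (assuming the standard Ubuntu Essex packaging, where nova-network runs as a service of that name):

sudo pkill dnsmasq                  # kill every leftover dnsmasq process
sudo service nova-network restart   # nova-network respawns dnsmasq for nova-br100
ps aux | grep dnsmasq               # confirm the new processes are listening on the bridge address
# then boot a new test instance and check its console log again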

Xiaohua

guanxiaohua2k6 commented 12 years ago

Now instance launch fails with a network error:

ProcessExecutionError: Unexpected error while running command.
2012-07-18 12:11:29 TRACE nova.rpc.amqp Command: sudo nova-rootwrap FLAGFILE=/etc/nova/nova.conf NETWORK_ID=3 dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.22.1 --except-interface=lo --dhcp-range=192.168.22.3,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
2012-07-18 12:11:29 TRACE nova.rpc.amqp Exit code: 2
2012-07-18 12:11:29 TRACE nova.rpc.amqp Stdout: ''

And I can't add any other network:

nova-manage network create --fixed_range_v4=172.168.22.0/28 --label=my_network
Subnet(s) too large, defaulting to /29.  To override, specify network_size flag.
2012-07-18 12:18:26 DEBUG nova.utils [req-e1f26343-449e-41ea-b0f3-324973f70703 None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=5765) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
Command failed, please check log for more info
2012-07-18 12:18:26 CRITICAL nova [req-e1f26343-449e-41ea-b0f3-324973f70703 None None] Detected existing vlan with id 100
2012-07-18 12:18:26 TRACE nova Traceback (most recent call last):
2012-07-18 12:18:26 TRACE nova   File "/usr/bin/nova-manage", line 1746, in <module>
2012-07-18 12:18:26 TRACE nova     main()
2012-07-18 12:18:26 TRACE nova   File "/usr/bin/nova-manage", line 1733, in main
2012-07-18 12:18:26 TRACE nova     fn(*fn_args, **fn_kwargs)
2012-07-18 12:18:26 TRACE nova   File "/usr/bin/nova-manage", line 812, in create
2012-07-18 12:18:26 TRACE nova     fixed_cidr=fixed_cidr)
2012-07-18 12:18:26 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1838, in create_networks
2012-07-18 12:18:26 TRACE nova     NetworkManager.create_networks(self, context, vpn=True, **kwargs)
2012-07-18 12:18:26 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1400, in create_networks
2012-07-18 12:18:26 TRACE nova     network = self.db.network_create_safe(context, net)
2012-07-18 12:18:26 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 731, in network_create_safe
2012-07-18 12:18:26 TRACE nova     return IMPL.network_create_safe(context, values)
2012-07-18 12:18:26 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 102, in wrapper
2012-07-18 12:18:26 TRACE nova     return f(*args, **kwargs)
2012-07-18 12:18:26 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 1884, in network_create_safe
2012-07-18 12:18:26 TRACE nova     raise exception.DuplicateVlan(vlan=values['vlan'])
2012-07-18 12:18:26 TRACE nova DuplicateVlan: Detected existing vlan with id 100
2012-07-18 12:18:26 TRACE nova

I get the same VLAN ID in the error for every range I try to add.
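
For reference, DuplicateVlan means a network with VLAN 100 already exists, and nova-manage defaults every new network to vlan_start=100. A minimal sketch of how one might check and work around that with the Essex nova-manage CLI (the --vlan value 101 is only an illustrative choice, not something from this thread):

nova-manage network list                  # shows the existing networks and the VLANs they occupy
nova-manage network create --fixed_range_v4=172.168.22.0/28 --label=my_network --vlan=101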

Narjisse

guanxiaohua2k6 commented 12 years ago

OK. I can't find the reason. Could you try the all-in-one setup first, and then try the multi-node setup again?

Xiaohua

guanxiaohua2k6 commented 12 years ago

I already have an all-in-one OpenStack install (built from scratch, without dodai-deploy or Puppet), and it's running OK. What I want is to get past the networking issues between multiple compute nodes.

Anyway, I'm going to try the all-in-one install as you advised, just to see if everything works fine. Do you recommend any specific cleanup after running the teardown.sh script?

Narjisse

guanxiaohua2k6 commented 12 years ago

OK. Once you have the all-in-one setup working, you can try uninstalling nova and glance and then reinstalling them.

Xiaohua

guanxiaohua2k6 commented 12 years ago

After uninstalling the nova proposal, changing the range, reinstalling it, testing, adding a floating range, and creating a new instance from the "mybucket" template, I can finally access the network correctly over ssh and ping, even for instances on the second node.
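
For anyone following along, a rough sketch of those last steps (the floating range and addresses below are illustrative values, not the ones actually used here; double-check the flag names against nova-manage floating create --help on your install):

nova-manage floating create --ip_range=192.168.22.128/28   # add a floating IP range
euca-allocate-address                                      # reserve one floating IP
euca-associate-address -i i-00000002 192.168.22.130        # attach it to the new instance
ping 192.168.22.130
ssh root@192.168.22.130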

Thanks a lot for your help and for this great tool.

Narjisse

guanxiaohua2k6 commented 12 years ago

You are welcome.

Xiaohua

sepulworld commented 12 years ago

Hi Xiaohua,

I am running a 2-node setup. Should nova-network be running on just one server? I have an issue where all VMs scheduled to the 2nd nova-compute node aren't receiving their static or floating IPs after booting up. VNC to them works fine; it is just the IP leasing that appears not to be working.

Binary            Host         Zone   Status    State   Updated_At
nova-compute      openstack-1  nova   enabled   :-)     2012-08-22 17:31:07
nova-compute      openstack-2  nova   enabled   :-)     2012-08-22 17:31:14
nova-consoleauth  openstack-1  nova   enabled   :-)     2012-08-22 17:31:06
nova-cert         openstack-1  nova   enabled   :-)     2012-08-22 17:31:07
nova-volume       openstack-1  nova   enabled   :-)     2012-08-22 17:31:07
nova-network      openstack-1  nova   enabled   :-)     2012-08-22 17:31:07
nova-scheduler    openstack-1  nova   enabled   :-)     2012-08-22 17:31:07

Dnsmasq on openstack-1:

dnsmasq   1514  0.0  0.0  28812  960  ?  S  10:02  0:00 /usr/sbin/dnsmasq -x /var/run/dnsmasq/dnsmasq.pid -u dnsmasq -r /var/run/dnsmasq/resolv.conf -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
113       1772  0.0  0.0  25964  948  ?  S  10:02  0:00 /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override
nobody    2344  0.0  0.0  28812  992  ?  S  10:02  0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.101.1 --except-interface=lo --dhcp-range=192.168.101.3,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root      2345  0.0  0.0  28784  452  ?  S  10:02  0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.101.1 --except-interface=lo --dhcp-range=192.168.101.3,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro

and dnsmasq on openstack-2:

root@openstack-2:~# ps aux | grep dns
110       3278  0.0  0.0  25964  952  ?  S  Aug20  0:01 /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override

Not sure where this 192.168.122.1 network is coming from. I have 192.168.101.0/24 set up in my proposal.

Any suggestions would be appreciated.
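
Side note: the 192.168.122.x dnsmasq in the listings above belongs to libvirt's built-in "default" NAT network (visible from the libvirt-dnsmasq user and the /var/run/libvirt/network/default.pid pid file), not to nova, so it is unrelated to the proposal's 192.168.101.0/24 range. It can be inspected with virsh, for example:

virsh net-list --all          # lists libvirt's "default" network
virsh net-dumpxml default     # shows its 192.168.122.0/24 definition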

sepulworld commented 12 years ago

Hi Xiaohua,

Never mind, I figured it out. The switch uplinks needed to have the nova-network VLAN configured; in this case, VLAN 100 needed to be tagged.
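
In case it helps others hitting the same symptom, a quick way to verify that the tagged VLAN traffic actually reaches the second compute node (the vlan100/br100 names are how nova's VlanManager typically names its interfaces; adjust if yours differ):

ip link show vlan100                          # the VLAN sub-interface created on top of the physical NIC
brctl show                                    # br100 should contain vlan100 plus the instances' vnet/tap devices
sudo tcpdump -ni vlan100 port 67 or port 68   # DHCP requests from booting instances should show up here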