Closed guanxiaohua2k6 closed 12 years ago
Hi,
Could you confirm the puppet manifests on the dodai-deploy server with the following command?
ls -la /etc/puppet/modules
The glance folder should be there. Also, please confirm the contents of /var/log/puppet/*.yml.
Xiaohua.
The glance modules are present, and in the /var/log/puppet/ folder I can see two yml files named after my nodes.
Narjisse
Could you paste the contents of the yml files in the /var/log/puppet/ folder?
Xiaohua
In /var/log/puppet/puppet_node1.example.com.yml:
{
  "parameters": {
    "nova_api_fqdn": [ "node1.example.com" ],
    "nova_network_fqdn": [ "node1.example.com" ],
    "libvirt_type": "kvm",
    "dashboard": [ "127.0.1.1" ],
    "nova_network": [ "127.0.1.1" ],
    "admin_user": "admin",
    "self_host_fqdn": "node1.example.com",
    "admin_password": "admin",
    "nova_scheduler": [ "127.0.1.1" ],
    "nova_scheduler_fqdn": [ "node1.example.com" ],
    "self_host": "127.0.1.1",
    "nova_objectstore_fqdn": [ "node1.example.com" ],
    "admin_tenant_name": "admin",
    "nova_compute": [ "127.0.1.1" ],
    "mysql_fqdn": [ "node1.example.com" ],
    "nova_volume_fqdn": [ "node1.example.com" ],
    "glance": "node1.example.com",  # this is because I reinstalled glance on node1 to test nova; before, this was set to node2
    "nova_objectstore": [ "127.0.1.1" ],
    "dashboard_fqdn": [ "node1.example.com" ],
    "novnc": [ "127.0.1.1" ],
    "mysql": [ "127.0.1.1" ],
    "nova_cert_fqdn": [ "node1.example.com" ],
    "nova_cert": [ "127.0.1.1" ],
    "rabbitmq_fqdn": [ "node1.example.com" ],
    "nova_volume": [ "127.0.1.1" ],
    "rabbitmq": [ "127.0.1.1" ],
    "network_ip_range": "192.168.10.22/24",
    "nova_compute_fqdn": [ "node1.example.com" ],
    "novnc_fqdn": [ "node1.example.com" ],
    "nova_api": [ "127.0.1.1" ],
    "proposal_id": "3",
    "keystone": "node1.example.com"
  },
  "classes": [ "nova_e", "nova_e::nova_api::test" ]
}
In puppet_node2.example.com.yml:
{
  "classes": [ "nova_e" ],
  "parameters": {
    "proposal_id": "3",
    "nova_objectstore_fqdn": [ "node1.example.com" ],
    "rabbitmq": [ "127.0.1.1" ],
    "libvirt_type": "kvm",
    "nova_scheduler": [ "127.0.1.1" ],
    "nova_compute_fqdn": [ "node1.example.com" ],
    "nova_api": [ "node1.example.com_ip" ],
    "admin_tenant_name": "admin",
    "nova_cert_fqdn": [ "node1.example.com" ],
    "novnc_fqdn": [ "node1.example.com" ],
    "novnc": [ "127.0.1.1" ],
    "glance": "node2.example.com",
    "network_ip_range": "10.0.0.3/28",
    "nova_api_fqdn": [ "node2.example.com" ],
    "nova_compute": [ "127.0.1.1" ],
    "dashboard_fqdn": [ "node2.example.com" ],
    "nova_scheduler_fqdn": [ "node1.example.com" ],
    "nova_volume_fqdn": [ "node1.example.com" ],
    "dashboard": [ "node1.example.com_ip" ],
    "mysql": [ "node1.example.com_ip" ],
    "rabbitmq_fqdn": [ "node1.example.com" ],
    "keystone": "node1.example.com",
    "nova_network": [ "127.0.1.1" ],
    "self_host_fqdn": "node2.example.com",
    "nova_volume": [ "127.0.1.1" ],
    "mysql_fqdn": [ "node2.example.com" ],
    "admin_password": "admin",
    "self_host": "node1.example.com_ip",
    "nova_cert": [ "127.0.1.1" ],
    "nova_network_fqdn": [ "node1.example.com" ],
    "admin_user": "admin",
    "nova_objectstore": [ "127.0.1.1" ]
  }
}
Narjisse
Hi,
I'm not sure whether this is the cause. Could you modify your /etc/hosts to change 127.0.1.1 to the actual IP address? After that, try again.
BTW, the timezone here is UTC+9, so I may not be able to reply during your daytime.
Moreover, because I want to manage issues about dodai-deploy via GitHub, I will copy the mail contents to GitHub.
Xiaohua.
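For readers hitting the same symptom, a sketch of the kind of /etc/hosts entries this suggestion implies — node1_ip and node2_ip are placeholders from this thread for the nodes' real addresses, not literal values:

```
127.0.0.1   localhost
node1_ip    node1.example.com node1 puppet
node2_ip    node2.example.com node2
```

The key point is that each node's own FQDN resolves to an address the other node can reach, not to 127.0.1.1.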
You were right; this configuration worked for me:
node2 /etc/hosts:
127.0.0.1 localhost
127.0.1.1 node2.example.com node2
node1_ip node1.example.com node1 puppet
node2_ip node2.example.com node2
node1 /etc/hosts:
127.0.0.1 localhost
node1_ip node1.example.com node1 puppet puppet.example.com
node2_ip node2.example.com node2
Now I can install proposals on both nodes.
Thanks a lot for your help.
Narjisse
Up to now, the install is as follows:
node1: keystone, nova-* (including nova-compute)
node2: glance, nova-compute
My goal is to have a two-node deployment of Essex with two compute nodes and a Swift implementation.
For now the install is good: nova, keystone, and glance respond well on the corresponding servers, but I can't access the instances via ssh or VNC. This might be more of a network problem, but I still wanted to check whether the architecture I'm using is causing it.
1 - Do you have any idea what might cause this network issue?
2 - Can I use your loopback Swift implementation script to use a partition on the same disk, say /dev/sda, instead of /dev/sdb? (I only have one accessible disk on my machines due to the RAID configuration.)
Thanks for your time and your patience.
Narjisse
hi,
As to problem 1, have you confirmed that the status of the instance became "running"? If it did, please try to access the instance from the nova-network node. BTW, could you ping the instance? And what image were you using?
As to problem 2, it's ok to use /dev/sda.
Xiaohua.
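For reference, the usual loopback trick behind such scripts — a hypothetical sketch, not the dodai-deploy script itself; the path /srv/swift-disk, the 2 GB size, and the mount point are assumptions:

```shell
# Create a sparse backing file on the existing disk (no dedicated /dev/sdb needed).
dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=$((1024 * 1024 * 2))  # ~2 GB sparse
# Format the backing file as XFS, which Swift's object server commonly uses.
mkfs.xfs -f /srv/swift-disk
# Loop-mount it where Swift expects a disk partition.
mkdir -p /mnt/sdb1
mount -o loop,noatime /srv/swift-disk /mnt/sdb1
# Hand it to the Swift service user (assumes the conventional 'swift' account).
chown -R swift:swift /mnt/sdb1
```

Run as root, and add an fstab entry if the mount should survive reboots.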
As for the image, I used an official Ubuntu Precise image on one instance and the provided "mybucket" template on another; on both instances I cannot access the network.
Narjisse
Could you ping the instance?
Xiaohua
The VNC problem was due to Firefox. On Chrome I can access the instances through the web VNC, but I can't ping or ssh to them.
Narjisse
Could you get the console output of the instance? You can get it from the OpenStack dashboard. Could you paste it?
Xiaohua
cloud-setup: after 30 fails, debugging
cloud-setup: running debug (30 tries reached)
############ debug start ##############
stty: /dev/console
startup dropbear [ OK ]
route: fscanf
eth0 Link encap:Ethernet HWaddr FA:16:3E:69:7A:6B
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5040 (4.9 KiB) TX bytes:2190 (2.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
route: fscanf
cat: can't open '/etc/resolv.conf': No such file or directory
/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory
Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux
Module                  Size  Used by
ip_tables              18737  0
x_tables               24391  1 ip_tables
pcnet32                36585  0
8139cp                 20333  0
mii                     5261  2 pcnet32,8139cp
ne2k_pci                7802  0
8390                    9897  1 ne2k_pci
e1000                 110274  0
acpiphp                18752  0
Seems like the instance is not getting any IP address.
Narjisse
Yes. Could you show me your proposal, as a picture or text?
Xiaohua
The only change I made was in nova-init.sh: I removed the range size so I don't get an error at install. Here's the proposal:
Name: nova essex server
Software: openstack essex nova
State: tested
Config items:
network_ip_range 192.168.22.0/29
libvirt_type qemu
admin_tenant_name admin
admin_user admin
admin_password admin
glance node2.example.com
keystone node1.example.com
Node configs:
node2.example.com: nova_compute
node1.example.com: dashboard mysql nova_api nova_cert nova_compute nova_network nova_objectstore nova_scheduler nova_volume novnc rabbitmq
Component configs:
dashboard /etc/apache2/conf.d/openstack-dashboard.conf
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
Order allow,deny
Allow from all
</Directory>
/etc/openstack-dashboard/local_settings.py
import os
from django.utils.translation import ugettext_lazy as _
DEBUG = True TEMPLATE_DEBUG = DEBUG PROD = False USE_SSL = False
SECRET_KEY = 'elj1IWiLoWHgcyYxFVLj7cM5rGOOxWl0'
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "<%= keystone %>" OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
#
OPENSTACK_KEYSTONE_BACKEND = { 'name': 'native', 'can_edit_user': True }
API_RESULT_LIMIT = 1000
LOGGING = { 'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'propagate': False,
}
}
}
nova_compute /etc/nova/nova-compute.conf
--libvirt_type=<%= libvirt_type %>
--vncserver_proxyclient_address=<%= self_host %>
--vncserver_listen=<%= self_host %>
Software configs /etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--allow_admin_api=true
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=<%= nova_objectstore %>
--ec2_host=<%= nova_api %>
--rabbit_host=<%= rabbitmq %>
--cc_host=<%= nova_api %>
--nova_url=http://<%= nova_api %>:8774/v1.1/
--routing_source_ip=<%= nova_api %>
--glance_api_servers=<%= glance %>:9292
--image_service=nova.image.glance.GlanceImageService
--iscsi_ip_prefix=192.168.22
--sql_connection=mysql://root:nova@<%= mysql %>/nova
--ec2_url=http://<%= nova_api %>:8773/services/Cloud
--keystone_ec2_url=http://<%= keystone %>:5000/v2.0/ec2tokens
--api_paste_config=/etc/nova/api-paste.ini
--libvirt_type=<%= libvirt_type %>
--libvirt_use_virtio_for_bridges=true
--start_guests_on_host_boot=true
--resume_guests_state_on_host_boot=true
--vnc_enabled=true
--novncproxy_base_url=http://<%= nova_api %>:6080/vnc_auto.html
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--flat_interface=eth0
--vlan_interface=eth0
--flat_network_bridge=br100
--fixed_range=192.168.22.32/27
--floating_range=10.35.17.240/30
--network_size=8
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose
/etc/nova/api-paste.ini
############
############
[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15: meta
/2008-02-01: meta
/2008-09-01: meta
/2009-04-04: meta

[pipeline:metaversions]
pipeline = ec2faultwrap logrequest metaverapp

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaverapp]
paste.app_factory = nova.api.metadata.handler:Versions.factory

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
#######

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
deprecated = ec2faultwrap logrequest authenticate cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:totoken]
paste.filter_factory = nova.api.ec2:EC2Token.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2

[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_compute_app_v2
deprecated = faultwrap auth ratelimit osapi_compute_app_v2
keystone = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_volume_api_v1]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap noauth ratelimit osapi_volume_app_v1
deprecated = faultwrap auth ratelimit osapi_volume_app_v1
keystone = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
keystone_nolimit = faultwrap authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:osapi_volume_app_v1]
paste.app_factory = nova.api.openstack.volume:APIRouter.factory

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = nova.api.openstack.volume.versions:Versions.factory

##########
##########

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = <%= keystone %>
service_port = 5000
auth_host = <%= keystone %>
auth_port = 35357
auth_protocol = http
auth_uri = http://<%= keystone %>:5000/
admin_tenant_name = <%= admin_tenant_name %>
admin_user = <%= admin_user %>
admin_password = <%= admin_password %>
Narjisse
Could you paste the full contents of the console for the instance? The initial part was cut off.
Xiaohua
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 2.6.35-22-virtual (buildd@yellow) (gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5) ) #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 (Ubuntu 2.6.35-22.35-virtual 2.6.35.4)
[ 0.000000] Command line: root=/dev/vda console=ttyS0
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009bc00 (usable)
[ 0.000000] BIOS-e820: 000000000009bc00 - 00000000000a0000 (reserved)
[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] BIOS-e820: 0000000000100000 - 000000007fffd000 (usable)
[ 0.000000] BIOS-e820: 000000007fffd000 - 0000000080000000 (reserved)
[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] DMI 2.4 present.
[ 0.000000] No AGP bridge found
[ 0.000000] last_pfn = 0x7fffd max_arch_pfn = 0x400000000
[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
[ 0.000000] Scanning 1 areas for low memory corruption
[ 0.000000] modified physical RAM map:
[ 0.000000] modified: 0000000000000000 - 0000000000010000 (reserved)
[ 0.000000] modified: 0000000000010000 - 000000000009bc00 (usable)
[ 0.000000] modified: 000000000009bc00 - 00000000000a0000 (reserved)
[ 0.000000] modified: 00000000000f0000 - 0000000000100000 (reserved)
[ 0.000000] modified: 0000000000100000 - 000000007fffd000 (usable)
[ 0.000000] modified: 000000007fffd000 - 0000000080000000 (reserved)
[ 0.000000] modified: 00000000fffc0000 - 0000000100000000 (reserved)
[ 0.000000] found SMP MP-table at [ffff8800000fdae0] fdae0
[ 0.000000] init_memory_mapping: 0000000000000000-000000007fffd000
[ 0.000000] RAMDISK: 7ffd8000 - 7fff0000
[ 0.000000] ACPI: RSDP 00000000000fd980 00014 (v00 BOCHS )
[ 0.000000] ACPI: RSDT 000000007fffd7b0 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: FACP 000000007fffff80 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001)
[ 0.000000] ACPI: DSDT 000000007fffd9b0 02589 (v01 BXPC BXDSDT 00000001 INTL 20100528)
[ 0.000000] ACPI: FACS 000000007fffff40 00040
[ 0.000000] ACPI: SSDT 000000007fffd910 0009E (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001)
[ 0.000000] ACPI: APIC 000000007fffd830 00072 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001)
[ 0.000000] ACPI: HPET 000000007fffd7f0 00038 (v01 BOCHS BXPCHPET 00000001 BXPC 00000001)
[ 0.000000] No NUMA configuration found
[ 0.000000] Faking a node at 0000000000000000-000000007fffd000
[ 0.000000] Initmem setup node 0 0000000000000000-000000007fffd000
[ 0.000000] NODE_DATA [0000000001d1b080 - 0000000001d2007f]
[ 0.000000] Zone PFN ranges:
[ 0.000000] DMA 0x00000010 -> 0x00001000
[ 0.000000] DMA32 0x00001000 -> 0x00100000
[ 0.000000] Normal empty
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[2] active PFN ranges
[ 0.000000] 0: 0x00000010 -> 0x0000009b
[ 0.000000] 0: 0x00000100 -> 0x0007fffd
[ 0.000000] ACPI: PM-Timer IO Port: 0xb008
[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
[ 0.000000] ACPI: IOAPIC (id[0x01] address[0xfec00000] gsi_base[0])
[ 0.000000] IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
[ 0.000000] Using ACPI (MADT) for SMP configuration information
[ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000
[ 0.000000] SMP: Allowing 1 CPUs, 0 hotplug CPUs
[ 0.000000] PM: Registered nosave memory: 000000000009b000 - 000000000009c000
[ 0.000000] PM: Registered nosave memory: 000000000009c000 - 00000000000a0000
[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[ 0.000000] Allocating PCI resources starting at 80000000 (gap: 80000000:7ffc0000)
[ 0.000000] Booting paravirtualized kernel on bare hardware
[ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:1 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 30 pages/cpu @ffff880001e00000 s91520 r8192 d23168 u2097152
[ 0.000000] pcpu-alloc: s91520 r8192 d23168 u2097152 alloc=1*2097152
[ 0.000000] pcpu-alloc: [0] 0
[ 0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 517000
[ 0.000000] Policy zone: DMA32
[ 0.000000] Kernel command line: root=/dev/vda console=ttyS0
[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] Checking aperture...
[ 0.000000] No AGP bridge found
[ 0.000000] Subtract (41 early reservations)
[ 0.000000] #1 [0001000000 - 0001d1a954] TEXT DATA BSS
[ 0.000000] #2 [007ffd8000 - 007fff0000] RAMDISK
[ 0.000000] #3 [0001d1b000 - 0001d1b071] BRK
[ 0.000000] #4 [000009bc00 - 00000fdae0] BIOS reserved
[ 0.000000] #5 [00000fdae0 - 00000fdaf0] MP-table mpf
[ 0.000000] #6 [00000fdbe8 - 0000100000] BIOS reserved
[ 0.000000] #7 [00000fdaf0 - 00000fdbe8] MP-table mpc
[ 0.000000] #8 [0000010000 - 0000012000] TRAMPOLINE
[ 0.000000] #9 [0000012000 - 0000016000] ACPI WAKEUP
[ 0.000000] #10 [0000016000 - 0000018000] PGTABLE
[ 0.000000] #11 [0001d1b080 - 0001d20080] NODE_DATA
[ 0.000000] #12 [0001d20080 - 0001d21080] BOOTMEM
[ 0.000000] #13 [0000018000 - 0000018180] BOOTMEM
[ 0.000000] #14 [0002522000 - 0002523000] BOOTMEM
[ 0.000000] #15 [0002523000 - 0002524000] BOOTMEM
[ 0.000000] #16 [0002600000 - 0004200000] MEMMAP 0
[ 0.000000] #17 [0001d21080 - 0001d39080] BOOTMEM
[ 0.000000] #18 [0001d39080 - 0001d51080] BOOTMEM
[ 0.000000] #19 [0001d52000 - 0001d53000] BOOTMEM
[ 0.000000] #20 [0001d1a980 - 0001d1a9c1] BOOTMEM
[ 0.000000] #21 [0001d1aa00 - 0001d1aa43] BOOTMEM
[ 0.000000] #22 [0001d1aa80 - 0001d1ac08] BOOTMEM
[ 0.000000] #23 [0001d1ac40 - 0001d1aca8] BOOTMEM
[ 0.000000] #24 [0001d1acc0 - 0001d1ad28] BOOTMEM
[ 0.000000] #25 [0001d1ad40 - 0001d1ada8] BOOTMEM
[ 0.000000] #26 [0001d1adc0 - 0001d1ae28] BOOTMEM
[ 0.000000] #27 [0001d1ae40 - 0001d1aea8] BOOTMEM
[ 0.000000] #28 [0001d1aec0 - 0001d1af28] BOOTMEM
[ 0.000000] #29 [0001d1af40 - 0001d1af60] BOOTMEM
[ 0.000000] #30 [0001d1af80 - 0001d1af9c] BOOTMEM
[ 0.000000] #31 [0001d1afc0 - 0001d1afdc] BOOTMEM
[ 0.000000] #32 [0001e00000 - 0001e1e000] BOOTMEM
[ 0.000000] #33 [0001d51080 - 0001d51088] BOOTMEM
[ 0.000000] #34 [0001d510c0 - 0001d510c8] BOOTMEM
[ 0.000000] #35 [0001d51100 - 0001d51104] BOOTMEM
[ 0.000000] #36 [0001d51140 - 0001d51148] BOOTMEM
[ 0.000000] #37 [0001d51180 - 0001d512d0] BOOTMEM
[ 0.000000] #38 [0001d51300 - 0001d51380] BOOTMEM
[ 0.000000] #39 [0001d51380 - 0001d51400] BOOTMEM
[ 0.000000] #40 [0001d53000 - 0001d5b000] BOOTMEM
[ 0.000000] Memory: 2054064k/2097140k available (5816k kernel code, 468k absent, 42608k reserved, 5366k data, 828k init)
[ 0.000000] SLUB: Genslabs=14, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] Hierarchical RCU implementation.
[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[ 0.000000] RCU-based detection of stalled CPUs is disabled.
[ 0.000000] Verbose stalled-CPUs detection is disabled.
[ 0.000000] NR_IRQS:4352 nr_irqs:256
[ 0.000000] Console: colour VGA+ 80x25
[ 0.000000] console [ttyS0] enabled
[ 0.000000] allocated 20971520 bytes of page_cgroup
[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups
[ 0.000000] Fast TSC calibration using PIT
[ 0.000000] Detected 3095.325 MHz processor.
[ 0.000412] Calibrating delay loop (skipped), value calculated using timer frequency.. 6190.65 BogoMIPS (lpj=30953250)
[ 0.000845] pid_max: default: 32768 minimum: 301
[ 0.001487] Security Framework initialized
[ 0.002885] AppArmor: AppArmor initialized
[ 0.003018] Yama: becoming mindful.
[ 0.014872] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
[ 0.022687] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
[ 0.024678] Mount-cache hash table entries: 256
[ 0.031127] Initializing cgroup subsys ns
[ 0.031417] Initializing cgroup subsys cpuacct
[ 0.031613] Initializing cgroup subsys memory
[ 0.032120] Initializing cgroup subsys devices
[ 0.032310] Initializing cgroup subsys freezer
[ 0.032471] Initializing cgroup subsys net_cls
[ 0.033644] mce: CPU supports 10 MCE banks
[ 0.034418] Performance Events: AMD PMU driver.
[ 0.034828] ... version: 0
[ 0.034980] ... bit width: 48
[ 0.035121] ... generic registers: 4
[ 0.035292] ... value mask: 0000ffffffffffff
[ 0.035452] ... max period: 00007fffffffffff
[ 0.035619] ... fixed-purpose events: 0
[ 0.035744] ... event mask: 000000000000000f
[ 0.036343] SMP alternatives: switching to UP code
[ 0.177611] Freeing SMP alternatives: 24k freed
[ 0.178327] ACPI: Core revision 20100428
[ 0.202928] ftrace: converting mcount calls to 0f 1f 44 00 00
[ 0.203156] ftrace: allocating 23035 entries in 91 pages
[ 0.224733] Setting APIC routing to flat
[ 0.226910] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.327548] CPU0: AMD QEMU Virtual CPU version 1.0 stepping 03
[ 0.330000] Brought up 1 CPUs
[ 0.330000] Total of 1 processors activated (6190.65 BogoMIPS).
[ 0.330000] devtmpfs: initialized
[ 0.368652] regulator: core version 0.5
[ 0.369026] Time: 8:47:15 Date: 07/18/12
[ 0.369934] NET: Registered protocol family 16
[ 0.373786] ACPI: bus type pci registered
[ 0.375198] PCI: Using configuration type 1 for base access
[ 0.384489] bio: create slab
init started: BusyBox v1.17.2 (2010-10-17 16:10:18 MST)
stty: /dev/console
ttylinux 12.1 > http://ttylinux.org/ > hostname: ttylinux_host
load Kernel Module: acpiphp [ OK ]
load Kernel Module: e1000 [ OK ]
load Kernel Module: ne2k-pci [ OK ]
load Kernel Module: 8139cp [ OK ]
load Kernel Module: pcnet32 [ OK ]
load Kernel Module: mii [ OK ]
load Kernel Module: ip_tables [ OK ]
file systems checked [ OK ]
mounting local file systems [ OK ]
setting up system clock [utc] Wed Jul 18 08:47:17 UTC 2012 [ OK ]
stty: /dev/console
stty: /dev/console
initializing random number generator [WAITING] .. [ OK ]
stty: /dev/console
startup klogd [ OK ]
startup syslogd [ OK ]
stty: /dev/console
stty: /dev/console
bringing up loopback interface lo [ OK ]
stty: /dev/console
udhcpc (v1.17.2) started
Sending discover...
Sending discover...
Sending discover...
No lease, forking to background
starting DHCP for Ethernet interface eth0 [ OK ]
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 14.49. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 2/30: up 15.58. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 3/30: up 16.66. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 4/30: up 17.75. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 5/30: up 18.83. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 6/30: up 19.92. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 7/30: up 21.01.
request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 8/30: up 22.10. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 9/30: up 23.20. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 10/30: up 24.29. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 11/30: up 25.38. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 12/30: up 26.47. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 13/30: up 27.57. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 14/30: up 28.66. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 15/30: up 29.76. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 16/30: up 30.85. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 17/30: up 31.95. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 18/30: up 33.05. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 19/30: up 34.15. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 20/30: up 35.25. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 21/30: up 36.35. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 22/30: up 37.45. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 23/30: up 38.56. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 24/30: up 39.66. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 25/30: up 40.77. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 26/30: up 41.88. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 27/30: up 42.99. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 28/30: up 44.10. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 29/30: up 45.20. request failed
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 30/30: up 46.32. request failed
cloud-setup: after 30 fails, debugging
cloud-setup: running debug (30 tries reached)
############ debug start ##############
stty: /dev/console
startup dropbear [ OK ]
route: fscanf
eth0 Link encap:Ethernet HWaddr FA:16:3E:69:7A:6B
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2342 (2.2 KiB) TX bytes:2164 (2.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
route: fscanf
cat: can't open '/etc/resolv.conf': No such file or directory
/etc/rc.d/init.d/cloud-functions: line 41: /etc/resolv.conf: No such file or directory
Linux ttylinux_host 2.6.35-22-virtual #35-Ubuntu SMP Sat Oct 16 23:19:29 UTC 2010 x86_64 GNU/Linux
Module                  Size  Used by
ip_tables              18737  0
x_tables               24391  1 ip_tables
pcnet32                36585  0
8139cp                 20333  0
mii                     5261  2 pcnet32,8139cp
ne2k_pci                7802  0
8390                    9897  1 ne2k_pci
e1000                 110274  0
acpiphp                18752  0
OK. There is the message below.
udhcpc (v1.17.2) started
Sending discover...
Sending discover...
Sending discover...
No lease, forking to background
So the instance didn't get a private IP from the DHCP server on the nova-network node. First, could you confirm the dnsmasq process on the nova-network node? Then paste the output of the ifconfig command from both nodes.
Xiaohua
Maybe that's the problem:
service dnsmasq status
Narjisse
You can confirm the dnsmasq process with the command "ps aux | grep dns".
Xiaohua
here's the output:
ps aux | grep dns
root     17593  0.0  0.0   9376   940 pts/13  S+  11:48  0:00 grep --color=auto dns
nobody   27696  0.0  0.0  28812  1088 ?       S   Jul16  0:08 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.35.17.225 --except-interface=lo --dhcp-range=10.35.17.227,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root     27697  0.0  0.0  28784   436 ?       S   Jul16  0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=10.35.17.225 --except-interface=lo --dhcp-range=10.35.17.227,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Narjisse
Maybe you can kill all the dnsmasq processes, then restart nova-network, and then start a new instance.
Xiaohua
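Xiaohua's suggestion amounts to clearing the stale dnsmasq instances and letting nova-network respawn them; on the node this would be roughly `sudo pkill dnsmasq` followed by `sudo service nova-network restart` (service name assumed from the Ubuntu/Essex packaging). The kill-and-verify pattern itself can be sketched with a harmless stand-in process:

```shell
# Kill-and-verify sketch, demonstrated with a dummy process standing in for
# dnsmasq (the real node needs root and the real service, so this shows the
# pattern only, not the literal fix).
sleep 300 &                            # stand-in for a stale dnsmasq daemon
dummy_pid=$!
pkill -f "sleep 300"                   # kill every process matching that command line
wait "$dummy_pid" 2>/dev/null || true  # reap it so the liveness check is accurate
if kill -0 "$dummy_pid" 2>/dev/null; then
    echo "still running"
else
    echo "killed"
fi
# On the real node, roughly:
#   sudo pkill dnsmasq
#   sudo service nova-network restart
```

After the restart, nova-network should start fresh dnsmasq processes bound to the nova bridge before a new instance is booted.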
Now the instance launch fails with a network error:
ProcessExecutionError: Unexpected error while running command.
2012-07-18 12:11:29 TRACE nova.rpc.amqp Command: sudo nova-rootwrap FLAGFILE=/etc/nova/nova.conf NETWORK_ID=3 dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.22.1 --except-interface=lo --dhcp-range=192.168.22.3,static,120s --dhcp-lease-max=8 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
2012-07-18 12:11:29 TRACE nova.rpc.amqp Exit code: 2
2012-07-18 12:11:29 TRACE nova.rpc.amqp Stdout: ''
And I can't add any other network:
nova-manage network create --fixed_range_v4=172.168.22.0/28 --label=my_network
Subnet(s) too large, defaulting to /29. To override, specify network_size flag.
2012-07-18 12:18:26 DEBUG nova.utils [req-e1f26343-449e-41ea-b0f3-324973f70703 None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=5765) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
Command failed, please check log for more info
2012-07-18 12:18:26 CRITICAL nova [req-e1f26343-449e-41ea-b0f3-324973f70703 None None] Detected existing vlan with id 100
2012-07-18 12:18:26 TRACE nova Traceback (most recent call last):
2012-07-18 12:18:26 TRACE nova File "/usr/bin/nova-manage", line 1746, in
I get the same VLAN id in the error for every value I try to add.
Narjisse
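Two separate problems show up in the `nova-manage network create` attempt above: the requested /28 does not match nova's default `network_size`, and VlanManager refuses to reuse VLAN 100, which the existing proposal network already occupies. A hedged sketch of addressing both, assuming Essex-era `nova-manage` flag names:

```shell
# A /28 contains 2^(32-28) = 16 addresses, so network_size should be 16.
prefix=28
size=$(( 1 << (32 - prefix) ))
echo "network_size=$size"

# Hypothetical invocation (flag names vary between nova releases):
# nova-manage network create --label=my_network \
#     --fixed_range_v4=172.168.22.0/28 --network_size=16 --vlan=101
```

Picking an unused VLAN id sidesteps the "Detected existing vlan with id 100" failure, since each network under VlanManager needs its own tag.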
OK. I can't find the reason. Could you do an all-in-one install first, then try a multi-node one?
Xiaohua
I already have an all-in-one OpenStack install (from scratch, no dodai-deploy and no puppet), and it's running OK. I want to get past the networking issues between multiple compute nodes.
Anyway, I'm going to try the all-in-one install as you advised, just to see if everything works fine. Do you recommend any specific cleanup after using the teardown.sh script?
Narjisse
OK, if you have done all-in-one, you can try to uninstall nova and glance, then reinstall them.
Xiaohua
After uninstalling the nova proposal, changing the range, reinstalling it, testing, adding a floating range, and creating a new instance from the "mybucket" template, I can finally access the network correctly over ssh and ping, even with instances on the second node.
Thanks a lot for your help and for this great tool.
Narjisse
You are welcome.
Xiaohua
Hi Xiaohua,
I am running a 2 node setup. Should nova-network be running on just 1 server? I have an issue where all VMs going to the 2nd nova-compute node aren't receiving their static or floating IPs after booting up. VNC to them works fine, it is just the IP leasing that appears not to be working.
Binary           Host         Zone  Status   State  Updated_At
nova-compute     openstack-1  nova  enabled  :-)    2012-08-22 17:31:07
nova-compute     openstack-2  nova  enabled  :-)    2012-08-22 17:31:14
nova-consoleauth openstack-1  nova  enabled  :-)    2012-08-22 17:31:06
nova-cert        openstack-1  nova  enabled  :-)    2012-08-22 17:31:07
nova-volume      openstack-1  nova  enabled  :-)    2012-08-22 17:31:07
nova-network     openstack-1  nova  enabled  :-)    2012-08-22 17:31:07
nova-scheduler   openstack-1  nova  enabled  :-)    2012-08-22 17:31:07
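In the default single-host layout this service list is expected: nova-network (and the dnsmasq it spawns) runs only on openstack-1, so DHCP requests from VMs on openstack-2 must cross the shared bridge/VLAN to reach it, and leases never arrive if that path is broken. One alternative worth knowing about, assuming Essex-era option names, is multi-host networking, where every compute node runs its own nova-network and answers its own DHCP:

```shell
# nova.conf fragment (assumption: Essex-era option name; nova-network must be
# installed and running on every compute node for this mode to work):
# multi_host=True
```

This is only a sketch of the option, not a drop-in fix; the existing networks would need to be recreated as multi-host as well.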
Dnsmasq on openstack-1:
dnsmasq   1514  0.0  0.0  28812   960 ?  S  10:02  0:00 /usr/sbin/dnsmasq -x /var/run/dnsmasq/dnsmasq.pid -u dnsmasq -r /var/run/dnsmasq/resolv.conf -7 /etc/dnsmasq.d,.dpkg-dist,.dpkg-old,.dpkg-new
113       1772  0.0  0.0  25964   948 ?  S  10:02  0:00 /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override
nobody    2344  0.0  0.0  28812   992 ?  S  10:02  0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.101.1 --except-interface=lo --dhcp-range=192.168.101.3,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root      2345  0.0  0.0  28784   452 ?  S  10:02  0:00 /usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.101.1 --except-interface=lo --dhcp-range=192.168.101.3,static,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
and dnsmasq on openstack-2:
root@openstack-2:~# ps aux | grep dns
110       3278  0.0  0.0  25964   952 ?  S  Aug20  0:01 /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override
Not sure where this 192.168.122.1 network is coming from. I have 192.168.101.0/24 set up in my proposal.
Any suggestions would be appreciated.
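On the 192.168.122.1 question: that network does not come from the proposal at all. 192.168.122.0/24 is the subnet of libvirt's built-in "default" NAT network, and the dnsmasq bound to it is started by libvirtd, not nova. If it is unwanted, it can be disabled with standard virsh commands (run as root; this does not touch nova-br100):

```shell
# Disable libvirt's stock NAT network ("default" is libvirt's standard name):
# virsh net-destroy default
# virsh net-autostart default --disable
```

The nova-managed dnsmasq instances are the ones listening on 192.168.101.1 with the nova-br100 pid file.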
Hi Xiaohua,
Never mind - figured it out. The switch uplinks needed to have the nova-network VLAN set. In this case, VLAN 100 needed to be tagged.
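For reference: with VlanManager, nova creates an 802.1Q subinterface for each network and bridges it (here VLAN 100 into br100), so every switch port carrying inter-node traffic must pass tagged frames for that VLAN. The kernel side can be inspected with standard iproute2 commands (interface names below are assumptions based on the setup described above):

```shell
# Inspect the VLAN subinterface nova created (run as root on a node):
# ip -d link show vlan100
# Manually create a test subinterface on another tag, if needed:
# ip link add link eth0 name eth0.101 type vlan id 101
```

If tagged frames are dropped by the switch, the subinterfaces look healthy on both hosts yet DHCP broadcasts never cross between them, which matches the symptom reported here.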
Hi,
As you suggested, I'm contacting you by email about a problem I'm facing with my dodai-deploy installation. http://www.guanxiaohua2k6.com/2012/04/install-openstack-nova-essexmultiple.html?showComment=1342077925664#c6214097199447870768
After I checked that both servers were running, I was able to install keystone on the localhost, but when I tried to install glance, it showed the proposal as installed, yet on the server glance is nonexistent. The job_server.log is:
{:operation=>"install", :params=>{:proposal_id=>"2"}}
Start install[proposal - 2]
install components
Determining the amount of hosts matching filter for 5 seconds .... 1
1 / 1
[#<MCollective::RPC::Result:0x7f94d0cb51f8 @results={:data=>{:output=>"\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate for ca\e[0m\n\e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n\e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n\e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n\e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">] --- !ruby/object:MCollective::RPC::Result action: runonce agent: puppetd results: :data: :output: "\e[0;32minfo: Creating a new SSL key for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Caching certificate for ca\e[0m\n\ \e[0;32minfo: Creating a new SSL certificate request for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Certificate Request fingerprint (md5): 7C:8A:3B:7C:B2:4E:4A:62:03:FF:F9:95:49:83:03:CB\e[0m\n\ \e[0;32minfo: Caching certificate for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Caching certificate_revocation_list for ca\e[0m\n\ \e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\ \e[0;32minfo: Creating state file /var/lib/puppet/state/state.yaml\e[0m\n\ \e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n" :statuscode: 0 :statusmsg: OK :sender: Server2.gemalto.com install[proposal - 2] finished {:operation=>"test", :params=>{:proposal_id=>"2"}} Start test[proposal - 2] Determining the amount of hosts matching filter for 5 seconds .... 1
1 / 1
[#<MCollective::RPC::Result:0x7f94d0c0e3f8 @results={:data=>{:output=>"\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n"}, :statuscode=>0, :statusmsg=>"OK", :sender=>"Server2.gemalto.com"}, @action="runonce", @agent="puppetd">] --- !ruby/object:MCollective::RPC::Result action: runonce agent: puppetd results: :data: :output: "\e[0;32minfo: Caching catalog for server2.gemalto.com\e[0m\n\ \e[0;32minfo: Applying configuration version '1342079262'\e[0m\n\ \e[0;36mnotice: Finished catalog run in 0.03 seconds\e[0m\n" :statuscode: 0 :statusmsg: OK :sender: Server2.gemalto.com test[proposal - 2] finished
and the yaml file on the second node:
"File[/var/lib/puppet/facts]": !ruby/sym checked: 2012-07-12 09:51:45.268143 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.137156 +02:00
"Class[Main]": !ruby/sym checked: 2012-07-12 10:07:00.142167 +02:00
"File[/var/lib/puppet/ssl/private_keys]": !ruby/sym checked: 2012-07-12 09:51:45.271698 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.142976 +02:00
"File[/etc/puppet/puppet.conf]": !ruby/sym checked: 2012-07-12 09:51:46.763554 +02:00 !ruby/sym configuration: {}
"File[/var/lib/puppet/ssl/certificate_requests]": !ruby/sym checked: 2012-07-12 09:51:45.279479 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.159266 +02:00
"Stage[main]": !ruby/sym checked: 2012-07-12 10:07:00.143555 +02:00
"Class[Glance_e]": !ruby/sym checked: 2012-07-12 10:07:00.140399 +02:00
"File[/var/lib/puppet/client_data]": !ruby/sym checked: 2012-07-12 09:51:46.764492 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.149191 +02:00
"File[/var/lib/puppet/ssl/private]": !ruby/sym checked: 2012-07-12 09:51:45.277418 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.154989 +02:00
"File[/var/lib/puppet/clientbucket]": !ruby/sym checked: 2012-07-12 09:51:46.767397 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.157621 +02:00
"File[/var/lib/puppet/client_yaml]": !ruby/sym checked: 2012-07-12 09:51:46.766446 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.153167 +02:00
"File[/var/lib/puppet/lib]": !ruby/sym checked: 2012-07-12 09:51:45.278580 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.156239 +02:00
"File[/var/log/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.269008 +02:00
"File[/var/lib/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.267087 +02:00
"File[/var/lib/puppet/state]": !ruby/sym checked: 2012-07-12 09:51:45.276414 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.150469 +02:00
"File[/etc/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.266200 +02:00
"File[/var/lib/puppet/ssl/public_keys]": !ruby/sym checked: 2012-07-12 09:51:45.272900 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.144669 +02:00
"Class[Settings]": !ruby/sym checked: 2012-07-12 10:07:00.135979 +02:00
"File[/var/run/puppet]": !ruby/sym checked: 2012-07-12 09:51:45.275386 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.147741 +02:00
"File[/var/lib/puppet/ssl]": !ruby/sym checked: 2012-07-12 09:51:45.270091 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.140926 +02:00
"File[/var/lib/puppet/state/graphs]": !ruby/sym checked: 2012-07-12 09:51:46.765384 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.151799 +02:00
"Filebucket[puppet]": !ruby/sym checked: 2012-07-12 10:07:00.137387 +02:00
"File[/var/lib/puppet/ssl/certs]": !ruby/sym checked: 2012-07-12 09:51:45.274179 +02:00 !ruby/sym synced: 2012-07-12 09:51:45.146368 +02:00
Is there another step I need to follow to completely install glance, like running "puppet apply", or is it something else?
Thanks a lot for your time, Narjisse
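A closing note on the "puppet apply" question: the job_server.log above shows dodai-deploy driving the node through MCollective's puppetd agent (@agent="puppetd", action "runonce"), so no manual apply should normally be needed; the suspiciously fast "Finished catalog run in 0.03 seconds" suggests the catalog simply contained nothing to do. If a manual run is ever useful for debugging, the Puppet 2.x-era equivalent is a one-shot agent run (the server name below is a placeholder):

```shell
# One-shot agent run against the puppet master on the dodai-deploy server
# (hypothetical hostname; run as root on the managed node):
# puppetd --test --server dodai-deploy.example.com
```

A verbose run like this prints exactly which resources the catalog applies, which helps show whether the glance classes are reaching the node at all.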