zioc / contrail-devstack-plugin


Cannot instantiate VM on compute node in multinode setup #26

Open parulagrawal14 opened 7 years ago

parulagrawal14 commented 7 years ago

Hi,

In a multinode setup with a controller and a compute node, we are not able to instantiate a VM on the compute node. VM instantiation is successful on the controller node. Kindly help us resolve this issue.

raviprasad239 commented 7 years ago

What error are you getting?

parulagrawal14 commented 7 years ago

Hi Ravi,

My controller node is up and I am able to instantiate a VM there. But when I attach a compute node and try to instantiate a VM, the dashboard shows the error "Unable to instantiate VM Error [Unknown]".

I have used master branch for both compute and controller node.

raviprasad239 commented 7 years ago

Can you show the local.conf file contents for the controller and compute nodes?

parulagrawal14 commented 7 years ago

==================================================================
Contents of local_compute.conf

[[local|localrc]]

ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD

MULTI_HOST=1
CONTRAIL_BRANCH=R3.0
HOST_IP=10.1.0.228
SERVICE_HOST=10.1.0.226
CONFIG_IP=10.1.0.226
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST

ENABLED_SERVICES=n-cpu,n-net,n-api-meta,c-vol
disable_service api-srv disco svc-mon schema control collector analytic-api query-engine dns named ui-jobs ui-webs
enable_service vrouter

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$VNCSERVER_LISTEN

SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data

enable_plugin contrail https://github.com/zioc/contrail-devstack-plugin.git

==================================================================
Contents of local_controller.conf

[[local|localrc]]

ADMIN_PASSWORD=nomoresecret
DATABASE_PASSWORD=stackdb
RABBIT_PASSWORD=stackqueue
SERVICE_PASSWORD=$ADMIN_PASSWORD

HOST_IP=10.1.0.226
SERVICE_HOST=10.1.0.226

MULTI_HOST=1
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=2

NOVNC_BRANCH=v0.6.0

SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data

enable_plugin contrail https://github.com/zioc/contrail-devstack-plugin.git

raviprasad239 commented 7 years ago

Can you please retry with the following change in lib/neutron-legacy:

function is_neutron_enabled {
    return 0
    [[ ,${ENABLED_SERVICES} =~ ,"q-" ]] && return 0
    return 1
}
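The unconditional return 0 at the top short-circuits the check, so DevStack treats Neutron as enabled even though no q-* services appear in ENABLED_SERVICES (the Contrail plugin provides the Neutron server instead). To script the edit, a minimal sketch with GNU sed, assuming the function header in lib/neutron-legacy still matches exactly:

$ sed -i '/^function is_neutron_enabled/a return 0' lib/neutron-legacy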

parulagrawal14 commented 7 years ago

Hi Ravi,

I tried the above change, but vrouter is not coming up. The following error is thrown:

2017-01-19 Thu 00:07:27:397.289 CST compute [Thread 139897626900224, Pid 24550]: Current receive sock buffer size is 262144
2017-01-19 Thu 00:07:27:397.441 CST compute [Thread 139897622701824, Pid 24550]: KsyncTxQueue CPU pinning policy <>. KsyncTxQueuen not pinned to CPU
2017-01-19 Thu 00:07:27:397.453 CST compute [Thread 139897626900224, Pid 24550]: Vrouter family is 26
contrail-vrouter-agent: controller/src/vnsw/agent/vrouter/ksync/ksync_flow_memory.cc:145: void KSyncFlowMemory::InitFlowMem(): Assertion `flow_tablesize != 0' failed.
vrouter failed to start
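As a side note, this assertion usually means the agent could not map the vrouter flow table from the kernel module. A few generic checks (a sketch, not from this thread; /dev/flow is the flow-table device the vrouter module creates):

$ lsmod | grep vrouter          # is the vrouter kernel module loaded?
$ ls -l /dev/flow               # flow-table device created by the module
$ dmesg | tail -n 20            # look for vrouter load or memory-allocation errors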

parulagrawal14 commented 7 years ago

Hi Ravi,

After changing the lib/neutron-legacy file I just ran ./unstack.sh and ./stack.sh. Do I need to run ./clean.sh as well?

raviprasad239 commented 7 years ago

I have the same configuration (local.conf) as yours and it works well for me. You can try with "clean.sh". Anyway, which kernel version are you using?

parulagrawal14 commented 7 years ago

Kernel version used:

Linux compute 3.13.0-45-generic #74-Ubuntu SMP Tue Jan 13 19:36:28 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
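In case it helps: the vrouter kernel module is compiled against the running kernel during stack.sh, so the matching header package needs to be present; a generic check (an assumption on my side, not something diagnosed in this thread):

$ uname -r
$ dpkg -l linux-headers-$(uname -r)     # the headers for the running kernel should show as installed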

parulagrawal14 commented 7 years ago

Hi Ravi,

After rebooting the machine I was able to start vrouter. But instance creation is failing:

failed network setup after 1 attempt(s)
2017-01-19 01:58:05.640 TRACE nova.compute.manager Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 1570, in _allocate_network_async
    bind_host_id=bind_host_id)
  File "/opt/stack/nova/nova/network/api.py", line 49, in wrapped
    return func(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/network/base_api.py", line 77, in wrapper
    res = f(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/network/api.py", line 283, in allocate_for_instance
    nw_info = self.network_rpcapi.allocate_for_instance(context, **args)
  File "/opt/stack/nova/nova/network/rpcapi.py", line 163, in allocate_for_instance
    macs=jsonutils.to_primitive(macs))
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    result = self._waiter.wait(msg_id, timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 342, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 244, in get
    'to message ID %s' % msg_id)
MessagingTimeout: Timed out waiting for a reply to message ID 36ac31ba1dd348f7a9469e7f57584244

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, in fire_timers
    timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/utils.py", line 1159, in context_wrapper
    return func(*args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 1587, in _allocate_network_async
    six.reraise(*exc_info)
  File "/opt/stack/nova/nova/compute/manager.py", line 1570, in _allocate_network_async
    bind_host_id=bind_host_id)
  File "/opt/stack/nova/nova/network/api.py", line 49, in wrapped
    return func(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/network/base_api.py", line 77, in wrapper
    res = f(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/network/api.py", line 283, in allocate_for_instance
    nw_info = self.network_rpcapi.allocate_for_instance(context, **args)
  File "/opt/stack/nova/nova/network/rpcapi.py", line 163, in allocate_for_instance
    macs=jsonutils.to_primitive(macs))
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send
    retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 459, in _send
    result = self._waiter.wait(msg_id, timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 342, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 244, in get
    'to message ID %s' % msg_id)
MessagingTimeout: Timed out waiting for a reply to message ID 36ac31ba1dd348f7a9469e7f57584244

I am using the master branch for both the compute and controller nodes, and on the compute node only vrouter is started.

raviprasad239 commented 7 years ago

But from your local.conf it looks like you are using the R3.0 Contrail branch on the compute node (however, I have the same on my compute node and it works). And I hope you do not have a network issue (controller and compute are reachable from each other). You can check the controller's logs to find out whether they are able to communicate. You can also check the Contrail GUI for the status of the vRouters. If the default management interface on your compute node is not 'eth0', you can use "VHOST_INTERFACE_NAME=em1" to change it (generally Ubuntu will have interface names like em1, em2, etc.).
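To complement the checks above, two quick commands on the compute node (a sketch; 8085 is the vrouter agent's introspect HTTP port, so any page coming back means the agent process is alive):

$ ping -c 3 10.1.0.226                    # controller reachability from the compute node
$ curl -s http://localhost:8085/ | head   # vrouter agent introspect answers if the agent is up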

parulagrawal14 commented 7 years ago

I have commented out the line "CONTRAIL_BRANCH=R3.0" in the conf file of the compute node, so I am using the master branch for both controller and compute. A few check-ins have happened recently, because of which a compilation issue was occurring with release 3.0.

On the master branch, do I also need to make the lib/neutron-legacy changes?

raviprasad239 commented 7 years ago

I used the lib/neutron-legacy changes with the R3.0 branch. I have not tried this with the master branch.

parulagrawal14 commented 7 years ago

Has anyone tried a multinode setup with the master branch on both the compute and controller nodes?

ethuleau commented 7 years ago

I tried but not recently

ldurandadomia commented 7 years ago

Hi,

I'm trying to add a compute node using the following configuration file:

[[local|localrc]]
HOST_IP=192.168.37.197
SERVICE_HOST=192.168.37.194
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

ADMIN_PASSWORD=contrail123
MYSQL_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_PASSWORD=$ADMIN_PASSWORD

GIT_BASE=${GIT_BASE:-https://git.openstack.org}

DEST=/opt/stack/openstack
CONTRAIL_DEST=/opt/stack/contrail
CONTRAIL_PATCHES="sed -i '/container/d' /opt/stack/contrail/controller/src/SConscript"

FORCE=yes
RECLONE=False
VERBOSE=True
LOGFILE=/opt/stack/openstack/logs/stack.sh.log
USE_SYSTEMD=False
USE_SCREEN=True

IP_VERSION=4

disable_all_services
ENABLED_SERVICES=n-cpu,rabbit,n-novnc,placement-client

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

INSTALL_PROFILE=COMPUTE
COMPUTE_HOST_IP=$HOST_IP

Q_PLUGIN=opencontrail
enable_plugin contrail https://github.com/zioc/contrail-devstack-plugin.git
disable_service ui-webs ui-jobs named dns query-engine api-srv disco svc-mon schema control collector analytics-api
enable_service vrouter

SCONS_JOBS=1

I'm facing the following error:

2017-06-11 16:34:43.952 | /usr/bin/ld: cannot find -lsasl2
2017-06-11 16:34:43.953 | collect2: error: ld returned 1 exit status
2017-06-11 16:34:43.957 | scons: *** [build/production/analytics/vizd] Error 1
2017-06-11 16:34:44.323 | scons: building terminated because of errors.
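For what it's worth, "cannot find -lsasl2" means the linker cannot find the Cyrus SASL library; installing its development package (the same package a later comment in this thread adds) should unblock the vizd link step:

$ sudo apt-get install libsasl2-dev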

Best regards. Laurent DURAND

ldurandadomia commented 7 years ago

Hi,

I've made some progress. I've used the following local.conf on the compute node:

[[local|localrc]]
HOST_IP=192.168.37.197
SERVICE_HOST=192.168.37.194
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

ADMIN_PASSWORD=contrail123
MYSQL_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_PASSWORD=$ADMIN_PASSWORD

GIT_BASE=${GIT_BASE:-https://git.openstack.org}

DEST=/opt/stack/openstack
CONTRAIL_DEST=/opt/stack/contrail
CONTRAIL_PATCHES="sed -i '/container/d' /opt/stack/contrail/controller/src/SConscript"

FORCE=yes
RECLONE=False
VERBOSE=True
LOGFILE=/opt/stack/openstack/logs/stack.sh.log
USE_SYSTEMD=False
USE_SCREEN=True

IP_VERSION=4
ENABLED_SERVICES=n-cpu,rabbit,n-novnc,placement-client

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

INSTALL_PROFILE=COMPUTE
COMPUTE_HOST_IP=$HOST_IP
Q_META_DATA_IP=$SERVICE_HOST
Q_PLUGIN=opencontrail
enable_plugin contrail https://github.com/zioc/contrail-devstack-plugin.git
disable_service ui-webs ui-jobs named dns query-engine api-srv disco svc-mon schema control collector analytics-api
enable_service vrouter

SCONS_JOBS=1

With this configuration file the compute node is built successfully.

But when I'm trying to launch an instance on the new node, I get the following error message:

Error: Failed to perform requested operation on instance "VM-HST", the instance has an error status: Please try again later [Error: Build of instance 4d3e5ae8-1504-40db-b262-49bb337b6cce aborted: Unable to establish connection to http://127.0.0.1:9696/v2.0/networks.json?id=fd3ff0f4-d23e-4c93-abf3-497d23d2fa69: HTTPConnectionPool(host='127.0.0.1', port=9696): Max retries exceeded with ].

After some investigation I've found that /etc/nova/nova.conf is not fully populated. The [neutron] section on the new compute node contains only the following:

[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = metadatasecret

On the controller the configuration is:

[neutron]
url = http://192.168.37.100:9696
region_name = RegionOne
auth_strategy = keystone
project_domain_name = Default
project_name = service
user_domain_name = Default
password = contrail123
username = neutron
auth_url = http://192.168.37.100/identity_admin/v3
auth_type = password
service_metadata_proxy = True
metadata_proxy_shared_secret = metadatasecret

So I've updated the [neutron] section on the compute node with the same values as on the controller node. Then I've restarted Nova on the compute node.
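For the record, the same edit can be scripted instead of done by hand; a sketch using crudini (assuming the crudini tool is installed; any editor works equally well), with the controller's values:

$ sudo crudini --set /etc/nova/nova.conf neutron url http://192.168.37.100:9696
$ sudo crudini --set /etc/nova/nova.conf neutron auth_url http://192.168.37.100/identity_admin/v3
$ sudo crudini --set /etc/nova/nova.conf neutron auth_type password
$ sudo crudini --set /etc/nova/nova.conf neutron auth_strategy keystone
$ sudo crudini --set /etc/nova/nova.conf neutron username neutron
$ sudo crudini --set /etc/nova/nova.conf neutron password contrail123
$ sudo crudini --set /etc/nova/nova.conf neutron project_name service
$ sudo crudini --set /etc/nova/nova.conf neutron project_domain_name Default
$ sudo crudini --set /etc/nova/nova.conf neutron user_domain_name Default
$ sudo crudini --set /etc/nova/nova.conf neutron region_name RegionOne

Then restart nova-compute so it picks up the new options.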

After this change I'm now able to provision an instance on my new compute node. My new instance's connectivity is properly managed by the OpenContrail vrouter:

$ sudo virsh domiflist 1
Interface        Type      Source  Model   MAC
tap0af6ffea-8c   ethernet  -       virtio  02:0a:f6:ff:ea:8c

$ sudo vif --list
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload,
       Mon=Interface is Monitored, Uuf=Unknown Unicast Flood,
       Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root

vif0/0      OS: ens33 (Speed 1000, Duplex 1)
            Type:Physical HWaddr:00:0c:29:1f:7f:26 IPaddr:0
            Vrf:0 Flags:TcL3L2VpEr QOS:-1 Ref:5
            RX packets:21170  bytes:16570387 errors:0
            TX packets:21308  bytes:16819742 errors:0
            Drops:0

vif0/1      OS: vhost0
            Type:Host HWaddr:00:0c:29:1f:7f:26 IPaddr:c0a825c5
            Vrf:0 Flags:L3L2Er QOS:-1 Ref:3
            RX packets:21348  bytes:16825491 errors:0
            TX packets:21295  bytes:16581589 errors:0
            Drops:0

vif0/2      OS: pkt0
            Type:Agent HWaddr:00:00:5e:00:01:00 IPaddr:0
            Vrf:65535 Flags:L3Er QOS:-1 Ref:3
            RX packets:319  bytes:30079 errors:0
            TX packets:696  bytes:69572 errors:0
            Drops:27

vif0/3      OS: vgw
            Type:Gateway HWaddr:00:00:5e:00:01:00 IPaddr:0
            Vrf:1 Flags:L3L2Er QOS:-1 Ref:2
            RX packets:35  bytes:5381 errors:35
            TX packets:0  bytes:0 errors:0
            Drops:35

vif0/4      OS: tap0af6ffea-8c
            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0
            Vrf:2 Flags:PL3L2DEr QOS:-1 Ref:5
            RX packets:360  bytes:24373 errors:0
            TX packets:335  bytes:19149 errors:0
            Drops:25

vif0/4350   OS: pkt3
            Type:Stats HWaddr:00:00:00:00:00:00 IPaddr:0
            Vrf:65535 Flags:L3L2 QOS:0 Ref:1
            RX packets:0  bytes:0 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/4351   OS: pkt1
            Type:Stats HWaddr:00:00:00:00:00:00 IPaddr:0
            Vrf:65535 Flags:L3L2 QOS:0 Ref:1
            RX packets:0  bytes:0 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

Now I just have to find the missing "local.conf" parameter whose absence leads to a misconfigured [neutron] section in the Nova conf file.

Regards.

ldurandadomia commented 7 years ago

Hello,

I'm working on a multi-node setup with DevStack Ocata / Contrail R4.0 on Ubuntu 16.04 64-bit desktop. Here I'm describing how the "compute node server" is built.

Compute server initial setup:

1°) Manage a "static" DNS configuration in /etc/resolv.conf, in order to avoid losing the DNS config when the interface is migrated onto the vhost. Create a "tail" file in resolv.conf.d:
$ sudo vi /etc/resolvconf/resolv.conf.d/tail

Add the following line:
nameserver 8.8.8.8

Rebuild the resolv.conf file:
$ sudo resolvconf --enable-updates
$ sudo resolvconf -u
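To check that the tail file was taken into account after the rebuild (resolvconf appends it at the end):

$ tail -n 1 /etc/resolv.conf    # should print: nameserver 8.8.8.8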

2°) Manage /opt/stack access rights: I'm creating /opt/stack with proper access rights in order to avoid subdirectory creation issues during stack building.
$ sudo mkdir -p /opt/stack
$ sudo chmod 777 /opt/stack

3°) Configure the IP address and server name in the local hosts file: I'm filling /etc/hostname with a server name distinct from the controller's: ubuntu-hst

I'm configuring the IP interface in static mode (with the GUI). Then I'm managing the /etc/hosts configuration:
127.0.0.1 localhost ubuntu-hst
127.0.1.1 ubuntu-hst
192.168.37.100 ubuntu-ctl
192.168.37.198 ubuntu-hst

4°) Devstack source cloning:
$ git clone https://github.com/openstack-dev/devstack.git -b stable/ocata
$ cd devstack

Then we have to create the local.conf file. I'm using the following configuration on the compute node:

[[local|localrc]]
HOST_IP=192.168.37.198
SERVICE_HOST=192.168.37.100
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

ADMIN_PASSWORD=contrail123
MYSQL_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
DATABASE_PASSWORD=$ADMIN_PASSWORD

GIT_BASE=${GIT_BASE:-https://git.openstack.org}
DEST=/opt/stack/openstack

CONTRAIL_REPO_PROTO=https
CONTRAIL_DEST=/opt/stack/contrail
CONTRAIL_PATCHES="sed -i '/container/d' /opt/stack/contrail/controller/src/SConscript"

FORCE=yes
RECLONE=False
VERBOSE=True
LOGFILE=/opt/stack/openstack/logs/stack.sh.log
USE_SYSTEMD=False
USE_SCREEN=True

IP_VERSION=4

ENABLED_SERVICES=n-cpu,rabbit,n-novnc,placement-client,neutron

NOVA_VNC_ENABLED=True
NOVNCPROXY_URL=http://$SERVICE_HOST:6080/vnc_auto.html
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

Q_PLUGIN=opencontrail
enable_plugin contrail https://github.com/zioc/contrail-devstack-plugin.git

INSTALL_PROFILE=COMPUTE
COMPUTE_HOST_IP=$HOST_IP
enable_service vrouter

SCONS_JOBS=1

5°) In order to avoid an issue with "lib crypto", I'm adding the following libraries:
$ sudo apt-get install libssl-dev libsasl2-dev liblz4-dev

If not, some builds fail with the following error:
2017-06-13 16:30:18.856 | /usr/bin/ld: cannot find -lsasl2
2017-06-13 16:30:18.856 | collect2: error: ld returned 1 exit status
2017-06-13 16:30:18.862 | scons: *** [build/production/analytics/vizd] Error 1
2017-06-13 16:30:19.328 | scons: building terminated because of errors.
2017-06-13 16:30:20.659 | +/opt/stack/openstack/contrail/devstack/plugin.sh:source:1 exit_trap

6°) I'm launching the first build:
$ ./stack.sh

ldurandadomia commented 7 years ago

1°) First issue. After several minutes the following error occurs:

2017-06-14 07:06:44.700 | In file included from controller/src/database/cassandra/cql/cql_if.h:15:0,
2017-06-14 07:06:44.700 |                  from controller/src/ifmap/ifmap_factory.cc:19:
2017-06-14 07:06:44.700 | controller/src/database/cassandra/cql/cql_lib_if.h:8:23: fatal error: cassandra.h: No such file or directory
2017-06-14 07:06:44.700 | compilation terminated.
2017-06-14 07:06:44.707 | scons: *** [build/production/ifmap/ifmap_factory.o] Error 1
2017-06-14 07:06:45.619 | scons: building terminated because of errors.

Then I'm modifying the /opt/stack/openstack/contrail/devstack/plugin.sh script. After the following line:
if is_service_enabled vrouter; then
I'm adding the following lines:
install_cassandra
install_cassandra_cpp_driver
just before the following line:
echo_summary "Building contrail vrouter"
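For clarity, the patched section of plugin.sh should then look roughly like this (a sketch; any unrelated lines between the two anchors stay as they are):

if is_service_enabled vrouter; then
    # added: install the Cassandra CQL driver so that cassandra.h is
    # available when the contrail sources are built on a compute-only node
    install_cassandra
    install_cassandra_cpp_driver
    echo_summary "Building contrail vrouter"
    # ... rest of the vrouter build, unchanged ...
fi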

Then I"m restarting the build : $ ./stack.sh

2°) Second issue. The build is failing with the following error:

2017-06-01 10:41:30.336 | dpkg: error processing package cassandra-cpp-driver (--install):
2017-06-01 10:41:30.336 | dpkg: error processing package cassandra-cpp-driver-dev (--install):
2017-06-01 10:41:30.430 | libuv_1.8.0-1_amd64.deb

Then I'm manually installing libuv_1.8.0-1_amd64.deb:
$ wget http://downloads.datastax.com/cpp-driver/ubuntu/16.04/dependenices/libuv/v1.8.0/libuv_1.8.0-1_amd64.deb
$ sudo dpkg -i --force-overwrite libuv_1.8.0-1_amd64.deb
$ sudo apt-get install -f

I'm restarting the build:
$ ./stack.sh

And after several minutes it should finish successfully.

ldurandadomia commented 7 years ago

After the compute node builds successfully, we have to connect to the control node and run:

$ nova-manage cell_v2 discover_hosts
$ cd ~/devstack
$ source openrc admin admin
$ nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 3  | nova-conductor   | ubuntu     | internal | enabled | up    | 2016-09-09T06:41:08.000000 | -               |
| 4  | nova-scheduler   | ubuntu     | internal | enabled | up    | 2016-09-09T06:41:02.000000 | -               |
| 5  | nova-consoleauth | ubuntu     | internal | enabled | up    | 2016-09-09T06:41:02.000000 | -               |
| 6  | nova-compute     | ubuntu     | nova     | enabled | up    | 2016-09-09T06:41:02.000000 | -               |
| 7  | nova-compute     | ubuntu-hst | nova     | enabled | up    | 2016-09-09T06:41:06.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

The new compute node should appear in the list (here, the row with Id 7). Now it should be ready for use.
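If you would rather not run "nova-manage cell_v2 discover_hosts" by hand after each new compute node, the Ocata scheduler can also discover hosts periodically; a sketch of the nova.conf option on the controller (periodic discovery is off by default):

[scheduler]
# run compute host discovery every 5 minutes (-1, the default, disables it)
discover_hosts_in_cells_interval = 300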