Closed malik-altaf closed 8 years ago
We have created an issue in Pivotal Tracker to manage this. You can view the current status of your issue at: https://www.pivotaltracker.com/story/show/115016935.
Thanks @malik-altaf, the manifests look good so far. Could you also give me a redacted output of `bosh task <TASK_ID> --debug`, where `TASK_ID` is the task with which you deployed CF? The log is quite long, so you might want to put it into a gist or upload it somewhere else.
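For reference, a minimal sketch of how such a log could be captured and redacted before sharing (the task ID, file names, and `sed` patterns below are illustrative assumptions, not taken from this thread):

```shell
# Illustrative redaction helper: mask IP addresses and password-like
# JSON values so the log can be shared publicly
redact() {
  sed -E \
    -e 's/[0-9]{1,3}(\.[0-9]{1,3}){3}/REDACTED_IP/g' \
    -e 's/("password" *: *")[^"]*/\1REDACTED/g'
}

# Usage (1234 is a placeholder task ID): save and redact the debug log
# bosh task 1234 --debug | redact > task-1234-redacted.log
```

The patterns are only a starting point; anything else sensitive in the log (credentials, tokens, internal hostnames) would still need to be scrubbed by hand.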
Seems like the metadata didn't get set correctly, so we should be able to find something in the log. Also, could you do a `nova show` on one of the CF VMs and paste the details here? I'm specifically interested in the metadata part.
Thanks!
Thanks @voelzmo, please find the BOSH Director logs in this public gist: https://gist.github.com/malik-altaf/e947856aff54bf048676
The details of the API VM are given below:
```
[ubuntu@bosh-cli my-bosh(keystone_cf-admin2)]$ nova show 162909db-4e9a-4ea9-a4f7-978e2e22807f
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                                             |
| OS-EXT-AZ:availability_zone          | cloud-cf-az2                                                                     |
| OS-EXT-STS:power_state               | 1                                                                                |
| OS-EXT-STS:task_state                | -                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                           |
| OS-SRV-USG:launched_at               | 2016-02-23T06:36:31.000000                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| cf2-net network                      | 192.168.1.3, 137.172.74.67                                                       |
| config_drive                         |                                                                                  |
| created                              | 2016-02-23T06:36:21Z                                                             |
| flavor                               | m1.medium (19add635-0a87-41ea-96fa-3858074750d8)                                 |
| hostId                               | 0c23c6d474e4639181057ca5725dd844d21a482086058c1dd30b9b44                         |
| id                                   | 162909db-4e9a-4ea9-a4f7-978e2e22807f                                             |
| image                                | BOSH-6e550113-bf22-4859-a2da-a258761a0502 (21397150-c9d2-4a0e-b39e-82ebdb682c1b) |
| key_name                             | cf-keypair2                                                                      |
| metadata                             | {"director": "my-bosh", "index": "0", "job": "ha_proxy_z1", "deployment": "cf"}  |
| name                                 | vm-b99a3675-dc85-44d0-83cc-7a5e88e7572e                                          |
| os-extended-volumes:volumes_attached | []                                                                               |
| progress                             | 0                                                                                |
| security_groups                      | bosh, cf-private, cf-public, default                                             |
| status                               | ACTIVE                                                                           |
| tenant_id                            | cf01b8d9b4f44b4a9fe694207cced4aa                                                 |
| updated                              | 2016-02-23T06:36:31Z                                                             |
| user_id                              | 7032b448d8f34bae9c62eec932570233                                                 |
+--------------------------------------+----------------------------------------------------------------------------------+
```
Hi @malik-altaf, thanks for the additional logs. I cannot find a specific HTTP call that every installation of the OpenStack CPI in version >=23 should make, so I'm not sure those VMs were really created with the CPI in that version.
Furthermore, the OpenStack `nova` output above shows that the VM was created on 2016-02-23T06:36:21Z. Did you create your Director before or after that date?
Hi @voelzmo, I created the BOSH Director on 2016/02/21, and I have bosh-init logs from that date showing CPI 23 and `human_readable_vm_names: true`.
Hey @malik-altaf, I just realized that your Director manifest contains

```yaml
properties:
  director:
    director.cpi_job: openstack_cpi
```

It actually should be

```yaml
properties:
  director:
    cpi_job: openstack_cpi
```

as stated on bosh.io.
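As a quick sanity check before redeploying, a grep along these lines can flag the flattened key (the stand-in file below only demonstrates the pattern; in practice you would point the grep at your real Director manifest):

```shell
# Stand-in manifest fragment reproducing the misconfiguration; replace
# /tmp/bosh-manifest-check.yml with your actual Director manifest
cat > /tmp/bosh-manifest-check.yml <<'EOF'
properties:
  director:
    director.cpi_job: openstack_cpi
EOF

# The flattened key "director.cpi_job" indicates the misconfiguration;
# the correct manifest nests cpi_job one level under director
if grep -q 'director\.cpi_job' /tmp/bosh-manifest-check.yml; then
  echo "flattened key found: use 'cpi_job' nested under 'director'"
fi
```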
Right now, your Director uses the internal OpenStack CPI rubygem, which is deprecated and outdated.
Thanks @voelzmo, I'll try that out when I redeploy my Cloud Foundry and let you know if there is any issue.
Hi @voelzmo, I can confirm that after fixing the manifest file, VMs are being created with human-readable names. Thanks for your help.
I have used the property `human_readable_vm_names: true` and created the Director. The Director VM is named correctly as bosh/0. However, when I deploy the CF release, my VMs are still named the old way. Please have a look at the `nova show` output above.
I have attached the bosh deployment manifest and cloud foundry manifest stub that I used for my deployments.
cf_redacted.yml.txt
bosh_redacted.yml.txt