Closed astrodb closed 4 months ago
Current quotas:
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota show vdfs
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| backup-gigabytes | -1 |
| backups | -1 |
| cores | 12 |
| fixed-ips | -1 |
| floating-ips | 2 |
| gigabytes | 368640 |
| gigabytes_ceph-hdd | -1 |
| gigabytes_ceph-ssd | 1000 |
| gigabytes_nvme | -1 |
| groups | 10 |
| injected-file-size | -1 |
| injected-files | -1 |
| injected-path-size | 255 |
| instances | 10 |
| key-pairs | -1 |
| location | Munch({'cloud': '', 'region_name': 'RegionOne', 'zone': None, 'project': Munch({'id': 'ff3e2de6a0b844d581bcd4335c18d2a4', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| networks | 100 |
| per-volume-gigabytes | -1 |
| ports | 500 |
| project | 5d1247c0315c40cfa13f71ac2fee0764 |
| project_name | VDFS |
| properties | 128 |
| ram | 24576 |
| rbac_policies | 10 |
| routers | 10 |
| secgroup-rules | 100 |
| secgroups | 10 |
| server-group-members | 10 |
| server-groups | 10 |
| snapshots | 20 |
| snapshots_ceph-hdd | -1 |
| snapshots_ceph-ssd | -1 |
| snapshots_nvme | -1 |
| subnet_pools | -1 |
| subnets | 100 |
| volumes | 10 |
| volumes_ceph-hdd | -1 |
| volumes_ceph-ssd | -1 |
| volumes_nvme | -1 |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Automatic deployment is currently blocked by Ansible failing:
Greg Blow 41 minutes ago trying to apply quota changes using ansible and getting:
TASK [stackhpc.openstack.os_projects : List OpenStack domains] **
fatal: [localhost]: FAILED! => {"changed": false, "cmd": ". /home/stack/kayobe/src/openstack-config/ansible/somerville-config-venv/bin/activate && openstack --os-project-domain-id='' --os-user-domain-id='' --os-project-domain-name='Default' --os-user-domain-name='Default' --os-project-id='' --os-project-name='admin' --os-username='admin' --os-password='' --os-auth-url='http://10.19.3.200:5000/' --os-cacert='' --os-interface=admin domain list -f json -c Name -c ID\n", "delta": "0:00:01.439531", "end": "2024-07-19 13:06:33.656971", "msg": "non-zero return code", "rc": 1, "start": "2024-07-19 13:06:32.217440", "stderr": "admin endpoint for identity service in RegionOne region not found", "stderr_lines": ["admin endpoint for identity service in RegionOne region not found"], "stdout": "", "stdout_lines": []}
Mark Holliman 40 minutes ago login credential file is out of date?
Mark Holliman 37 minutes ago check you're in the right project?
Greg Blow 37 minutes ago I don't know. "admin endpoint for identity service in RegionOne region not found" doesn't sound like a credentials error.
Mark Holliman 34 minutes ago Google search shows a lot of things all related to misconfigured env settings, or the service being offline
Mark Holliman 32 minutes ago Just checked, and keystone is listening on 10.19.3.200:5000, visible from admin node
Greg Blow 31 minutes ago references the venv at /home/stack/kayobe/src/openstack-config/ansible/somerville-config-venv/bin/activate
Greg Blow 30 minutes ago git repo there has [screenshot: image.png]
Greg Blow 17 minutes ago changing os-interface from admin to internal causes the failed command to return properly, rather than erroring out.
Greg Blow 16 minutes ago --os-interface
Select an interface type. Valid interface types: [admin, public, internal]. default=public, (Env: OS_INTERFACE) Greg Blow 13 minutes ago (openstack-config) [stack@sv-admin-0 openstack-config]$ openstack endpoint list | grep keystone | 12a633c81bbd412692f80c4600cb3008 | RegionOne | keystone | identity | True | public | https://somerville.ed.ac.uk:5000/ | | ab49d5060d4746e8b8ce5af5cd0902d0 | RegionOne | keystone | identity | True | internal | http://10.19.3.200:5000/
Greg Blow 12 minutes ago https://bugs.launchpad.net/ceilometer/+bug/1981207
Launchpad Bug #1981207: "Ceilometer depends on deprecated admin endpoint of..."
Greg Blow 11 minutes ago I think it may be a problem of trying to use the keystone endpoint with the admin interface.
Greg Blow 11 minutes ago (listing images with the admin interface works)
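The failure mode above can be illustrated with a small sketch (a hypothetical `select_endpoint` helper, hard-coding the two identity endpoints registered on this cloud): keystoneauth resolves the requested interface against the service catalog, and since no `admin` identity endpoint exists, `--os-interface=admin` has nothing to resolve to.

```shell
# Hypothetical sketch of interface-based endpoint selection, using only the
# endpoints shown in the `openstack endpoint list` output above.
# There is no 'admin' identity endpoint, which reproduces the observed error.
select_endpoint() {
  case "$1" in
    public)   echo "https://somerville.ed.ac.uk:5000/" ;;
    internal) echo "http://10.19.3.200:5000/" ;;
    *)        echo "admin endpoint for identity service in RegionOne region not found" >&2
              return 1 ;;
  esac
}

select_endpoint internal        # -> http://10.19.3.200:5000/
select_endpoint admin || true   # prints the error to stderr, like the CLI
```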
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota set --volumes=50 vdfs
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota set --floating-ips=4 vdfs
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota set --cores=24 vdfs
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota set --gigabytes=512000 vdfs
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota set --ram=98304 vdfs
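The `--ram` and `--gigabytes` values above follow from unit conversion: Nova quotas take RAM in MiB and Cinder quotas take storage in GiB, so the requested 96GB of RAM and 500TB of volume storage (treated here as binary units, an assumption) convert as:

```shell
# Convert the requested human-readable units into the units the CLI expects.
# Assumption: the request's GB/TB are binary units (GiB/TiB).
ram_gib=96
vol_tib=500
ram_mib=$((ram_gib * 1024))   # 96 * 1024 = 98304, passed to --ram
vol_gib=$((vol_tib * 1024))   # 500 * 1024 = 512000, passed to --gigabytes
echo "$ram_mib $vol_gib"      # -> 98304 512000
```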
(openstack-config) [stack@sv-admin-0 openstack-config]$ openstack quota show vdfs
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| backup-gigabytes | -1 |
| backups | -1 |
| cores | 24 |
| fixed-ips | -1 |
| floating-ips | 4 |
| gigabytes | 512000 |
| gigabytes_ceph-hdd | -1 |
| gigabytes_ceph-ssd | 1000 |
| gigabytes_nvme | -1 |
| groups | 10 |
| injected-file-size | -1 |
| injected-files | -1 |
| injected-path-size | 255 |
| instances | 10 |
| key-pairs | -1 |
| location | Munch({'cloud': '', 'region_name': 'RegionOne', 'zone': None, 'project': Munch({'id': 'ff3e2de6a0b844d581bcd4335c18d2a4', 'name': 'admin', 'domain_id': None, 'domain_name': 'Default'})}) |
| networks | 100 |
| per-volume-gigabytes | -1 |
| ports | 500 |
| project | 5d1247c0315c40cfa13f71ac2fee0764 |
| project_name | VDFS |
| properties | 128 |
| ram | 98304 |
| rbac_policies | 10 |
| routers | 10 |
| secgroup-rules | 100 |
| secgroups | 10 |
| server-group-members | 10 |
| server-groups | 10 |
| snapshots | 20 |
| snapshots_ceph-hdd | -1 |
| snapshots_ceph-ssd | -1 |
| snapshots_nvme | -1 |
| subnet_pools | -1 |
| subnets | 100 |
| volumes | 50 |
| volumes_ceph-hdd | -1 |
| volumes_ceph-ssd | -1 |
| volumes_nvme | -1 |
+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
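For verification, the five changed fields can be spot-checked rather than eyeballing the whole table. A minimal sketch, run here against a saved snippet of the `openstack quota show vdfs` output so it works without a live cloud:

```shell
# Spot-check the changed quota fields against the requested values.
# The snippet below is copied from the quota table above.
quota='| cores | 24 |
| floating-ips | 4 |
| gigabytes | 512000 |
| ram | 98304 |
| volumes | 50 |'
for want in 'cores | 24' 'floating-ips | 4' 'gigabytes | 512000' 'ram | 98304' 'volumes | 50'; do
  if printf '%s\n' "$quota" | grep -Fq "| $want |"; then
    echo "ok: $want"
  else
    echo "MISMATCH: $want"
  fi
done
```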
Changes made. Please verify and close?
Looks good. Might want to open an issue on the broken Kayobe config deployment to track it.
With the new hardware money now confirmed, VDFS will need small increases to their quotas. Please update the VDFS project quotas to:
vCPUs: 24
RAM: 96GB
Volumes: 50
Volume Storage: 500TB
Floating IPs: 4