As an update: it's now roughly figured out how to provision OpenStack (via MicroStack) at its latest LTS release, entirely automated through the glory of cloud-init (so vastly better than juggling a mess of Ansible scripts).
It is entirely dependent on a "Jammy"-based Ubuntu image; Focal will not work with MicroStack.
So it would be retrofitting something like:
@pytest.fixture(scope='class')
def user_data_with_guest_agent_and_openstack(keypair):
    # set the root user's password to 'linux' to test password login in
    # addition to SSH login
    yaml_data = """#cloud-config
chpasswd:
  list: |
    root:linux
  expire: false
ssh_pwauth: true
users:
  - name: root
    ssh_authorized_keys:
      - %s
package_update: true
packages:
  - qemu-guest-agent
snap:
  commands:
    - snap install microstack --devmode --beta
runcmd:
  - - systemctl
    - enable
    - '--now'
    - qemu-ga
  - microstack init --auto --control --setup-loop-based-cinder-lvm-backend --loop-device-file-size 50
""" % (keypair['spec']['publicKey'])
    # escape newlines so the cloud-config can be embedded as a single-line string
    return yaml_data.replace('\n', '\\n')
With the cloud config of:
#cloud-config
package_update: true
packages:
  - qemu-guest-agent
snap:
  commands:
    - snap install microstack --devmode --beta
    - snap alias microstack.openstack openstack
    - snap set microstack config.credentials.keystone-password=testtesttest
runcmd:
  - - systemctl
    - enable
    - --now
    - qemu-guest-agent.service
  - echo fs.inotify.max_queued_events=1048576 | tee -a /etc/sysctl.conf
  - echo fs.inotify.max_user_instances=1048576 | tee -a /etc/sysctl.conf
  - echo fs.inotify.max_user_watches=1048576 | tee -a /etc/sysctl.conf
  - echo vm.max_map_count=262144 | tee -a /etc/sysctl.conf
  - echo vm.swappiness=1 | tee -a /etc/sysctl.conf
  - sysctl -p
  - microstack init --auto --control --setup-loop-based-cinder-lvm-backend --loop-device-file-size 50
  - snap restart microstack.cinder-{uwsgi,scheduler,volume}
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzZT+yXkr28BJzki4WdisefgyR1hKMXWlJCd9KfajEm michael.russell@suse.com
We'll need to "long-poll" until OpenStack is up, since microstack init has to provision everything behind nginx, which takes roughly 20-25 minutes depending on the host. But running it on Harvester is successful, which is exactly what we need: we can then acquire the openstack.rc programmatically and leverage OpenStack's Python library to provision the resources needed to stand up a VM there, then utilize that VM for importing later on.
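Just to sketch out the long-poll - something like the below could work; note the Keystone URL and the timings are placeholders/assumptions, not anything that's in the suite yet:

import time

import requests
import urllib3

# MicroStack serves everything over a self-signed cert, hence verify=False
urllib3.disable_warnings()


def wait_for_openstack(keystone_url, timeout=30 * 60, interval=30):
    # Poll Keystone until MicroStack's nginx starts answering;
    # `microstack init` can take ~20-25 minutes to finish.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(keystone_url, verify=False, timeout=10).status_code < 500:
                return
        except requests.exceptions.RequestException:
            pass  # not up yet, keep polling
        time.sleep(interval)
    raise TimeoutError(f"OpenStack not reachable at {keystone_url}")


# e.g. wait_for_openstack('https://<vm-ip>:5000/v3')  # hypothetical endpoint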
All in all, MicroStack seems to be the "easier" option in comparison to needing to juggle Docker/containerd on the VM for DevStack.
Ran into a MicroStack configuration error: apparently the nginx conf doesn't allow a usable client body size, so image uploads were just failing outright.
Fixed with:
sudo vi /var/snap/microstack/common/etc/nginx/snap/nginx.conf
and modifying:
client_max_body_size 0;
to:
client_max_body_size 4G;
(to allow for a max of 4G image uploads)
then fire off a restart of the nginx service in openstack/microstack via the snap package:
sudo snap restart microstack.nginx
All of that can probably be done in cloud-init too; for editing the MicroStack nginx conf we just need some sort of sed over the file (see the sed -i line in the updated cloud-config further down).
We'd need to upload an image that has the qemu-guest-tools, so that when the VM gets imported into Harvester the IPv4 address will populate on the Virtual Machines index page.
Then an image can be uploaded; in testing we would of course leverage the OpenStack Python SDK with the given info from OpenStack to do something similar to:
╭─mike at suse-workstation-team-harvester in ~/Documents/openstack
╰─○ openstack --insecure image create --disk-format vmdk --container-format bare --public --file ./jammy-server-cloudimg-amd64.vmdk jammy-image
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
| container_format | bare |
| created_at | 2023-07-27T19:37:27Z |
| disk_format | vmdk |
| file | /v2/images/dbf9ebaf-9dda-40bc-af7b-a6f2d205ee08/file |
| id | dbf9ebaf-9dda-40bc-af7b-a6f2d205ee08 |
| min_disk | 0 |
| min_ram | 0 |
| name | jammy-image |
| owner | 0e8c69478f6e44cdba78426069859718 |
| properties | os_hidden='False', owner_specified.openstack.md5='', owner_specified.openstack.object='images/jammy-image', owner_specified.openstack.sha256='' |
| protected | False |
| schema | /v2/schemas/image |
| status | queued |
| tags | |
| updated_at | 2023-07-27T19:37:27Z |
| visibility | public |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------+
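With the SDK, that image upload could look roughly like the sketch below; the endpoint and credentials are placeholders (the keystone password matches the one set in the cloud-config above), not verified values:

import openstack

# Placeholder connection details; in practice these would be parsed
# out of the openstack.rc / clouds.yaml acquired from the VM.
conn = openstack.connect(
    auth_url='https://<vm-ip>:5000/v3',  # hypothetical MicroStack endpoint
    project_name='admin',
    username='admin',
    password='testtesttest',             # keystone password from the cloud-config
    user_domain_name='Default',
    project_domain_name='Default',
    verify=False,                        # mirrors the --insecure CLI flag
)

# SDK equivalent of the `openstack image create` call above
image = conn.image.create_image(
    name='jammy-image',
    disk_format='vmdk',
    container_format='bare',
    visibility='public',
    filename='./jammy-server-cloudimg-amd64.vmdk',
)
print(image.id, image.status)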
Additionally needing to reconfigure: automatic snapshots were causing issues when trying to purge & re-install:
snap set system snapshots.automatic.retention=no
Setting it all to no would be useful.
Additionally, bumping the swap size from 200M to something higher:
snap set system swap.size=4096M
After some debugging, it seems that Ubuntu 22.04 is not a stable release for OpenStack, so we need to install with an older release of Ubuntu... dang :/ :crying_cat_face:
Tentative new cloud-config:
#cloud-config
package_update: true
packages:
  - qemu-guest-agent
  - linux-modules-extra-5.8.0-63-generic
  - build-essential
snap:
  commands:
    - snap install microstack --devmode --beta
    - snap alias microstack.openstack openstack
    - snap set microstack config.credentials.keystone-password=testtesttest
    - snap set system swap.size=4096M
    - snap set system snapshots.automatic.retention=no
runcmd:
  - - systemctl
    - enable
    - --now
    - qemu-guest-agent.service
  - echo fs.inotify.max_queued_events=1048576 | tee -a /etc/sysctl.conf
  - echo fs.inotify.max_user_instances=1048576 | tee -a /etc/sysctl.conf
  - echo fs.inotify.max_user_watches=1048576 | tee -a /etc/sysctl.conf
  - echo vm.max_map_count=262144 | tee -a /etc/sysctl.conf
  - echo vm.swappiness=1 | tee -a /etc/sysctl.conf
  - sysctl -p
  - microstack init --auto --control --setup-loop-based-cinder-lvm-backend --loop-device-file-size 100
  - snap restart microstack.cinder-{uwsgi,scheduler,volume}
  - sed -i 's/client_max_body_size 0/client_max_body_size 4G/g' /var/snap/microstack/common/etc/nginx/snap/nginx.conf
  - snap restart microstack.nginx
ssh_authorized_keys:
  - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzZT+yXkr28BJzki4WdisefgyR1hKMXWlJCd9KfajEm michael.russell@suse.com
Needed to update: if on 20.04 (or probably Jammy Jellyfish would work too) we need to:
TBD: the system might need to be rebooted - or running modprobe dm_thin_pool
and then restarting all the cinder-* services should do it.
Will we need the CA cert for the system? The glance config points at it (glance_ca_certificates_file = /var/snap/microstack/common/etc/ssl/certs/cacert.pem), and the cert is there:
ubuntu@vm-for-openstack:~$ sudo cat /var/snap/microstack/common/etc/ssl/certs/cacert.pem
-----BEGIN CERTIFICATE-----
It may be worth checking the OpenStack source secret: the default behaviour is that if no ca_cert key is provided, the controller switches to skipping cert verification. Snippet from the controller code:
customCA, ok := secret.Data["ca_cert"]
if ok {
    caCertPool := x509.NewCertPool()
    caCertPool.AppendCertsFromPEM(customCA)
    config.RootCAs = caCertPool
} else {
    config.InsecureSkipVerify = true
}
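So the test could populate that ca_cert key from the cacert.pem above; a minimal sketch with the Kubernetes Python client, where the secret name, namespace, and the other credential keys are assumptions on my part:

import base64

from kubernetes import client, config

# load kubeconfig for the Harvester cluster
config.load_kube_config()

# cacert.pem as fetched from /var/snap/microstack/common/etc/ssl/certs/cacert.pem
with open('cacert.pem', 'rb') as f:
    ca_pem = f.read()

secret = client.V1Secret(
    # hypothetical name/namespace
    metadata=client.V1ObjectMeta(name='openstack-source', namespace='default'),
    type='Opaque',
    data={
        # the controller only verifies TLS when this key is present
        'ca_cert': base64.b64encode(ca_pem).decode(),
        # whatever other credential keys the controller expects are omitted here
    },
)
client.CoreV1Api().create_namespaced_secret('default', secret)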
Testing is also tracked in https://github.com/harvester/tests/issues/1171. @irishgordo could you check and close one of these as a duplicate?
@irishgordo Please close this if it is running fine on staging. Create a new ticket to track moving that to the prod Jenkins.
Based on that, I'll go ahead and close this out to track the testing implementation & the movement of the Jenkins pipeline to prod.
related issue: harvester/harvester#2274