Closed: mkutsevol closed this issue 7 years ago.
My proposal is to move from https://github.com/rackspace/gophercloud to https://github.com/gophercloud/gophercloud because of https://github.com/rackspace/gophercloud/issues/592
rackspace/gophercloud can't be removed yet because the rackspace provider still depends on it; the maintainers state that they will also move that provider to a separate repo. We should use the new library for cloudprovider/provider/openstack. It introduces minor API changes, so it will need a review.
We can't drop support for v1 blockstorage yet, so we will need to probe for the supported version or add a config option for it.
Please comment. If that sounds good, I'll push a PR for the first part: moving cloudprovider/provider/openstack to use the new library.
@idvoretskyi @xsgordon can we get a sig/openstack label here?
cc/ @kubernetes/sig-openstack-misc
We should be doing this using feature detection / probing.
http://developer.openstack.org/api-ref/block-storage/v2/index.html?expanded=list-api-versions-detail gives the list of active API versions, so we can easily work out what is available and use the right version.
This may need gophercloud work, or it might just need us to GET api-endpoint/ and parse the JSON. (AFAIK most version APIs in OpenStack do not require auth.)
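A minimal sketch of that probing step, assuming the `versions` document shape from the API reference above. The names (`pickBlockStorageVersion`, the hardcoded sample body) are my own for illustration, not gophercloud or Kubernetes code; in practice the body would come from an unauthenticated GET on the service root endpoint:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// apiVersion mirrors one entry in the OpenStack "versions" document
// returned by a GET on the service root endpoint (no auth required).
type apiVersion struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

type versionsDoc struct {
	Versions []apiVersion `json:"versions"`
}

// pickBlockStorageVersion prefers an active v2.x API, falls back to v1,
// and errors if neither is advertised.
func pickBlockStorageVersion(body []byte) (string, error) {
	var doc versionsDoc
	if err := json.Unmarshal(body, &doc); err != nil {
		return "", err
	}
	for _, v := range doc.Versions {
		if (v.Status == "CURRENT" || v.Status == "SUPPORTED") && strings.HasPrefix(v.ID, "v2") {
			return "v2", nil
		}
	}
	for _, v := range doc.Versions {
		if strings.HasPrefix(v.ID, "v1") {
			return "v1", nil
		}
	}
	return "", fmt.Errorf("no usable block storage API version found")
}

func main() {
	// Sample response, shaped like the real versions document.
	sample := []byte(`{"versions":[
		{"id":"v1.0","status":"SUPPORTED"},
		{"id":"v2.0","status":"CURRENT"}]}`)
	v, err := pickBlockStorageVersion(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // prints "v2"
}
```

A config override (as discussed below in the thread) would simply bypass this probe when the operator knows which version to use.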
@grahamhayes, thanks for your input. I've looked into the lbaas code and will do something similar, with a config param that can override autodetection.
I want to split this work into two PRs. First I'll push a PR with the library switch and no new features, just porting to the new library API, and only after that push the support for v2 blockstorage.
Is that the right approach?
@mkutsevol Yep, pulling in the 2 PRs above and then doing an auto-detection / config switch to tie it all together seems sane.
@Quentin-M, thanks! Will search better next time :)
@grahamhayes Ok, will pull #33944 and #36344
@idvoretskyi can we still get the sig/openstack label here please?
I have cherry-picked these changes on top of master, as of today, and I am working through building and deploying the result. I had to make some quick fixes for bugs I found during integration testing:
--- a/pkg/cloudprovider/providers/openstack/openstack_volumes.go
+++ b/pkg/cloudprovider/providers/openstack/openstack_volumes.go
@@ -128,12 +128,13 @@ func (volumes *VolumesV1) getVolume(diskName string) (Volume, error) {
 		return volume, err
 	}
+	volume.ID = volume_v1.ID
+	volume.Name = volume_v1.Name
+	volume.Status = volume_v1.Status
+
 	if len(volume_v1.Attachments) > 0 && volume_v1.Attachments[0]["server_id"] != nil {
 		volume.AttachedServerId = volume_v1.Attachments[0]["server_id"].(string)
 		volume.AttachedDevice = volume_v1.Attachments[0]["device"].(string)
-		volume.ID = volume_v1.ID
-		volume.Name = volume_v1.Name
-		volume.Status = volume_v1.Status
 	}
 	return volume, err
@@ -165,12 +166,13 @@ func (volumes *VolumesV2) getVolume(diskName string) (Volume, error) {
 		return volume, err
 	}
+	volume.ID = volume_v2.ID
+	volume.Name = volume_v2.Name
+	volume.Status = volume_v2.Status
+
 	if len(volume_v2.Attachments) > 0 {
 		volume.AttachedServerId = volume_v2.Attachments[0].ServerID
 		volume.AttachedDevice = volume_v2.Attachments[0].Device
-		volume.ID = volume_v2.ID
-		volume.Name = volume_v2.Name
-		volume.Status = volume_v2.Status
 	}
I am working through the build process and will report back how it actually works when deployed!
@xrl hi! Nice catch, you're right about the fix. I'm almost back from my sick leave and will pay attention to it, if it's still needed :)
Did you cherry-pick only the top commit from my branch, or did you also take the commits from PRs #33944 and #36344?
I cherry-picked everything except for the change to do with Cinder W/O CP. I learned a lot while doing this work; thoughts were dumped here: https://tureus.github.io/devops/2017/01/24/build-your-custom-kubelet-image.html . My specific changes, very similar to yours, can be found here: https://github.com/tureus/kubernetes/tree/openstack-v2 .
I did not complete the work for v1/v2 autodiscovery. Let me know if you need help with that. Otherwise I will be testing out my first Kubernetes/OpenStack cluster and making sure it meets my team's needs!
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
openstack cinder v2; related PR (closed): https://github.com/kubernetes/kubernetes/pull/36976
Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST
Kubernetes version (use kubectl version): 1.5.1
Environment:
Kernel (e.g. uname -a): 4.7.3-coreos-r3
What happened: Trying to use PV/PVC on OpenStack without blockstorage v1 API support fails. When the controller tries to satisfy the PVC and dynamically creates a Cinder volume, the volume is created successfully, but the response code is 202 because it is the v2 blockstorage API. The create request is considered failed because the v1 API is expected to return 201, so the controller retries and creates more volumes until it hits the volume quota on the tenant.
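The failure mode can be sketched as follows. This is illustrative only (`createSucceeded` is a hypothetical name, not the actual Kubernetes or gophercloud code): the v1 client treats only 201 Created as success, so the 202 Accepted returned by the v2 API is reported as an error even though the volume was created, and the controller retries.

```go
package main

import "fmt"

// createSucceeded models the success check a client applies to the
// Cinder create-volume response: v1 expects 201 Created, while the
// v2 API answers 202 Accepted.
func createSucceeded(status int, expectV2 bool) bool {
	if expectV2 {
		return status == 202 // v2: 202 Accepted
	}
	return status == 201 // v1: 201 Created
}

func main() {
	// A v2 endpoint answers 202, but a v1-only client checks for 201,
	// so the (successful) create is misreported as a failure.
	fmt.Println(createSucceeded(202, false)) // prints "false" -> spurious retry
	fmt.Println(createSucceeded(202, true))  // prints "true"
}
```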
What you expected to happen: Create a volume and satisfy the PVC.
How to reproduce it (as minimally and precisely as possible): Try to use dynamic PV provisioning on openstack without blockstorage v1 api.
Anything else we need to know: