kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

Openstack blockstorage cinder v2 #39572

Closed mkutsevol closed 7 years ago

mkutsevol commented 7 years ago

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

openstack cinder v2 PR (closed) https://github.com/kubernetes/kubernetes/pull/36976

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Kubernetes version (use kubectl version): 1.5.1

Environment:

What happened: Trying to use PV/PVC on OpenStack without blockstorage v1 API support fails. When the controller tries to satisfy the PVC by dynamically creating a cinder volume, the volume is created successfully, but the endpoint is a v2 blockstorage API and returns 202. The create request is then considered failed, because the v1 API is expected to return 201. The controller retries and creates more volumes until it hits the volume quota on the tenant.

What you expected to happen: Create a volume and satisfy the PVC.

How to reproduce it (as minimally and precisely as possible): Try to use dynamic PV provisioning on openstack without blockstorage v1 api.

Anything else we need to know:
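For context, the failure comes down to how the success status code is checked. A minimal sketch of the distinction (a hypothetical helper, not the actual provider or gophercloud code): v1 volume creation returns 201 Created, while v2 returns 202 Accepted because creation is asynchronous, so both must be treated as success.

```go
package main

import "fmt"

// isCreateAccepted reports whether a Cinder create-volume response code
// should be treated as success. Block Storage v1 returns 201 Created,
// while v2 returns 202 Accepted (creation is asynchronous).
// Illustrative helper only, not the actual client code.
func isCreateAccepted(status int) bool {
	switch status {
	case 201, 202:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(isCreateAccepted(201)) // v1 success
	fmt.Println(isCreateAccepted(202)) // v2 success, rejected by a v1-only check
	fmt.Println(isCreateAccepted(400))
}
```

With a v1-only check (`status == 201`), the 202 from a v2 endpoint falls through to the failure path, which is what triggers the retry loop described above.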

mkutsevol commented 7 years ago

My proposal is to move from https://github.com/rackspace/gophercloud to https://github.com/gophercloud/gophercloud because of https://github.com/rackspace/gophercloud/issues/592

rackspace/gophercloud can't be removed entirely because the rackspace cloud provider still depends on it; upstream states that it will also be moved to a separate repo. The new library should be used for cloudprovider/providers/openstack. It introduces minor API changes, so it will need review.

We can't drop support for v1 blockstorage yet, so we will need to probe for the supported version or add a config option for it.

Please comment. If that sounds good, I'll push a PR for the first part: moving cloudprovider/providers/openstack to the new library.

grahamhayes commented 7 years ago

@idvoretskyi @xsgordon can we get a sig/openstack label here?

cc/ @kubernetes/sig-openstack-misc

grahamhayes commented 7 years ago

We should be doing this using feature detection / probing.

http://developer.openstack.org/api-ref/block-storage/v2/index.html?expanded=list-api-versions-detail gives the list of active API versions, so we can easily work out what is available and use the right version.

This may need gophercloud work, or it might just need us to GET api-endpoint/ and parse the JSON. (AFAIK, most version-discovery APIs in OpenStack do not require auth.)

mkutsevol commented 7 years ago

@grahamhayes, thanks for your input. I've looked into the lbaas code and will do something similar, with a config param that can override autodetection.

I want to split this work into two PRs. First, a PR with the library switch and no new features, just porting to the new library API; only then push support for v2 blockstorage.

Is that the right way?
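The "config param that can override autodetection" idea boils down to a simple precedence rule: an explicit config value wins, otherwise fall back to what probing found. A minimal sketch (the struct and field names here are hypothetical, not the provider's actual cloud-config):

```go
package main

import "fmt"

// BlockStorageOpts is a hypothetical slice of the cloud-config,
// not the actual struct in the OpenStack provider.
type BlockStorageOpts struct {
	BSVersion string // "v1", "v2", or ""/"auto" to autodetect
}

// resolveVersion returns the blockstorage version to use: an explicit
// config value wins; otherwise fall back to the probed version.
func resolveVersion(cfg BlockStorageOpts, probed string) string {
	if cfg.BSVersion != "" && cfg.BSVersion != "auto" {
		return cfg.BSVersion
	}
	return probed
}

func main() {
	fmt.Println(resolveVersion(BlockStorageOpts{BSVersion: "v1"}, "v2"))
	fmt.Println(resolveVersion(BlockStorageOpts{}, "v2"))
}
```

Keeping the override lets operators pin a version on deployments where the version endpoint is unreachable or misreported.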

Quentin-M commented 7 years ago

@mkutsevol

grahamhayes commented 7 years ago

@mkutsevol Yeap, pulling in the 2 PRs above, and then doing an auto detection / config switch to tie it all together seems sane.

mkutsevol commented 7 years ago

@Quentin-M, thanks! Will search better next time :)

mkutsevol commented 7 years ago

@grahamhayes Ok, will pull #33944 and #36344

xsgordon commented 7 years ago

@idvoretskyi can we still get the sig/openstack label here please?

xrl commented 7 years ago

I have cherry-picked these changes on top of master, as of today, and I am working through building/deploying the work. I had to make a few quick fixes for bugs I found during integration testing:

--- a/pkg/cloudprovider/providers/openstack/openstack_volumes.go
+++ b/pkg/cloudprovider/providers/openstack/openstack_volumes.go
@@ -128,12 +128,13 @@ func (volumes *VolumesV1) getVolume(diskName string) (Volume, error) {
                return volume, err
        }

+       volume.ID = volume_v1.ID
+       volume.Name = volume_v1.Name
+       volume.Status = volume_v1.Status
+
        if len(volume_v1.Attachments) > 0 && volume_v1.Attachments[0]["server_id"] != nil {
                volume.AttachedServerId = volume_v1.Attachments[0]["server_id"].(string)
                volume.AttachedDevice = volume_v1.Attachments[0]["device"].(string)
-               volume.ID = volume_v1.ID
-               volume.Name = volume_v1.Name
-               volume.Status = volume_v1.Status
        }

        return volume, err
@@ -165,12 +166,13 @@ func (volumes *VolumesV2) getVolume(diskName string) (Volume, error) {
                return volume, err
        }

+       volume.ID = volume_v2.ID
+       volume.Name = volume_v2.Name
+       volume.Status = volume_v2.Status
+
        if len(volume_v2.Attachments) > 0 {
                volume.AttachedServerId = volume_v2.Attachments[0].ServerID
                volume.AttachedDevice = volume_v2.Attachments[0].Device
-               volume.ID = volume_v2.ID
-               volume.Name = volume_v2.Name
-               volume.Status = volume_v2.Status
        }
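The bug the diff fixes, reduced to its essence: `ID`, `Name`, and `Status` were only copied inside the "has attachments" branch, so lookups of detached volumes returned an empty `Volume`. A simplified reproduction (names and shapes are illustrative, not the provider's actual types):

```go
package main

import "fmt"

// Volume is a simplified stand-in for the provider's volume struct.
type Volume struct {
	ID, Name, Status string
	AttachedServerId string
}

// getVolumeBuggy copies ID/Name/Status only when the volume has
// attachments, so detached volumes come back empty (the bug).
func getVolumeBuggy(src Volume, attachments []string) Volume {
	var out Volume
	if len(attachments) > 0 {
		out.AttachedServerId = attachments[0]
		out.ID, out.Name, out.Status = src.ID, src.Name, src.Status
	}
	return out
}

// getVolumeFixed copies the identifying fields unconditionally and only
// gates the attachment fields on the attachment check (the fix).
func getVolumeFixed(src Volume, attachments []string) Volume {
	var out Volume
	out.ID, out.Name, out.Status = src.ID, src.Name, src.Status
	if len(attachments) > 0 {
		out.AttachedServerId = attachments[0]
	}
	return out
}

func main() {
	src := Volume{ID: "abc", Name: "pv-1", Status: "available"}
	fmt.Println(getVolumeBuggy(src, nil).ID == "")    // true: detached volume loses its ID
	fmt.Println(getVolumeFixed(src, nil).ID == "abc") // true: fix preserves it
}
```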

xrl commented 7 years ago

I am working through the build process and will report back how it actually works when deployed!

mkutsevol commented 7 years ago

@xrl hi! Nice catch, you are right with the fix. I'm almost back from my sick leave and will pay attention to it, if it's still needed :)

Did you cherry-pick only the top commit from my branch, or did you also take the commits from PRs #33944 and #36344?

xrl commented 7 years ago

I cherry-picked everything except for the change to do with Cinder W/O CP. I learned a lot while doing this work; my thoughts were dumped here: https://tureus.github.io/devops/2017/01/24/build-your-custom-kubelet-image.html . My specific changes, very similar to yours, can be found here: https://github.com/tureus/kubernetes/tree/openstack-v2 .

I did not complete the work for v1/v2 autodiscovery. Let me know if you need help with that. Otherwise I will be testing out my first Kubernetes/OpenStack cluster and making sure it meets my team's needs!