fog / fog-openstack

Fog for OpenStack Platform
MIT License

Adding volume v3 supported #518

Closed · NormandLemay closed this 2 years ago

NormandLemay commented 3 years ago

Cinder API v2 will no longer be supported as of the Wallaby release: https://wiki.openstack.org/wiki/CinderWallabyMidCycleSummary#how_much_v2_code_must_be_removed_from_Cinder_in_Wallaby.3F
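
For context, here is a minimal sketch of what this change enables: requesting the Block Storage (Cinder) v3 service instead of the removed v2 one. This is an illustration only; the openstack_* option names follow the gem's usual connection parameters, and the endpoint and credentials below are hypothetical.

require 'fog/openstack'

volume = Fog::OpenStack::Volume.new(
  openstack_auth_url:     'https://keystone.example.com:5000/v3', # hypothetical Keystone endpoint
  openstack_username:     'admin',
  openstack_api_key:      'secret', # the password parameter in fog-openstack
  openstack_project_name: 'admin',
  openstack_domain_name:  'Default'
)

# With v3 support in place, service discovery can resolve to the cinder v3
# endpoint even when v2 is no longer published in the service catalog.
volume.volumes.each { |v| puts "#{v.id} #{v.size} GB #{v.status}" }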

NormandLemay commented 3 years ago

New pull request @dhague, @Ladas, @seanhandley, @mdarby, @jjasghar

mephmanx commented 3 years ago

Have you ever gotten anyone to respond to this? I am having the same issue now that Xena has been released on Kolla Ansible OpenStack. How do you install your changes so that they get picked up by bbl, if you use it?

mephmanx commented 3 years ago

It makes me feel like this project has been abandoned since a change like this has sat around for this long.

dhague commented 3 years ago

Apologies from my side - I changed role a few years ago and haven't been working with OpenStack for some time. I will leave the admin team now to avoid any future confusion.

mephmanx commented 3 years ago

Understandable, sorry to see you go… is there anyone left on the admin team? There are a lot of issues and PRs that haven't been looked at in a long time, some of which are now going to be needed. Who can I contact?

ares commented 2 years ago

I have merge and, hopefully, release permissions, but zero time to maintain this gem. @lzap do you have time and a dev setup to check this?

mephmanx commented 2 years ago

I know it's not my branch, but let me know if there is anything I can do to help. I need this compatibility, so let me know if there is anything I can do to get it into master. I can't deploy Xena until this is merged.

mephmanx commented 2 years ago

Offering to help.... Hoping I can get this merged/pushed so I can move to Xena soon...

ShamoX commented 2 years ago

Hello, what tests are needed?

I have an account with an OpenStack provider (OVH) and use this lib. Point me to where the required tests are documented and I'll run them.

I don't use volumes, so I don't really know whether my provider has updated its cluster to that version.

mephmanx commented 2 years ago

I'm not sure what tests are needed... I have a private on-prem OpenStack server currently running Wallaby, but I'm looking to upgrade to Xena, and with Cinder v2 removed completely I cannot deploy Cloud Foundry on Xena. This change is critical to being able to deploy, as Cloud Foundry requires BOSH and BOSH requires this library to deploy on OpenStack. I own the on-prem hardware, so I could run anything needed for testing.

lzap commented 2 years ago

> @lzap do you have time and a dev setup to check this?

I wish. Passing this on to @ezr-ondrej as their team now owns CRs.

mephmanx commented 2 years ago

Any update to this? I would really like to move to Xena. The code is here, can it be merged?

mephmanx commented 2 years ago

Anyone available to merge this PR?

mephmanx commented 2 years ago

Anyone available?

ShamoX commented 2 years ago

@mdarby?, @geemus, @Ladas, @ares, @gildub, @jjasghar, @kowsalyapalaniappan: since you own the gem on RubyGems, you have to respond to these calls (and the 9 other PRs and the 90-ish issues).

You could also propose or ask someone to become a maintainer (giving them the right to review/merge and the responsibility for keeping the project up to date).

I think you can safely ask any PR author whether they want those rights.

Otherwise I suggest that a new gem be forked from this one (I dislike this idea, but freezing all development is worse in my view).

ares commented 2 years ago

Hello,

first of all, sorry for such long inactivity.

We will try to spin up an internal OpenStack instance to test this with. If no regressions are found, I'm happy to merge and release a new version. We'll also take a look at other PRs and see what can make it in. At the same time, as you noticed, this project needs other active maintainers. If there's anyone interested in active maintainership, we'll be happy to enlarge the maintainers team. A good start would be to perform PR reviews and testing of existing PRs. I'm then happy to go through the release process with such a person.

geemus commented 2 years ago

@ares Thanks! I don't really have any of the setup/access any more to do this myself, unfortunately.

ShamoX commented 2 years ago

> first of all, sorry for such long inactivity.

It happens; the more important thing is to unblock things, as you are doing :)

> We will try to spin up an internal OpenStack instance to test this with. If no regressions are found, I'm happy to merge and release a new version. We'll also take a look at other PRs and see what can make it in. At the same time, as you noticed, this project needs other active maintainers. If there's anyone interested in active maintainership, we'll be happy to enlarge the maintainers team. A good start would be to perform PR reviews and testing of existing PRs. I'm then happy to go through the release process with such a person.

I may be able to set up an environment too and run the tests. I'll have to automate it as much as possible (if not already done) while I have time (I have some time currently), because I will lack time later on (so it doesn't really solve the problem...).

I'm on Japanese time and have some personal stuff to attend to, so I'll take a look in 10-ish hours.

ShamoX commented 2 years ago

For now I'm not able to test it; I have several errors, and I'm not sure whether they're linked to a lack of configuration (I guess they are).

ShamoX commented 2 years ago

@ares & @geemus it really only needs some testing now. I've reviewed the changes and they seem OK. Could you give me a hand setting up the test suite? You can contact me by email or on Gitter.

I just connected to the Gitter channel.

ShamoX commented 2 years ago

OK, I finally understood how to run the tests. They work well on the current origin/master and fail (spec: 10 errors; unit: not loading) on NormandLemay/master.

It would be nice to hook up some automated tests on Travis or another CI service. I'll do it on another branch and let you know.

geemus commented 2 years ago

@ShamoX Glad you got that working, sorry I wasn't able to respond quicker for you. Setting something more automatic up sounds great, I've been limited by not really having access to a setup to work against.

ShamoX commented 2 years ago

I set up GitHub Actions for that in #521. If you can check it @geemus and maybe merge it, it would help for the future and for this PR.

geemus commented 2 years ago

@ShamoX thanks, merged #521.

geemus commented 2 years ago

Going to close and reopen in the hopes the new CI/actions will pick it up.

ShamoX commented 2 years ago

It seems you need to do something as a maintainer to activate the actions:

> First-time contributors require maintainers to approve workflows in order to run.

geemus commented 2 years ago

@ShamoX ah yes, thanks for the nudge. Should be running shortly.

mephmanx commented 2 years ago

Any thoughts on the failures? They look like something was not configured correctly...

ShamoX commented 2 years ago

First, there was a problem with the modifications I proposed (corrected by @NormandLemay).

Now the problem is that we need to regenerate the cassettes against a newer OpenStack API; otherwise the Identity API (as recorded on cassette) does not return the right endpoints (I think).

I'm trying to dig into how to regenerate the cassettes; sorry for the delay.

ares commented 2 years ago

If I'm not mistaken, just delete spec/fixtures/openstack/* and rerun the tests; however, you must have an actual OpenStack instance and configure the URL and credentials via the OS_AUTH_URL, OS_USERNAME, and OS_PASSWORD environment variables.

I still haven't got an instance myself, but I found someone who can generally test that it works. Not sure if it will be possible to actually get new cassettes from them too. I'll also try deploying https://docs.openstack.org/devstack/latest/ and see if that works.
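
For reference, here is a rough sketch of the kind of VCR setup this implies; the gem's actual spec helper may differ, and the record-mode logic below is an assumption rather than the project's real configuration.

require 'vcr'

VCR.configure do |config|
  # Cassette directory mentioned above.
  config.cassette_library_dir = 'spec/fixtures/openstack'
  config.hook_into :webmock
  # Record fresh cassettes when OS_AUTH_URL points at a live cloud,
  # otherwise replay the committed ones.
  config.default_cassette_options = {
    record: ENV['OS_AUTH_URL'] ? :all : :none
  }
end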

ShamoX commented 2 years ago

Thanks, this let me easily see which files were involved in the VCRs and in authentication (I did not have a lot of time to search).

So in fact, you "just" have to set OS_AUTH_URL to bypass the stored VCR cassettes, at least to test it.

Nevertheless, I tried with my OVH credentials, and most of the requests come back unauthorised even though I created full-rights credentials and some Identity requests do work... I probably need to set up an OpenStack instance I can administer...

ShamoX commented 2 years ago

Hello all,

I successfully installed a self-hosted version of OpenStack (created a VM and installed it as described here).

A lot of tests didn't pass:

99 runs, 249 assertions, 3 failures, 57 errors, 10 skips

Some seem to be defects of the installation. Others seem a little weirder:

 28) Failure:
Fog::OpenStack::Image#test_0008_Adds and deletes image tags [/Users/rlaures/dev/tiers/fog/fog-openstack/spec/image_v2_spec.rb:248]:
--- expected
+++ actual
@@ -1 +1 @@
-["tag4", "tag1", "tag2", "tag3"]
+["tag1", "tag4", "tag2", "tag3"]

Maybe some real-time help would be useful, because I have used OpenStack as a user and never as an admin, so I might not be able to overcome those errors (easily).

Also I think I'll propose something that will cover #112.

ares commented 2 years ago

I just finished deploying a devstack too, but you beat me to running the tests. Can you try running them without this patch applied? If the patch doesn't cause any regressions, there's no point in blocking the merge, but the failures would reveal another problem that should be solved.

The test whose output you listed is clearly broken, as it relies on the order of the tags always being the same. The fix is to add .sort to both the expected and the actual value in that assertion.
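
To illustrate the suggested fix, here is a sketch of an order-insensitive version of that assertion; the surrounding setup and variable names are hypothetical, and the real spec in spec/image_v2_spec.rb will differ in detail.

expected = %w[tag1 tag2 tag3 tag4]
# Sorting both sides makes the assertion independent of the order
# in which the Image API happens to return the tags.
assert_equal expected.sort, image.tags.sort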

ShamoX commented 2 years ago

OK. I did it, and on master we have this:

99 runs, 242 assertions, 4 failures, 57 errors, 10 skips

The failures are:

ares commented 2 years ago

OK, so in other words, this PR doesn't introduce new errors or failures, right?

ShamoX commented 2 years ago

Yes. I would suggest making a prerelease after the merge, so that it isn't picked up widely by default but is still usable.

mephmanx commented 2 years ago

Yes, thank you all! Could you let me know when I could let the BOSH team know that this is ready?

In reply to @ares's review approval:

> @ares approved this pull request. Thanks @NormandLemay, @ShamoX and everyone else involved. Merging this in. I'll see if I can merge some other PRs and then will do a 1.1.0.pre release.

mephmanx commented 2 years ago

Did this get built/deployed out to the world yet or were there still other changes being merged into 1.1?

ShamoX commented 2 years ago

@ares do you need anything else to make at least the prerelease?

Best.

mephmanx commented 2 years ago

No, that was the big thing for me. Once this is released I need to take it right over to the BOSH team and work with them to get deployments working on OpenStack Xena. I was unable to deploy the director/jumpbox because the libraries it uses depend on fog, which used Cinder v2, and Xena was not happy with that. This is the first (and hopefully biggest) piece I need.

ares commented 2 years ago

Hello, the prerelease has been published at https://rubygems.org/gems/fog-openstack/versions/1.1.0.pre Please give it a test; if no regressions are reported, I'm happy to drop the .pre suffix.
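
For anyone who wants to try it ahead of the final release, a minimal Gemfile pin for the prerelease (Bundler will not select a .pre version unless it is requested explicitly):

# Gemfile
gem 'fog-openstack', '1.1.0.pre'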

jpalermo commented 2 years ago

@mephmanx, we see a PR to bump this release into the bosh-openstack-cpi release. We're a bit hesitant to do so with a pre-release version. Have you done enough validation that @ares could cut the 1.1.0 release?

mephmanx commented 2 years ago

I just saw this a bit ago... very much appreciate the effort and response! I can set up a test, but the last few weeks have been focused on offline/air-gapped installs of OpenStack/Cloud Foundry. I have new hardware coming tomorrow which I need to rack, and after that I should be able to get some time on it to load that build up, test, and publish results. It might take me a week or two, if that is OK. I will definitely get the testing in to help, as that will let me greenlight the effort to move to Xena and beyond.

yaguangtang commented 2 years ago

Someone has tested bosh-openstack-cpi-release with fog-openstack 1.1.0.pre and it works with the OpenStack Xena version: https://github.com/cloudfoundry/bosh-openstack-cpi-release/pull/246#issuecomment-1077461121

yaguangtang commented 2 years ago

Started installing CPI
Compiling package 'ruby-3.1.0-r0.81.0/9cf1a9582c595fd03eb9c73cd3c4b2d10de10c139f99ab0b9d1a71e5d2b91378'... Finished (00:06:33)
Compiling package 'bosh_openstack_cpi/2252c3fe19699092e30b13e7899b72480cccb64586364be8828f5bd1ee63d23c'... Finished (00:00:17)
Installing packages... Finished (00:00:01)
Rendering job templates... Finished (00:00:00)
Installing job 'openstack_cpi'... Finished (00:00:00)
Finished installing CPI (00:06:53)

Uploading stemcell 'bosh-openstack-kvm-ubuntu-bionic-go_agent/1.71'... Skipped [Stemcell already uploaded] (00:00:00)

Started deploying
Waiting for the agent on VM 'f21a253d-c862-4a16-b496-545cb34c3bd8'... Finished (00:00:00)
Running the pre-stop scripts 'unknown/0'... Finished (00:00:00)
Draining jobs on instance 'unknown/0'... Finished (00:00:32)
Stopping jobs on instance 'unknown/0'... Finished (00:00:01)
Running the post-stop scripts 'unknown/0'... Finished (00:00:00)
Unmounting disk 'd846a553-68aa-440d-b08e-aa1d4b1e8e8b'... Finished (00:00:01)
Deleting VM 'f21a253d-c862-4a16-b496-545cb34c3bd8'... Finished (00:00:33)
Creating VM for instance 'bosh/0' from stemcell '2599e8d0-86c3-420c-8cf7-dc94059a2d1a'... Finished (00:00:38)
Waiting for the agent on VM '2be7977b-2762-4bee-9490-fa9f15da415d' to be ready... Finished (00:00:54)
Attaching disk 'd846a553-68aa-440d-b08e-aa1d4b1e8e8b' to VM '2be7977b-2762-4bee-9490-fa9f15da415d'... Finished (00:00:36)
Rendering job templates... Finished (00:00:29)
Compiling package 'golang-1-linux/79f531850e62e3801f1dfa4acd11c421aebe653cd4316f6e49061818071bb617'... Skipped [Package already compiled] (00:00:01)
Compiling package 'openjdk_1.8.0/225f67373c9ad0a1da464aeb92f06207bd3e8da1'... Skipped [Package already compiled] (00:00:00)
Compiling package 'ruby-3.0.2-r0.67.0/6e7cdc8d83980db10cc330a5d2b034f40d5e3f187f6046f972df8a1f040125c6'... Skipped [Package already compiled] (00:00:00)
Compiling package 'ruby-3.1.0-r0.81.0/9cf1a9582c595fd03eb9c73cd3c4b2d10de10c139f99ab0b9d1a71e5d2b91378'... Finished (00:07:49)
Compiling package 'tini/3d7b02f3eeb480b9581bec4a0096dab9ebdfa4bc'... Skipped [Package already compiled] (00:00:00)
Compiling package 'bpm-runc/b268374cfecfbe74f06628bc277ffd16bf9e31e0'... Skipped [Package already compiled] (00:00:00)
Compiling package 'mysql/f2403855d0c82f38f5c4bd99d51a401bcb1be847a6d934123df91db3eaf4ac00'... Skipped [Package already compiled] (00:00:00)
Compiling package 'libpq/ecbfa62322b4124f25372a19d68b83295b4d290503153667ec378e3196c45f69'... Skipped [Package already compiled] (00:00:00)
Compiling package 'credhub/33ea568aad1d35e9522c56f792d3d4fc3cd5975d'... Skipped [Package already compiled] (00:00:01)
Compiling package 'gonats/f58980bd4b0436ff65f588627116dfff63f346f4d13175b7ba47380ab89e08a6'... Skipped [Package already compiled] (00:00:00)
Compiling package 'bosh-gcscli/52223432539bbd0607db053f542440869688b4404dd65f2ddf33c2d195b1b891'... Skipped [Package already compiled] (00:00:00)
Compiling package 'uaa/4f77a97610b962f50d0c21067b48bd467db6066855318c766af8bc1cb990e799'... Skipped [Package already compiled] (00:00:04)
Compiling package 'health_monitor/01845aa5368710c45544d3fefac7cabd63884954b9c9e9a8d38a41784510e3ce'... Skipped [Package already compiled] (00:00:00)
Compiling package 'bosh_openstack_cpi/2252c3fe19699092e30b13e7899b72480cccb64586364be8828f5bd1ee63d23c'... Finished (00:00:21)
Compiling package 'bpm/5a4640c75bedcc0ac3ee538bb412439ed93952af'... Skipped [Package already compiled] (00:00:00)
Compiling package 'luna-hsm-client-7.4/746f3c30aadc0af7afc2d5cddcc16d8836a8f845'... Skipped [Package already compiled] (00:00:00)
Compiling package 'director/ada2b7130ef8c0f94a419011c7037581f42b916b66365d0d8847864ca8219d23'... Skipped [Package already compiled] (00:00:00)
Compiling package 'postgres-9.4/601f3635b43d0e7ba3ae866e3bd69425cdf33f7fb34a7f1bb21cc26818fb598e'... Skipped [Package already compiled] (00:00:00)
Compiling package 'verify_multidigest/64d1958934e10a0eccc05ddf0d7ba0c8215e6f6d4c227cb93998087335378fa8'... Skipped [Package already compiled] (00:00:00)
Compiling package 'nginx/ea3eadaa82bb9344018a8798a825b98315b1195bb1d495257f38421b0b7618a5'... Skipped [Package already compiled] (00:00:00)
Compiling package 'postgres-10/708f8446db4ac7bb21bddce9938e217c741a6e6f82f6209f7e6f6a2b5b25eed3'... Skipped [Package already compiled] (00:00:00)
Compiling package 's3cli/7e752dee192da026f6a0cdf2653b855cc6efbe6b041564660f8520c39ddd5a78'... Skipped [Package already compiled] (00:00:00)
Compiling package 'davcli/58f558960854f58c55e3d506d3906019178dbc189fbbed1616b8b3c7c02142ea'... Skipped [Package already compiled] (00:00:00)
Updating instance 'bosh/0'... Finished (00:04:18)
Waiting for instance 'bosh/0' to be running... Finished (00:01:35)
Running the post-start scripts 'bosh/0'... Finished (00:00:34)

Tested with fog-openstack 1.1.0.pre: the CPI works with BOSH and the OpenStack Cinder v3 API endpoint.

@ares please help release fog-openstack 1.1.0

[root@kolla ~]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root@kolla ~]# nova list --all-tenants
| ID | Name | Tenant ID | Status | Task State | Power State | Networks |
| e72d097f-b49a-4c0c-a8a4-90a256325b5f | amphora-2bc72ebf-60ad-43cc-aa47-c92737728b1a | fdcf130138ba4c3cb3b721c32acae4e7 | ACTIVE | - | Running | cf-z0=10.0.16.38; lb-mgmt-net=10.1.0.135 |
| 1ead7103-8b14-4cf9-8667-47553d420e0c | api/672eb1b6-ad1b-44e6-8c37-97caa41beb95 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.56 |
| 2be7977b-2762-4bee-9490-fa9f15da415d | bosh/0 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | bbl-env-great-salt-2022-06-02t00-18z-network=10.0.1.6 |
| e0d0cdb9-8cda-4604-85bd-6f141ea540e1 | cc-worker/b3ff3b38-5db7-4375-a440-7e585fc03f50 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.57 |
| bb6878e7-e401-4d25-9dfa-c83b1c4b4e6d | credhub/1a68fb02-a6f5-468f-99e4-6dce1b096493 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.65 |
| 7d2f245c-48d4-4272-adab-f6b71635ec82 | database/a72ea8f7-1104-42b6-8b3a-47baa17bbd9a | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.52 |
| 998e0b82-a314-4fdc-a296-9c65e46c8daa | diego-api/ffa9d3af-3b78-4fde-a42f-5d90f10d8d71 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.53 |
| d9430455-2dcf-4c2c-b34f-c30b6f352516 | diego-cell/ed522ba1-d78e-4a89-8624-99bb413784e8 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.63 |
| e57a1281-5601-45b5-859c-734072629e9b | doppler/70e40230-b2b9-44fd-8c6c-2c587eb86fca | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.62 |
| c4c00376-72c1-4b93-b9bd-bd82f49af292 | jumpbox/0 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | bbl-env-great-salt-2022-06-02t00-18z-network=10.0.1.5, 10.0.200.168 |
| 74ba64f9-d68f-47f4-8157-e16ad1e9ba69 | log-api/887b6a06-f92d-4e9f-8911-6b52476f781b | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.64 |
| f25b99f1-8c47-47e2-8d83-4f37d90e9fb4 | log-cache/7703503e-9bd8-416d-a66a-2a9dfc2d7814 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.61 |
| c9c04fba-c0d0-471d-b680-0804e9167a02 | nats/043d4f5b-7ee7-4921-8e0c-0154d0543490 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.51 |
| cbc3f2b0-ea15-4ae1-a30c-fe0edbdf9bc5 | router/ea4c32e4-4d61-41ed-90a5-a072fa358500 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.59 |
| 22f2ca48-0270-4b90-8f2b-fb77a10494c9 | scheduler/834b623d-73db-4db4-8491-02ae1709c77e | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.58 |
| f6a3485f-712c-4198-86c5-04dfabd9f35f | singleton-blobstore/436f1340-61ea-4f00-b1f4-bdc34349fdd5 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.55 |
| a057c154-c923-4b92-85d3-926f660cd205 | uaa/fe2a7366-dfbd-4e5e-b944-aa3d66a05645 | 6bec358f20b449fcbcb65c821d982cc6 | ACTIVE | - | Running | cf-z0=10.0.16.54 |

[root@kolla ~]# cinder list --all-tenants
WARNING:cinderclient.shell:API version 3.68 requested,
WARNING:cinderclient.shell:downgrading to 3.64 based on server support.
| ID | Tenant ID | Status | Name | Size | Volume Type | Bootable | Attached to |
| d846a553-68aa-440d-b08e-aa1d4b1e8e8b | 6bec358f20b449fcbcb65c821d982cc6 | in-use | volume-f6adcb5d-f31a-498e-bda7-9fe3ba5d5724 | 64 | DEFAULT | false | 2be7977b-2762-4bee-9490-fa9f15da415d |

ares commented 2 years ago

Hi, I could do the 1.0.0 release, but I'd like to first clarify the potential regression linked above

mephmanx commented 2 years ago

The 1.1.0 release, correct? Dropping the .pre. Are you talking about the test outputs linked above? Do we need other testing or changes? I still have this in my pipeline and need to make the leap... things do work on Xena with Cinder v3 on 1.1.0.pre. Couldn't users also add a version specifier to the package if it causes an issue for them? I know I have had to do that often, as getting the latest has often broken many things for me over the past year... it is a function of what I have had to build around OpenStack and the other infra components I use. If there is anything I can try to do, or ask someone on my team to do, I can. If we just need to get hold of whoever reported the test results above to get their feeling on them, hopefully they will see this and can take a look...

Thank you again for all the effort on this.

yaguangtang commented 2 years ago

@ares the above regression seems to be a config issue and has been closed and updated, so there is no regression for 1.1.0 so far.

yaguangtang commented 2 years ago

Seriously, please consider doing a 1.1.0 release, thanks.

ares commented 2 years ago

OK, thanks for the confirmation on the misconfiguration. I just released 1.1.0 without the .pre. Hope this helps: https://rubygems.org/gems/fog-openstack/versions/1.1.0