Open tanshaolong opened 5 years ago
@tanshaolong I don't really understand what you mean. Are you saying that you create the volumes in cinderlib but you don't see them being created in the backend? Or are you saying that you have many volumes in the backend but cinderlib only sees the volumes you have created with it?
If it's the second, then it's a problem with the documentation, that it's not clear.
Cinderlib will only allow you to manage the resources that you have created with it, it will not see any other resources that are present in the backend.
We use the metadata persistence backend (for example a database) to know which resources are ours; we don't go to the storage backend and ask which volumes exist.
The `refresh` method is meant to reload the cached volumes, so if you create a new volume in a different Python program that uses the same metadata persistence location, you'll be able to see the volumes created by that other program.
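The shared-persistence idea can be illustrated with a toy stand-in using plain `sqlite3` (this is an analogy, not the cinderlib API; the file name, table, and column names here are invented for the example):

```python
import sqlite3


def _connect(db_path):
    # Open the shared metadata store, creating the schema if needed.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS volumes (name TEXT PRIMARY KEY, size_gb INTEGER)")
    return conn


def create_volume(db_path, name, size_gb):
    # "Program A": persist a new volume record in the shared store.
    with _connect(db_path) as conn:
        conn.execute("INSERT INTO volumes VALUES (?, ?)", (name, size_gb))


def list_volumes(db_path):
    # "Program B": re-read the store, the way Backend.refresh() reloads
    # cinderlib's cached volume list from the persistence location.
    with _connect(db_path) as conn:
        return sorted(row[0] for row in conn.execute("SELECT name FROM volumes"))
```

Both "programs" see the same volume list because they read the same persistence location, which is exactly why `refresh` only surfaces volumes that some cinderlib-based program recorded there.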
Hi Gorka,
Sorry, my communication was not clear. The second case is what I mean.
I have some scenarios where I need to manage resources that are present in the backend but were not created by cinderlib. Do you have any suggestions for my questions? Thanks.
Thanks Ray
Managing existing volumes is not supported by cinderlib, and the IBM Cinder driver doesn't support the unmanage/manage commands (although this may not have helped).
If you really need this, then you'll have to hack it yourself with your own Python code. The actual code will depend on whether your code will be the only one managing the storage or if other applications/users will be creating new volumes while you are running.
If your code is the only one that will manage the volumes, then you need a "migration program" that will be executed only once and will list the existing volumes in the backend and create cinderlib volume objects from them and then save them.
If you want to be able to manage any new volume that appears in the backend, then you'll need reconciliation code that is able to list volumes, see which ones are new or have been removed and update the persistence metadata.
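The comparison step of that reconciliation code can be sketched with plain set arithmetic (a minimal sketch; the actual import would create `cinderlib.Volume` objects and save them, and removals would drop the stale metadata records):

```python
def reconcile(backend_names, persisted_names):
    """Compare the volume names reported by the storage backend with the
    names already tracked in the persistence metadata.

    Returns (to_import, to_forget): names that exist on the backend but
    are not yet persisted, and names that are persisted but no longer
    exist on the backend.
    """
    backend = set(backend_names)
    persisted = set(persisted_names)
    return sorted(backend - persisted), sorted(persisted - backend)
```

Running this periodically (or on demand) keeps the persistence metadata in step with volumes that appear or disappear outside cinderlib's control.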
In either case you'll want to store the real name of those imported volumes in the `id` field, but for that you'll need to call `global_setup` with `non_uuid_ids=True` and `volume_name_template='%s'`, both in the importing script and in your final application.
You'll probably be able to get a list of online volumes, assuming your backend variable is called `lvm`, by doing something like `lvm.driver._master_backend_helpers.ssh.lsvdisks_from_filter('status', 'online')`. Then you'll have to parse the output and use it to call `cinderlib.Volume` to instantiate an object, and then just save it.
Pseudo code for the importer could be something like:

```python
import math

import cinderlib

persistence_config = {'storage': 'db', 'connection': 'sqlite:///cl.sqlite'}
cinderlib.setup(persistence_config=persistence_config,
                non_uuid_ids=True,
                volume_name_template='%s')

svc = cinderlib.Backend(
    volume_driver='cinder.volume.drivers.ibm.storwize_svc.storwize_svc_fc.StorwizeSVCFCDriver',
    san_ip='...',
    san_login='superuser',
    san_password='********',
    storwize_svc_volpool_name='mdiskgrp1',
    volume_backend_name='svc1234')

output = svc.driver._master_backend_helpers.ssh.lsvdisks_from_filter('status', 'online')
dict_volumes = parse_lsvdisks_output(output, delimiter='!')
for dict_vol in dict_volumes:
    # Use the existing backend volume name as the id (requires non_uuid_ids=True)
    vol = cinderlib.Volume(id=dict_vol['name'],
                           size=math.ceil(int(dict_vol['size_bytes']) / 1024 / 1024 / 1024))
    vol.save()
```
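The `parse_lsvdisks_output` helper in the pseudo code above is left undefined. A minimal sketch, assuming the SVC CLI returns a delimited header row followed by one row per volume (the real column names and delimiter handling depend on the `lsvdisk` version, so treat this as illustration only):

```python
def parse_lsvdisks_output(output, delimiter='!'):
    """Turn delimited lsvdisk-style output into a list of dicts.

    The first non-empty line is taken as the header; every following
    line becomes one dict keyed by those header names.
    """
    lines = [ln for ln in output.strip().splitlines() if ln]
    headers = lines[0].split(delimiter)
    return [dict(zip(headers, ln.split(delimiter))) for ln in lines[1:]]
```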
Then in your program you would need to do the same call to `global_setup` and initialize the backend the same way.
You would probably need to do more things, since you would also want to import the QoS settings and other options.
Sorry I can't help you more, as I don't have access to an SVC system and I have never used one.
Thanks Gorka, I ran into some issues when saving the existing volumes that were not created by cinderlib, but I think your solution works. I need to update my code to fix them. Thanks for your great help.
I was referring to the "Resource tracking" section in the cinderlib docs. I thought cinderlib would load all of the backend's volume instances for tracking via `Backend.refresh()`, but the actual result is not that. Is my understanding of the doc right? Thank you
Below are the details of the test: I create two volumes with `create_volume` and then run `backend.refresh()`. When I check the volumes again, there are still only two; the pre-existing IBM SVC volumes do not show up.
Test environment:
- Cinderlib version: v0.3.9
- Cinder release: Pike
- Storage: IBM SVC V7000
- Versions: Unknown
- Connection type: FC
Test steps and result:
Reference (cinderlib doc): https://docs.openstack.org/cinderlib/latest/topics/tracking.html
Thanks Ray