yongshengma opened this issue 6 years ago
BTW, I conducted a test on removing an offline node using the remove node <ip> command. It works fine!
However, I found that the DTL states of some vDisks were red. I also found that the preset didn't switch to a usable policy for two nodes (only one alba-asd per node). After I manually changed the policy order by moving the 2-dup policy ahead, the DTL states became green. The policies I defined are (1, 1, 1, 2) and (1, 2, 1, 3). I'm not sure whether this operation is related to the DTL state.
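For what it's worth, here is a minimal sketch of why I think the order mattered. It assumes a policy (k, m, c, x) needs k + m fragments on distinct ASDs; this is an illustration of that assumption, not the actual ALBA selection logic:

def is_usable(policy, asd_count):
    # Assumption for illustration: a policy (k, m, c, x) stores k + m fragments
    # on distinct ASDs, so it needs at least k + m ASDs to be satisfiable.
    k, m, c, x = policy
    return k + m <= asd_count

asd_count = 2  # two nodes, one alba-asd per node
for policy in [(1, 1, 1, 2), (1, 2, 1, 3)]:
    print(policy, is_usable(policy, asd_count))  # (1, 1, 1, 2) -> True, (1, 2, 1, 3) -> False

With only two ASDs available, the 3-fragment policy (1, 2, 1, 3) cannot be satisfied, which would explain why moving the 2-dup policy (1, 1, 1, 2) ahead helped.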
Hi @yongshengma
Which Framework version are you using? I can't seem to find any code pointing to self.identifier within the try block of the pyrakoon client. You can patch this by using self._identifier instead of self.identifier in that part of the code, as the identifier is a 'private' property (hence the leading underscore).
Best regards
Hi @JeffreyDevloo
This is the Fargo version:
ii blktap-openvstorage-utils 2.0.90-2ubuntu5 amd64 utilities to work with VHD disk images files
ii libblktapctl0-openvstorage 2.0.90-2ubuntu5 amd64 Xen API blktapctl shared library (shared library)
ii libvhd0-openvstorage 2.0.90-2ubuntu5 amd64 VHD file format access library
ii libvhdio-2.0.90-openvstorage 2.0.90-2ubuntu5 amd64 Xen API blktap shared library (shared library)
ii openvstorage 2.9.5-1 amd64 openvStorage
ii openvstorage-backend 1.9.1-1 amd64 openvStorage Backend plugin
ii openvstorage-backend-core 1.9.1-1 amd64 openvStorage Backend plugin core
ii openvstorage-backend-webapps 1.9.1-1 amd64 openvStorage Backend plugin Web Applications
ii openvstorage-core 2.9.5-1 amd64 openvStorage core
ii openvstorage-hc 1.9.1-1 amd64 openvStorage Backend plugin HyperConverged
ii openvstorage-sdm 1.9.0-1 amd64 Open vStorage Backend ASD Manager
ii openvstorage-webapps 2.9.5-1 amd64 openvStorage Web Applications
ii tgt-openvstorage 99:1.0.63-0ovs1.4-1ubuntu1.1 amd64 Linux SCSI target user-space daemon and too
I will try your suggestion later.
Yes, it's obviously a bug in the code: _identifier is defined instead of identifier.
def __init__(self, cluster, nodes):
    """
    Initializes the client
    """
    cleaned_nodes = {}
    for node, info in nodes.iteritems():
        cleaned_nodes[str(node)] = ([str(entry) for entry in info[0]], int(info[1]))
    self._config = ArakoonClientConfig(str(cluster), cleaned_nodes)
    self._client = ArakoonClient(self._config)
    self._identifier = int(round(random.random() * 10000000))  # defined as _identifier, not identifier
    self._lock = Lock()
    self._batch_size = 500
    self._sequences = {}
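A minimal stand-alone sketch (not the actual pyrakoon code) of the problem and of the fix you suggested:

import random
from threading import Lock

class ClientStub(object):
    # Stripped-down stand-in for the client above, keeping only the relevant attributes.
    def __init__(self):
        self._identifier = int(round(random.random() * 10000000))
        self._lock = Lock()

    def broken(self):
        return self.identifier   # AttributeError: 'identifier' was never defined

    def fixed(self):
        return self._identifier  # the attribute that actually exists

stub = ClientStub()
print(stub.fixed())    # works
print(stub.broken())   # raises AttributeError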
I got an error when I removed an online node. It didn't abort but continued until a success message was returned. Everything also looks OK in the GUI and that node is gone, except that another node showed a warning for a few minutes.