Thank you for your report. We'll check this issue next Monday.
I forgot to change the replication-number setting in the configuration file last time. I tried the same test with the fixed configuration file, but it returns the same responses.
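For reference, the consistency settings I corrected look roughly like this (shown with 1.x-style leo_manager.conf key names, which are my assumption and may not match the exact file format; the values match the status output below):

## Consistency level (key names assumed from the 1.x leo_manager.conf format)
consistency.num_of_replicas = 2
consistency.write = 1
consistency.read = 1
consistency.delete = 1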
status
[System config]
system version : 0.14.6
total replicas : 2
# of successes of R : 1
# of successes of W : 1
# of successes of D : 1
# of DC-awareness replicas : 0
# of Rack-awareness replicas : 0
ring size : 2^128
ring hash (cur) : 831f612a
ring hash (prev) : 831f612a
[Node(s) state]
-------------------------------------------------------------------------------------------------------
type node state ring (cur) ring (prev) when
-------------------------------------------------------------------------------------------------------
S storage_0@192.168.101.123 running 831f612a 831f612a 2014-03-31 14:46:10 +0900
S storage_0@192.168.101.124 running 831f612a 831f612a 2014-03-31 14:46:10 +0900
S storage_0@192.168.101.125 running 831f612a 831f612a 2014-03-31 14:46:10 +0900
S storage_0@192.168.101.126 running 831f612a 831f612a 2014-03-31 14:46:10 +0900
G gateway_0@192.168.101.122 running 831f612a 831f612a 2014-03-31 14:46:10 +0900
status
[System config]
System version : 1.0.0
Cluster Id : leofs_1
DC Id : dc_1
Total replicas : 2
# of successes of R : 1
# of successes of W : 1
# of successes of D : 1
# of DC-awareness replicas : 0
# of Rack-awareness replicas : 0
ring size : 2^128
Current ring hash : 1503900f
Prev ring hash : 1503900f
[Node(s) state]
-------+--------------------------------+--------------+----------------+----------------+----------------------------
type | node | state | current ring | prev ring | updated at
-------+--------------------------------+--------------+----------------+----------------+----------------------------
S | storage_0@192.168.101.123 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
S | storage_0@192.168.101.124 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
S | storage_0@192.168.101.125 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
S | storage_0@192.168.101.126 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
G | gateway_0@192.168.101.122 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
status storage_0@192.168.101.123
[config]
version : 0.14.4
# of vnodes : 168
group level-1 :
group level-2 :
obj-container : [[{path,"/leofs"},{num_of_containers,8}]]
log dir : /usr/local/leofs/current/leo_storage/log
[status-1: ring]
ring state (cur) : 831f612a
ring state (prev) : 831f612a
[status-2: erlang-vm]
vm version : 5.9.3.1
total mem usage : 26146288
system mem usage : 11515568
procs mem usage : 14616520
ets mem usage : 775352
procs : 231/1048576
kernel_poll : true
thread_pool_size : 32
[status-3: # of msgs]
replication msgs : 0
vnode-sync msgs : 0
rebalance msgs : 0
suspend storage_0@192.168.101.123
[ERROR] Node not exist
status
[System config]
System version : 1.0.0
Cluster Id : leofs_1
DC Id : dc_1
Total replicas : 2
# of successes of R : 1
# of successes of W : 1
# of successes of D : 1
# of DC-awareness replicas : 0
# of Rack-awareness replicas : 0
ring size : 2^128
Current ring hash : 1503900f
Prev ring hash : 1503900f
[Node(s) state]
-------+--------------------------------+--------------+----------------+----------------+----------------------------
type | node | state | current ring | prev ring | updated at
-------+--------------------------------+--------------+----------------+----------------+----------------------------
S | storage_0@192.168.101.123 | restarted | 000000-1 | 000000-1 | 2014-03-31 14:57:43 +0900
S | storage_0@192.168.101.124 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
S | storage_0@192.168.101.125 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
S | storage_0@192.168.101.126 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
G | gateway_0@192.168.101.122 | running | 831f612a | 831f612a | 2014-03-31 14:46:10 +0900
status storage_0@192.168.101.123
[config]
version : 1.0.0-pre3
# of vnodes : 168
group level-1 :
group level-2 :
obj-container : [[{path,"/leofs"},{num_of_containers,8}]]
log dir : /usr/local/leofs/current/leo_storage/log/erlang
[status-1: ring]
ring state (cur) : 000000-1
ring state (prev) : 000000-1
[status-2: erlang-vm]
vm version : 5.9.3.1
total mem usage : 55757880
system mem usage : 45413488
procs mem usage : 10365256
ets mem usage : 4875992
procs : 294/1048576
kernel_poll : true
thread_pool_size : 32
[status-3: # of msgs]
replication msgs : 0
vnode-sync msgs : 0
rebalance msgs : 0
resume storage_0@192.168.101.123
[ERROR] Node not exist
Sharing my operation log:
I tried to upgrade LeoFS from 0.14.6 to the latest version. After the upgrade, I cannot execute the suspend and resume commands for storage nodes; they respond with "[ERROR] Node not exist".
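As a minimal reproduction, the failing sequence on the manager console is (host and port are placeholders; the default leo_manager_0 console port is an assumption on my side):

# connect to the manager console (<manager-host> is a placeholder; port 10010 assumed)
$ telnet <manager-host> 10010
suspend storage_0@192.168.101.123
[ERROR] Node not exist
resume storage_0@192.168.101.123
[ERROR] Node not exist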
Settings overview
Operation logs
I followed this document: http://www.leofs.org/docs/admin_guide.html#upgrade-leofs-v0-14-9-v0-16-0-v0-16-5-to-v0-16-8-or-v1-0-0-pre3
On manager node
On storage node
Error logs
At manager_0
At manager_1
At gateway_0