Closed — moly7x closed this issue 2 years ago
1) Regarding QEMU: install all `qemu*` packages from my repo, not just `qemu`:
```
ii qemu 1:5.2+dfsg-10+vitastor1 amd64 fast processor emulator, dummy package
ii qemu-block-extra 1:5.2+dfsg-10+vitastor1 amd64 extra block backend modules for qemu-system and qemu-utils
ii qemu-system-arm 1:5.2+dfsg-10+vitastor1 amd64 QEMU full system emulation binaries (arm)
ii qemu-system-common 1:5.2+dfsg-10+vitastor1 amd64 QEMU full system emulation binaries (common files)
ii qemu-system-data 1:5.2+dfsg-10+vitastor1 all QEMU full system emulation (data files)
ii qemu-system-gui:amd64 1:5.2+dfsg-10+vitastor1 amd64 QEMU full system emulation binaries (user interface and audio support)
ii qemu-system-x86 1:5.2+dfsg-10+vitastor1 amd64 QEMU full system emulation binaries (x86)
ii qemu-utils 1:5.2+dfsg-10+vitastor1 amd64 QEMU utilities
```
Then it will work. I should probably add a dependency on qemu-system-x86 or something like that instead of just the "qemu" package.

P.S.: If you rebuild QEMU yourself, you should also rebuild Vitastor. QEMU lacks out-of-tree module build support, so it's a rather ugly scheme for now. I think I'll change that scheme by making a separate vitastor-dev package and moving the driver into the QEMU patch; then it'll be slightly less ugly :)
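For reference, installing the whole set might look like this on Debian (a sketch; package names are taken from the dpkg listing above, adjust to your repo setup):

```shell
# Install all qemu* packages from the Vitastor repo, not just "qemu"
# (the "qemu" package itself is only a dummy metapackage).
apt-get update
apt-get install -y qemu qemu-block-extra qemu-system-x86 \
    qemu-system-common qemu-system-data qemu-system-gui qemu-utils
```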
I've just pushed a fix. Can you take `mon/mon.js` from the master branch, put it into /usr/lib/vitastor/mon/, and recheck?
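One way to do that (a sketch; assumes the GitHub mirror of the repo and the Debian install path mentioned above):

```shell
# Fetch current master and swap in the updated monitor script,
# then restart the monitor so it picks up the change.
git clone https://github.com/vitalif/vitastor.git
cp vitastor/mon/mon.js /usr/lib/vitastor/mon/mon.js
systemctl restart vitastor-mon
```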
2) Regarding `etcd` autostart: I'm not sure why it doesn't start automatically in your case; the systemd unit seems enabled. Can you check its logs at the moment when it doesn't start?
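A sketch of how to pull those logs for the current boot, assuming the unit is named `etcd` as shown later in the thread:

```shell
# Did the unit try to start at boot, and why did it fail?
systemctl status etcd
journalctl -b -u etcd --no-pager
# Is it actually enabled for boot?
systemctl is-enabled etcd
```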
After I applied the new `mon/mon.js` and rechecked, I still only got bad keys. This is exactly what was going on a few days ago when I was trying to fix BigInt in `mon.js`. (No success, so I created this issue.)

```
Oct 21 13:18:01 controller node[22054]: Became master
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1 = {"used_raw_tb":0.001953125}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1011 = {"used_raw_tb":"1835008"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/104 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/111 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1119 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1187 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1237 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/124 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1240 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1258 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1349 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1354 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1374 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1382 = {"used_raw_tb":"917504"}
Oct 21 13:18:01 controller node[22054]: Bad key in etcd: /vitastor/pool/stats/1434 = {"used_raw_tb":"917504"}
```
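The string values like `"917504"` are a serialization artifact: `JSON.stringify` throws on BigInt, so code that serializes BigInt-derived stats typically falls back to converting them to strings, which then fail numeric validation on read-back. A minimal node sketch of the pitfall (illustrative only, not the actual mon.js code):

```shell
node -e '
// A BigInt-derived stat, similar in spirit to mon.js space accounting.
const stats = { used_raw_tb: 131072n * 7n };
// JSON.stringify cannot emit BigInt, so a replacer that falls back to
// String() produces "917504" (a string) instead of 917504 (a number).
const out = JSON.stringify(stats, (k, v) => typeof v === "bigint" ? String(v) : v);
console.log(out);
console.log(typeof JSON.parse(out).used_raw_tb);
'
```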
`etcd` service log after I started it manually. I don't know why it doesn't start automatically 🥲🥲
```
-- Boot 61501c002ffb464b9340ea41dcfcff00 --
Oct 21 13:11:13 controller systemd[1]: Starting etcd for vitastor...
Oct 21 13:11:13 controller systemd[1]: Started etcd for vitastor.
Oct 21 13:11:13 controller etcd[12229]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Oct 21 13:11:13 controller etcd[12229]: etcd Version: 3.4.13
Oct 21 13:11:13 controller etcd[12229]: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Oct 21 13:11:13 controller etcd[12229]: Git SHA: 416381529
Oct 21 13:11:13 controller etcd[12229]: Go Version: go1.14.4
Oct 21 13:11:13 controller etcd[12229]: Go OS/Arch: linux/amd64
Oct 21 13:11:13 controller etcd[12229]: setting maximum number of CPUs to 4, total number of available CPUs is 4
Oct 21 13:11:13 controller etcd[12229]: the server is already initialized as member before, starting as etcd member...
Oct 21 13:11:13 controller etcd[12229]: name = etcd0
Oct 21 13:11:13 controller etcd[12229]: data dir = /var/lib/etcd0.etcd
Oct 21 13:11:13 controller etcd[12229]: member dir = /var/lib/etcd0.etcd/member
Oct 21 13:11:13 controller etcd[12229]: heartbeat = 100ms
Oct 21 13:11:13 controller etcd[12229]: election = 1000ms
Oct 21 13:11:13 controller etcd[12229]: snapshot count = 100000
Oct 21 13:11:13 controller etcd[12229]: advertise client URLs = http://127.0.0.1:2379
Oct 21 13:11:13 controller etcd[12229]: initial advertise peer URLs = http://127.0.0.1:2380
Oct 21 13:11:13 controller etcd[12229]: initial cluster =
Oct 21 13:11:13 controller etcd[12229]: MaxRequestBytes 104857600 exceeds maximum recommended size 10485760
Oct 21 13:11:13 controller etcd[12229]: check file permission: directory "/var/lib/etcd0.etcd" exist, but the permission is "drwxr-xr-x". The recommended permission is "-rwx------" to prevent possible unprivileged access to the data.
Oct 21 13:11:13 controller etcd[12229]: restarting member ffbdd670f3d57de4 in cluster fbe9fd6e26158ef1 at commit index 79
Oct 21 13:11:13 controller etcd[12229]: raft2021/10/21 13:11:13 INFO: ffbdd670f3d57de4 switched to configuration voters=()
Oct 21 13:11:13 controller etcd[12229]: raft2021/10/21 13:11:13 INFO: ffbdd670f3d57de4 became follower at term 2
Oct 21 13:11:13 controller etcd[12229]: raft2021/10/21 13:11:13 INFO: newRaft ffbdd670f3d57de4 [peers: [], term: 2, commit: 79, applied: 0, lastindex: 79, lastterm: 2]
Oct 21 13:11:13 controller etcd[12229]: simple token is not cryptographically signed
Oct 21 13:11:13 controller etcd[12229]: restore compact to 50
Oct 21 13:11:13 controller etcd[12229]: starting server... [version: 3.4.13, cluster version: to_be_decided]
Oct 21 13:11:13 controller etcd[12229]: raft2021/10/21 13:11:13 INFO: ffbdd670f3d57de4 switched to configuration voters=(18428121030885473764)
Oct 21 13:11:13 controller etcd[12229]: added member ffbdd670f3d57de4 [http://127.0.0.1:2380] to cluster fbe9fd6e26158ef1
Oct 21 13:11:13 controller etcd[12229]: set the initial cluster version to 3.4
Oct 21 13:11:13 controller etcd[12229]: enabled capabilities for version 3.4
Oct 21 13:11:13 controller etcd[12229]: listening for peers on 127.0.0.1:2380
Oct 21 13:11:15 controller etcd[12229]: raft2021/10/21 13:11:15 INFO: ffbdd670f3d57de4 is starting a new election at term 2
Oct 21 13:11:15 controller etcd[12229]: raft2021/10/21 13:11:15 INFO: ffbdd670f3d57de4 became candidate at term 3
Oct 21 13:11:15 controller etcd[12229]: raft2021/10/21 13:11:15 INFO: ffbdd670f3d57de4 received MsgVoteResp from ffbdd670f3d57de4 at term 3
Oct 21 13:11:15 controller etcd[12229]: raft2021/10/21 13:11:15 INFO: ffbdd670f3d57de4 became leader at term 3
Oct 21 13:11:15 controller etcd[12229]: raft2021/10/21 13:11:15 INFO: raft.node: ffbdd670f3d57de4 elected leader ffbdd670f3d57de4 at term 3
Oct 21 13:11:15 controller etcd[12229]: ready to serve client requests
Oct 21 13:11:15 controller etcd[12229]: published {Name:etcd0 ClientURLs:[http://127.0.0.1:2379]} to cluster fbe9fd6e26158ef1
Oct 21 13:11:15 controller etcd[12229]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Oct 21 13:11:26 controller etcd[12229]: request "header:<ID:9071512588854003227 > lease_revoke:<id:7de47ca2fa208616>" with result
```
Before I start it:

```
systemctl status etcd
● etcd.service - etcd for vitastor
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: inactive (dead)
```
> 1. After I apply the new `mon/mon.js` and recheck, I only get bad keys. This is exactly what was going on a few days ago when I was trying to fix BigInt in `mon.js`. (No success, so I created this issue.)
OK, I see. Please take it from master and recheck again :-) It seems to be related to situations when your OSDs have data in pools not listed in config/pools.
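To see which pool stats keys are left over from pools no longer defined in config/pools, the two etcd prefixes can be compared directly (a sketch using plain etcdctl v3 against the keys shown in the log above):

```shell
# Pools currently defined in the cluster config:
etcdctl get --print-value-only /vitastor/config/pools
# Per-pool stats keys, including ones left behind by deleted pools:
etcdctl get --prefix --keys-only /vitastor/pool/stats/
```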
=)) Thanks, it works.
```
Oct 22 03:49:49 controller node[33528]: Waiting to become master
Oct 22 03:49:54 controller node[33528]: Waiting to become master
Oct 22 03:49:59 controller node[33528]: Waiting to become master
Oct 22 03:50:04 controller node[33528]: Became master
Oct 22 03:50:10 controller node[33528]: Data movement: 256 pgs, 512 pg*osds = 66.67 %
Oct 22 03:50:10 controller node[33528]: Total space (raw): 0 TB, space efficiency: 0 %
Oct 22 03:50:10 controller node[33528]: PG configuration successfully changed
Oct 22 03:50:13 controller node[33528]: Data movement: 256 pgs, 512 pg*osds = 66.67 %
Oct 22 03:50:13 controller node[33528]: Total space (raw): 0.02 TB, space efficiency: 100 %
Oct 22 03:50:13 controller node[33528]: PG configuration successfully changed
```
Hi @vitalif, I hope you can help me solve this issue soon 😢😢. I really don't know what I should do now.

Host:

Cinder_volume (inside container):
Problem 1 (QEMU):

Before rebuilding QEMU: I had already installed vitastor-cli inside the Cinder_volume container, but when I try using `qemu-img` (inside the Cinder_volume container), it always has a problem (with both methods).

After rebuilding QEMU: I rebuilt QEMU (1:4.2-3ubuntu6.17), applying the patch vitastor/patch/qemu-4.2-vitastor.patch, but it still has the same problem.

This is how I rebuild QEMU (I build it in another VM, Ubuntu 20.04):
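For comparison, the usual Debian/Ubuntu way to rebuild a patched QEMU looks roughly like this (a sketch; assumes deb-src entries are enabled, and the source directory name depends on the exact version):

```shell
# Fetch the distro source package and its build dependencies.
apt-get source qemu
apt-get build-dep -y qemu
cd qemu-4.2/                       # directory name depends on the version
# Apply the Vitastor driver patch from the vitastor source tree.
patch -p1 < ../vitastor/patch/qemu-4.2-vitastor.patch
# Build unsigned binary packages.
dpkg-buildpackage -b -uc -us
```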
Problem 2 (BigInt):

Every time I start my VM (host), I always need to start `etcd` manually (it doesn't autostart).

After I installed Vitastor, I started the `vitastor-mon` and `vitastor.target` services, ran the commands from the example in the `README`, and ran fio testing.

But after I restart my VM and check the `vitastor-mon` status, it has a problem.
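To make the services come up after a reboot (a sketch, assuming the unit names shown earlier in the thread):

```shell
# Enable the units for boot and start them right away.
systemctl enable --now etcd
systemctl enable --now vitastor-mon vitastor.target
# Verify they are registered for autostart:
systemctl is-enabled etcd vitastor-mon
```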