vitalif / vitastor

Simplified distributed block and file storage with strong consistency, like in Ceph (repository mirror)
https://vitastor.io

how to delete an img? #53

Closed vieyahn2017 closed 1 year ago

vieyahn2017 commented 1 year ago

```
vitastor-cli create -s 10G testimg
```

How do I delete it?

vieyahn2017 commented 1 year ago

```
[root@server3 ~]# vitastor-cli ls
NAME     POOL    SIZE  FLAGS  PARENT
testimg  ecpool  10 G  -
[root@server3 ~]# vitastor-cli df
NAME    SCHEME  PGS  TOTAL  USED  AVAILABLE  USED%  EFFICIENCY
                     0 B    0 B   0 B        100%   0%
ecpool  EC 2+2  256  2 T    0 B   2 T        0%     100%
```

Then, after I changed the pool config again, it became:

```
[root@server3 ~]# vitastor-cli df
NAME      SCHEME  PGS  TOTAL    USED  AVAILABLE  USED%  EFFICIENCY
testpool  2/1     256  999.7 G  0 B   999.7 G    0%     100%
                       0 B      0 B   0 B        100%   0%
[root@server3 ~]# vitastor-cli ls
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
Aborted
```

===

I first ran:

```
etcdctl --endpoints=... put /vitastor/config/pools '{"1":{"name":"testpool", "scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'
```

and then ran:

```
etcdctl --endpoints=... put /vitastor/config/pools '{"2":{"name":"ecpool", "scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}'
```

After changing /vitastor/config/pools this way, vitastor-cli ls aborts.
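From what I can tell, etcdctl put replaces the whole value of /vitastor/config/pools, so the second command dropped the testpool entry. Reading the key back makes the overwrite visible (a sketch, using the etcd endpoint that appears later in this thread):

```
etcdctl --endpoints=51.32.27.7:2379 get --print-value-only /vitastor/config/pools
# after the second put, only pool 2 is left:
# {"2":{"name":"ecpool", "scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}
```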

vieyahn2017 commented 1 year ago

```
[root@server3 ~]# vitastor-cli rm-data --pool 2 --inode 1
^C
[root@server3 ~]# vitastor-cli rm-data --pool 1 --inode 1
Failed to list objects of inode 1 from pool 1
```

vieyahn2017 commented 1 year ago

```
[root@server3 ~]# vitastor-cli ls -p 1
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
Aborted
[root@server3 ~]# vitastor-cli ls -p 2
NAME     SIZE  FLAGS  PARENT
testimg  10 G
```

vitalif commented 1 year ago

Hi! Use vitastor-cli rm to delete an image. Also, do you understand etcdctl --endpoints=... put /vitastor/config/pools ... correctly? If you wanted to add a pool with ID 2, you should have run:

```
etcdctl --endpoints=... put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"},"2":{"name":"ecpool","scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}'
```

I.e., you should have included both pools in this key.
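To avoid pasting the whole JSON by hand, a read-modify-write sketch like the following could be used (my example, not part of vitastor; it assumes jq is installed and uses the etcd endpoint from the thread):

```
# read the current pools key, merge pool 2 into the JSON, write the whole value back
POOLS=$(etcdctl --endpoints=51.32.27.7:2379 get --print-value-only /vitastor/config/pools)
NEW=$(echo "$POOLS" | jq -c '. + {"2":{"name":"ecpool","scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}')
etcdctl --endpoints=51.32.27.7:2379 put /vitastor/config/pools "$NEW"
```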

vitalif commented 1 year ago

The terminate called after throwing an instance of 'std::out_of_range' / what(): map::at / Aborted error is probably caused by the fact that you dropped the pool 1 configuration. It shouldn't die with an exception in this case though, so I'll check it.
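This condition is easy to check for (a sketch, same etcd endpoint as elsewhere in the thread): an image registered under /vitastor/config/inode/&lt;pool&gt;/&lt;inode&gt; whose pool ID is no longer present in /vitastor/config/pools appears to be exactly what triggers the exception (see also the last comments below):

```
# pools that are currently defined
etcdctl --endpoints=51.32.27.7:2379 get --print-value-only /vitastor/config/pools
# images and the pool IDs they are registered under
etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/config/inode/
```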

vieyahn2017 commented 1 year ago

Thanks, vitastor-cli rm works.

I didn't understand the meaning of the <from> [<to>] parameters in vitastor-cli rm <from> [<to>] [--writers-stopped] before.
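For reference, as I read that usage string: with a single argument rm deletes that image/layer, and with both <from> and <to> it deletes the whole range of snapshot layers between them (<to> being a descendant of <from>). A hedged sketch with hypothetical snapshot names:

```
# delete a single image (as used above)
vitastor-cli rm testimg55

# delete a range of snapshot layers in a hypothetical chain testimg@snap1 -> testimg@snap3
vitastor-cli rm testimg@snap1 testimg@snap3

# --writers-stopped may allow a faster merge when nothing is writing to the affected layers
# (my understanding of the flag, not verified here)
vitastor-cli rm testimg@snap1 testimg@snap3 --writers-stopped
```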

test records:

```
[root@server3 vitastor-0.9.3]# vitastor-cli create -s 10G testimg55
Image testimg55 created
[root@server3 vitastor-0.9.3]# vitastor-cli ls
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
Aborted
[root@server3 vitastor-0.9.3]# vitastor-cli ls -p 1
NAME       SIZE  FLAGS  PARENT
testimg2   10 G  -
testimg55  10 G  -
[root@server3 vitastor-0.9.3]# vitastor-cli ls -p 1
NAME       SIZE  FLAGS  PARENT
testimg2   10 G  -
testimg55  10 G  -
[root@server3 vitastor-0.9.3]# vitastor-cli rm testimg55
Done, inode 2 from pool 1 removed
Layer testimg55 deleted
[root@server3 vitastor-0.9.3]# vitastor-cli ls -p 1
NAME      SIZE  FLAGS  PARENT
testimg2  10 G  -
```

vieyahn2017 commented 1 year ago

The reason may be related to the following:

```
[root@server3 ~]# ll /dev/vitastor/
lrwxrwxrwx. 1 root root 7 Jul 12 15:19 osd1-data -> ../dm-9
lrwxrwxrwx. 1 root root 8 Jul 12 15:19 osd2-data -> ../dm-10
```

```
etcdctl --endpoints=51.32.27.7:2379 put /vitastor/config/pools '{"2":{"name":"ecpool", "scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}'
```

```
[root@server3 ~]# etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | wc -l
512
[root@server3 ~]# etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
/vitastor/pg/state/2/1
{"peers": [], "primary": 1, "state": ["incomplete"]}
/vitastor/pg/state/2/10
{"peers": [], "primary": 1, "state": ["incomplete"]}
/vitastor/pg/state/2/100
{"peers": [], "primary": 2, "state": ["incomplete"]}
/vitastor/pg/state/2/101
{"peers": [], "primary": 1, "state": ["incomplete"]}
```

All of the PGs are in the "incomplete" state.
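That can be confirmed with a one-liner (my sketch, same endpoint):

```
# 512 lines of output = 256 PGs (key + value per PG); count how many report "incomplete"
etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | grep -c incomplete
```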

Then I created an image:

```
vitastor-cli create -s 10G testimg
```

And finally, I changed the pool config back to pool 1:

```
etcdctl --endpoints=... put /vitastor/config/pools '{"1":{"name":"testpool", "scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'
```

vieyahn2017 commented 1 year ago

```
 4673  etcdctl --endpoints=51.32.27.7:2379 put /vitastor/config/pools '{"1":{"name":"testpool",
 4674  "scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'
 4675  vitastor-cli status
 4676  etcdctl --endpoints=51.32.27.7:2379 put /vitastor/config/pools '{"2":{"name":"ecpool",
 4677  "scheme":"ec","pg_size":4,"parity_chunks":2,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}'
 4678  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state
 4679  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
 4680  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | wc -l
 4681  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
 4682  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state
 4683  etcdctl --endpoints=51.32.27.7:2379 get /vitastor/config/pools
 4684  vitastor-cli create -s 10G testimg
 4685  vitastor-cli status
 4686  etcdctl --endpoints=51.32.27.7:2379 get /vitastor/config/global
 4687  ls
 4688  vitastor-cli ls
 4689  vitastor-cli df
 4690  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state
 4691  vi /var/log/vitastor/osd1.log
 4692  tail -f /var/log/vitastor/osd1.log
 4693  tail -f /var/log/vitastor/osd2.log
 4694  vitastor-cli df
 4695  etcdctl --endpoints=51.32.27.7:2379 get /vitastor/config/global
 4696  etcdctl --endpoints=... get --prefix /vitastor/pg/state
 4697  etcdctl --endpoints=51.32.27.7 get --prefix /vitastor/pg/state
 4698  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state
 4699  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
 4700  vi /var/log/vitastor/osd2.log
 4701  etcdctl --endpoints=51.32.27.7:2379 put /vitastor/config/pools '{"1":{"name":"testpool",
 4702  "scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'
 4703  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
 4704
 4705  vi /var/log/vitastor/osd2.log
 4706  etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/pg/state | more
 4707  vitastor-cli df
 4708  vitastor-cli ls                     <- "Aborted" occurs here
 4709  vitastor-cli
 4710  vitastor-cli status
 4711  vitastor-cli df
 4712  vitastor-cli ls
 4713  vi /var/log/vitastor/osd2.log
 4714  vi /var/log/messages
 4715  vi /var/log/messages
 4716  ps -ef | grep 2.sh
 4717  sh 2.sh &
 4718  history
 4719  vitastor-cli create -s 10G testimg
 4720  vitastor-cli create -s 10G testimg2
 4721  vitastor-cli ls
 4722  vitastor-cli df
 4723  vitastor-cli status
 4724  systemctl restart vitastor.target
 4725  vitastor-cli df
 4726  vitastor-cli ls
```

vitalif commented 1 year ago

Fixed that "aborted with map::at" in master

vieyahn2017 commented 1 year ago

I also managed to deal with it on my side:

===

```
[root@node1 vitastor]# vitastor-cli ls
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
Aborted
[root@node1 vitastor]# vitastor-cli ls -p 1
NAME  SIZE  FLAGS  PARENT
[root@node1 vitastor]# vitastor-cli ls -p 2
terminate called after throwing an instance of 'std::out_of_range'
  what():  map::at
Aborted
```

```
[root@node1 vitastor]# etcdctl --endpoints=51.32.27.7:2379 get --prefix /vitastor/ | more
/vitastor/config/inode/2/1
{"name": "testimg", "size": 10737418240}
```

(The existence of this leftover key is what caused the abort.)

```
[root@node1 vitastor]# etcdctl --endpoints=51.32.27.7:2379 del /vitastor/config/inode/2/1
1
[root@node1 vitastor]# vitastor-cli ls -p 2
NAME  SIZE  FLAGS  PARENT
```
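More generally, leftover registrations like this can be found by listing the inode config prefix and comparing pool IDs against /vitastor/config/pools (a sketch, same endpoint; only delete keys whose data you really don't need):

```
# every image is registered as /vitastor/config/inode/<pool_id>/<inode_id>
etcdctl --endpoints=51.32.27.7:2379 get --prefix --keys-only /vitastor/config/inode/
# remove all registrations left under a pool ID that no longer exists (here: pool 2)
etcdctl --endpoints=51.32.27.7:2379 del --prefix /vitastor/config/inode/2/
```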