singularityhub / sregistry-cli

Singularity Global Client for container management
https://singularityhub.github.io/sregistry-cli/
Mozilla Public License 2.0

add backend for Ceph #160

Closed vsoch closed 5 years ago

vsoch commented 5 years ago

This issue is in reference to the discussion here https://github.com/singularityhub/sregistry/issues/160 to add a backend for Ceph. I think I can get around needing a full deployment by using the ceph daemon, with an example here. There also seems to be a Python API that the user can install, with something like:

pip install sregistry[ceph]

As discussed with @michaelmoore10, we will want to add this endpoint here, and then work on further integrating with the registry server. I think I know how I will go about doing this but want to cleanly implement this portion first. I'll update the issue here with links to questions (or a PR in progress), etc.

vsoch commented 5 years ago

hey @michaelmoore10 I'm futzing around with the docker ceph/daemon container and am not sure which deployment is the right one to bring up a "basic cluster". I've tried the one shown in the demo:

docker run -d \
--name demo \
-e MON_IP=0.0.0.0 \
-e CEPH_PUBLIC_NETWORK=0.0.0.0/0 \
--net=host \
-v /var/lib/ceph:/var/lib/ceph \
-v /etc/ceph:/etc/ceph \
-e CEPH_DEMO_UID=qqq \
-e CEPH_DEMO_ACCESS_KEY=qqq \
-e CEPH_DEMO_SECRET_KEY=qqq \
-e CEPH_DEMO_BUCKET=qqq \
ceph/daemon \
demo

But I don't see anything at the network address, and I'm not exactly sure what I'm deploying and/or how to interact with it! Since you have a cluster I'm hoping you can give some tips, thanks!

michaelmoore10 commented 5 years ago

I think you need to set an IP address for MON_IP and define a subnet range for CEPH_PUBLIC_NETWORK. The 0.0.0.0's in those won't work.
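As a sanity check, Python's stdlib ipaddress module can verify that a candidate MON_IP actually falls inside a CEPH_PUBLIC_NETWORK range (the addresses below are just examples, not a recommendation):

```python
import ipaddress

def ip_in_network(ip, network):
    """Return True if the given IP address falls inside the CIDR network."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(network, strict=False)

# 0.0.0.0 is a wildcard, not a routable monitor address
print(ip_in_network("172.17.0.1", "172.17.0.0/24"))  # True
print(ip_in_network("0.0.0.0", "172.17.0.0/24"))     # False
```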

vsoch commented 5 years ago

So I need to be on some dedicated instance? Working from freebie wireless on my laptop isn't sufficient?

vsoch commented 5 years ago

Did you get a particular command to work? I'm still trying.

vsoch commented 5 years ago

Okay, made some (very tiny) progress. I found these issues:

I removed the bind to /etc/ceph and used my docker0 IP address:

docker run -d \
--name demo \
-e MON_IP=127.0.0.1 \
-e CEPH_NETWORK=172.17.0.1/24 \
-e CEPH_PUBLIC_NETWORK=172.17.0.1/24 \
--net=host \
-v /var/lib/ceph:/var/lib/ceph \
-e CEPH_DEMO_UID=qqq \
-e CEPH_DEMO_ACCESS_KEY=qqq \
-e CEPH_DEMO_SECRET_KEY=qqq \
-e CEPH_DEMO_BUCKET=qqq \
ceph/daemon \
demo

This at least generated more output in the logs:

$ docker logs demo
creating /etc/ceph/ceph.client.admin.keyring
creating /etc/ceph/ceph.mon.keyring
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
creating /var/lib/ceph/bootstrap-mds/ceph.keyring
creating /var/lib/ceph/bootstrap-rgw/ceph.keyring
creating /var/lib/ceph/bootstrap-rbd/ceph.keyring
monmaptool: monmap file /etc/ceph/monmap-ceph
monmaptool: set fsid to 8d1e395e-2105-44bd-97fc-05fcd2e798d1
monmaptool: writing epoch 0 to /etc/ceph/monmap-ceph (1 monitors)
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-mds/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-rgw/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-rbd/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
changed ownership of '/etc/ceph/ceph.client.admin.keyring' from root:root to ceph:ceph
changed ownership of '/etc/ceph/ceph.conf' from root:root to ceph:ceph
ownership of '/etc/ceph/ceph.mon.keyring' retained as ceph:ceph
changed ownership of '/etc/ceph/rbdmap' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/mgr/ceph-vanessa-ThinkPad-T460s/keyring' from root:root to ceph:ceph
ownership of '/var/lib/ceph/mgr/ceph-vanessa-ThinkPad-T460s' retained as ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0' from root:root to ceph:ceph
2018-11-15 16:18:51.616 7f7ed7b1b1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:18:51.616 7f7ed7b1b1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:18:51.616 7f7ed7b1b1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:18:51.632 7f7ed7b1b1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid 
2018-11-15 16:18:51.632 7f7ed7b1b1c0 -1 bdev(0x562f99a88700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:51.640 7f7ed7b1b1c0 -1 bdev(0x562f99a88a80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:52.168 7f7ed7b1b1c0 -1 bdev(0x562f99a88700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:52.168 7f7ed7b1b1c0 -1 bdev(0x562f99a88a80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:52.740 7f7ed7b1b1c0 -1 bdev(0x562f99a88700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:52.740 7f7ed7b1b1c0 -1 bdev(0x562f99a88a80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
changed ownership of '/var/lib/ceph/osd/ceph-0/bluefs' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/block' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/mkfs_done' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/magic' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/whoami' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/type' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/ready' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/ceph_fsid' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/fsid' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/kv_backend' from root:root to ceph:ceph
ownership of '/var/lib/ceph/osd/ceph-0' retained as ceph:ceph
starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
2018-11-15 16:18:53.592 7f5aad2fd1c0 -1 bdev(0x5607f67d2700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:53.876 7f5aad2fd1c0 -1 bdev(0x5607f67d2700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:53.880 7f5aad2fd1c0 -1 bdev(0x5607f67d2a80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:18:54.000 7f5aad2fd1c0 -1 osd.0 0 log_to_monitors {default=true}
pool 'rbd' created
pool 'cephfs_data' created

Still looking for some web interface... (and not sure about the "unable to get device name: Invalid argument" errors)

vsoch commented 5 years ago

oh wait, it's still doing things :)

new fs with metadata pool 3 and data pool 2
changed ownership of '/var/lib/ceph/mds/ceph-demo/keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/mds/ceph-demo' from root:root to ceph:ceph
starting mds.demo at -
changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.vanessa-ThinkPad-T460s/keyring' from root:root to ceph:ceph
ownership of '/var/lib/ceph/radosgw/ceph-rgw.vanessa-ThinkPad-T460s' retained as ceph:ceph
2018-11-15 16:18:59  /entrypoint.sh: Setting up a demo user...
{
    "user_id": "qqq",
    "display_name": "Ceph demo user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "qqq",
            "access_key": "qqq",
            "secret_key": "qqq"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

{
    "user_id": "qqq",
    "display_name": "Ceph demo user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "qqq",
            "access_key": "qqq",
            "secret_key": "qqq"
        }
    ],
    "swift_keys": [],
    "caps": [
        {
            "type": "buckets",
            "perm": "*"
        },
        {
            "type": "metadata",
            "perm": "*"
        },
        {
            "type": "usage",
            "perm": "*"
        },
        {
            "type": "users",
            "perm": "*"
        }
    ],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

2018-11-15 16:19:09  /entrypoint.sh: Creating bucket...
ERROR: Bucket 'qqq' does not exist
ERROR: S3 error: 404 (NoSuchBucket)

vsoch commented 5 years ago

boum! Got it!

[screenshot]

vsoch commented 5 years ago

I'll put a log here of what I did.

Step 1. Docker IP Address

You can use ifconfig to find the docker0 network address. Mine looks like this:

docker0   Link encap:Ethernet  HWaddr 02:42:52:32:97:63  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:52ff:fe32:9763/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:340 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3270 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1185211 (1.1 MB)  TX bytes:739874 (739.8 KB)

Step 2. Start Container

For the command, it's actually not going to work to bind a local config directory (/etc/ceph in their examples), so I dropped that. It also doesn't work to give a storage bucket (hence the S3 NoSuchBucket error at the end). So I removed those things and provided the docker0 address:

docker run -d \
--name demo \
-e MON_IP=172.17.0.1 \
-e CEPH_NETWORK=172.17.0.1/24 \
-e CEPH_PUBLIC_NETWORK=172.17.0.1/24 \
--net=host \
-v /var/lib/ceph:/var/lib/ceph \
-e CEPH_DEMO_UID=qqq \
-e CEPH_DEMO_ACCESS_KEY=qqq \
-e CEPH_DEMO_SECRET_KEY=qqq \
ceph/daemon \
demo

Step 3. Inspect Container

To see what is going on, you can use docker inspect demo and docker logs demo, as shown above. The final working output looks like:

$ docker logs demo
creating /etc/ceph/ceph.client.admin.keyring
creating /etc/ceph/ceph.mon.keyring
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
creating /var/lib/ceph/bootstrap-mds/ceph.keyring
creating /var/lib/ceph/bootstrap-rgw/ceph.keyring
creating /var/lib/ceph/bootstrap-rbd/ceph.keyring
monmaptool: monmap file /etc/ceph/monmap-ceph
monmaptool: set fsid to 98701fa9-537f-423b-88dd-68c199ace14c
monmaptool: writing epoch 0 to /etc/ceph/monmap-ceph (1 monitors)
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-mds/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-rgw/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /var/lib/ceph/bootstrap-rbd/ceph.keyring into /etc/ceph/ceph.mon.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
changed ownership of '/etc/ceph/ceph.client.admin.keyring' from root:root to ceph:ceph
changed ownership of '/etc/ceph/ceph.conf' from root:root to ceph:ceph
ownership of '/etc/ceph/ceph.mon.keyring' retained as ceph:ceph
changed ownership of '/etc/ceph/rbdmap' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/mgr/ceph-vanessa-ThinkPad-T460s/keyring' from root:root to ceph:ceph
ownership of '/var/lib/ceph/mgr/ceph-vanessa-ThinkPad-T460s' retained as ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0' from root:root to ceph:ceph
2018-11-15 16:26:23.765 7f4c152fd1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:26:23.765 7f4c152fd1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:26:23.765 7f4c152fd1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-0/block: (2) No such file or directory
2018-11-15 16:26:23.781 7f4c152fd1c0 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid 
2018-11-15 16:26:23.785 7f4c152fd1c0 -1 bdev(0x557c1289a700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:23.789 7f4c152fd1c0 -1 bdev(0x557c1289aa80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:24.337 7f4c152fd1c0 -1 bdev(0x557c1289a700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:24.337 7f4c152fd1c0 -1 bdev(0x557c1289aa80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:24.909 7f4c152fd1c0 -1 bdev(0x557c1289a700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:24.909 7f4c152fd1c0 -1 bdev(0x557c1289aa80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
changed ownership of '/var/lib/ceph/osd/ceph-0/bluefs' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/block' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/mkfs_done' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/magic' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/whoami' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/type' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/ready' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/ceph_fsid' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/fsid' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/osd/ceph-0/kv_backend' from root:root to ceph:ceph
ownership of '/var/lib/ceph/osd/ceph-0' retained as ceph:ceph
starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
2018-11-15 16:26:25.717 7f2b2ecfc1c0 -1 bdev(0x55e1eaf46700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:26.025 7f2b2ecfc1c0 -1 bdev(0x55e1eaf46700 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:26.029 7f2b2ecfc1c0 -1 bdev(0x55e1eaf46a80 /var/lib/ceph/osd/ceph-0/block) unable to get device name for /var/lib/ceph/osd/ceph-0/block: (22) Invalid argument
2018-11-15 16:26:26.145 7f2b2ecfc1c0 -1 osd.0 0 log_to_monitors {default=true}
pool 'rbd' created
pool 'cephfs_data' created
pool 'cephfs_metadata' created
new fs with metadata pool 3 and data pool 2
changed ownership of '/var/lib/ceph/mds/ceph-demo/keyring' from root:root to ceph:ceph
changed ownership of '/var/lib/ceph/mds/ceph-demo' from root:root to ceph:ceph
starting mds.demo at -
changed ownership of '/var/lib/ceph/radosgw/ceph-rgw.vanessa-ThinkPad-T460s/keyring' from root:root to ceph:ceph
ownership of '/var/lib/ceph/radosgw/ceph-rgw.vanessa-ThinkPad-T460s' retained as ceph:ceph
2018-11-15 16:26:31  /entrypoint.sh: Setting up a demo user...
{
    "user_id": "qqq",
    "display_name": "Ceph demo user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "qqq",
            "access_key": "qqq",
            "secret_key": "qqq"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

{
    "user_id": "qqq",
    "display_name": "Ceph demo user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "qqq",
            "access_key": "qqq",
            "secret_key": "qqq"
        }
    ],
    "swift_keys": [],
    "caps": [
        {
            "type": "buckets",
            "perm": "*"
        },
        {
            "type": "metadata",
            "perm": "*"
        },
        {
            "type": "usage",
            "perm": "*"
        },
        {
            "type": "users",
            "perm": "*"
        }
    ],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Sree-0.1/.gitignore
Sree-0.1/README.md
Sree-0.1/app.py
Sree-0.1/snapshots/
Sree-0.1/snapshots/Configuration.png
Sree-0.1/snapshots/greenland.png
Sree-0.1/sree.cfg.sample
Sree-0.1/static/
Sree-0.1/static/buckets.html
Sree-0.1/static/config.html
Sree-0.1/static/css/
Sree-0.1/static/css/bootstrap.min.css
Sree-0.1/static/css/font-awesome.min.css
Sree-0.1/static/css/jquery.dataTables.min.css
Sree-0.1/static/css/style.css
Sree-0.1/static/css/validationEngine.jquery.css
Sree-0.1/static/image/
Sree-0.1/static/image/ceph-nano-logo-horizontal.svg
Sree-0.1/static/image/fontawesome-webfont.svg
Sree-0.1/static/image/fontawesome-webfont.ttf
Sree-0.1/static/image/fontawesome-webfont.woff
Sree-0.1/static/js/
Sree-0.1/static/js/base.js
Sree-0.1/static/js/config.json.sample
Sree-0.1/static/js/lib/
Sree-0.1/static/js/lib/aws-sdk.min.js
Sree-0.1/static/js/lib/bootstrap.min.js
Sree-0.1/static/js/lib/dataTable.bootstrap.js
Sree-0.1/static/js/lib/jquery-1.10.1.min.js
Sree-0.1/static/js/lib/jquery.dataTables.js
Sree-0.1/static/js/lib/jquery.form.js
Sree-0.1/static/js/lib/jquery.validationEngine-zh_CN.js
Sree-0.1/static/js/lib/jquery.validationEngine.js
Sree-0.1/static/js/lib/require.config.js
Sree-0.1/static/js/lib/require.js
Sree-0.1/static/js/lib/template.js
Sree-0.1/static/js/upload.js
Sree-0.1/static/objects.html
Sree-0.1/xmlparser.py
/sree /
/
/sree /
/
 * Running on http://0.0.0.0:5000/
demo.sh: line 275: ceph-rest-api: command not found
2018-11-15 16:26:41  /entrypoint.sh: SUCCESS
exec: PID 1458: spawning ceph --cluster ceph -w
exec: Waiting 1458 to quit

Step 4. View Container Interface

Then open a browser to 0.0.0.0:5000 as instructed.

vsoch commented 5 years ago

@michaelmoore10 is that the right thing to work with?

vsoch commented 5 years ago

omg I made a bucket this is wicked!

[screenshot]

michaelmoore10 commented 5 years ago

Congrats. Sorry I took so long to respond. I had to run a few errands and I missed your notes.

vsoch commented 5 years ago

Some more quick notes, because the storage API is very hard to find! I found it by looking at the configuration file:

$ docker exec demo cat /etc/ceph/ceph.conf
[global]
fsid = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
mon initial members = vanessa-ThinkPad-T460s
mon host = 172.17.0.1
osd crush chooseleaf type = 0
osd journal size = 100
public network = 172.17.0.1/24
cluster network = 172.17.0.1/24
log file = /dev/null
osd pool default size = 1
osd max object name len = 256
osd max object namespace len = 64
osd data = /var/lib/ceph/osd/ceph-0
osd objectstore = bluestore

[client.rgw.vanessa-ThinkPad-T460s]
rgw dns name = vanessa-ThinkPad-T460s
rgw enable usage log = true
rgw usage log tick interval = 1
rgw usage log flush threshold = 1
rgw usage max shards = 32
rgw usage max user shards = 1
log file = /var/log/ceph/client.rgw.vanessa-ThinkPad-T460s.log
rgw frontends = civetweb  port=0.0.0.0:8080

[client.restapi]
public addr = 172.17.0.1:5000
restapi base url = /api/v0.1
restapi log level = warning
log file = /var/log/ceph/ceph-restapi.log

This also reveals the API address: it's served at http://172.17.0.1:8080/.
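Since ceph.conf keys contain spaces, Python's stdlib configparser reads it fine if you ever want to pull these values out programmatically. A sketch over an inline fragment of the config above:

```python
import configparser

# A trimmed fragment of the ceph.conf shown above
CONF = """
[global]
mon host = 172.17.0.1
public network = 172.17.0.1/24

[client.restapi]
public addr = 172.17.0.1:5000
"""

cp = configparser.ConfigParser()
cp.read_string(CONF)
print(cp["global"]["mon host"])             # 172.17.0.1
print(cp["client.restapi"]["public addr"])  # 172.17.0.1:5000
```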

[screenshot]

and then I found the auth endpoint, and am very good at being denied very quickly!

[screenshot]

Note that the REST API log was empty for me, and this configuration is generated by a script called start_restapi.sh in the WORKDIR when you shell into the container. I'm going to try the swift Python client now, and I'm hoping it just plugs into this thing fairly easily. Famous last words... heh.

vsoch commented 5 years ago

hey @michaelmoore10, some questions for you on swift. I'm following the steps here to plug in my demo user (qqq) with the details shown above, and I'm getting a typical "access denied", likely because I'm using the wrong credentials or similar:

ClientException: Auth GET failed: http://172.17.0.1:8080/auth/ 403 Forbidden  [first 60 chars of response] b'{"Code":"AccessDenied","RequestId":"tx00000000000000000001e-'

Here are notes for what I've done so far (see the bottom section, Development): not much other than creating a client on init and starting some functions to get/create containers there. I noticed that the "user" also has some namespace for an account, and I'm not sure what that is; do you?

user = 'account_name:username'
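For illustration, that account:username convention, and the headers a Swift v1 auth GET sends, can be sketched in plain Python. The helper names here are made up and the credentials are placeholders:

```python
def split_swift_user(user):
    """Split 'account_name:username' into parts; a plain user has no account."""
    account, sep, name = user.partition(":")
    return (account, name) if sep else (None, user)

def swift_auth_headers(user, key):
    """Headers for a Swift v1 auth GET (e.g. against radosgw's /auth endpoint)."""
    return {"X-Auth-User": user, "X-Auth-Key": key}

print(split_swift_user("ceph:vanessa"))              # ('ceph', 'vanessa')
print(swift_auth_headers("ceph:vanessa", "secret"))
```

I believe python-swiftclient's Connection takes this same user/key pair plus an authurl, though I haven't verified that here.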

michaelmoore10 commented 5 years ago

Let me take a look at our notes/docs here and see what I can figure out.

vsoch commented 5 years ago

I didn't know about swiftstack either, this seems super cool! :point_right: https://www.swiftstack.com/

vsoch commented 5 years ago

Just figured it out! I noticed that the "swift users" field was empty (and I'm pretty sure that's what I needed) so I looked into commands for creating users for that, and it looks like:

docker exec demo radosgw-admin user create --subuser="ceph:vanessa" --uid="vanessa" --display-name="Vanessa Saurus" --key-type=swift --access=full

and then the field is populated (here is the part that is different):

{
    "user_id": "ceph",
    "display_name": "Vanessa Saurus",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "ceph:vanessa",
            "permissions": "full-control"
        }
    ],
    "keys": [],
    "swift_keys": [
        {
            "user": "ceph:vanessa",
            "secret_key": "gpBCS9JtiADQPz5C35yVNd05ItrjXtryZI8aJEdn"
        }
    ],
...
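The swift credentials can also be pulled out of that JSON programmatically; a minimal sketch using a trimmed, placeholder copy of the radosgw-admin output above:

```python
import json

# Trimmed, placeholder version of the 'radosgw-admin user create' output
USER_INFO = json.loads("""
{
    "user_id": "ceph",
    "subusers": [{"id": "ceph:vanessa", "permissions": "full-control"}],
    "swift_keys": [{"user": "ceph:vanessa", "secret_key": "placeholder"}]
}
""")

swift_user = USER_INFO["swift_keys"][0]["user"]          # 'ceph:vanessa'
swift_secret = USER_INFO["swift_keys"][0]["secret_key"]  # the generated key
print(swift_user, swift_secret)
```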

Then the command to create the container didn't return an error. This doesn't mean I have it all figured out, because I don't see anything appearing in the interface yet, but I think I'm closer because I can interact with the endpoints of the API just as I should expect. I updated the notes that I linked previously if you want to see the full thing; I'll ping here again if I run into another issue.

vsoch commented 5 years ago

This was added about a month ago; no review was given, so closing.