LINBIT / linstor-client

Python client for LINSTOR
https://docs.linbit.com/docs/linstor-guide/
GNU General Public License v3.0

linstor --machine-readable shows different allocated size #11

Closed: kvaps closed this issue 5 years ago

kvaps commented 5 years ago

I have a volume definition with a 2 GiB disk:

# linstor vd l -R one-vm-53-disk-4
╭───────────────────────────────────────────────────────────╮
┊ ResourceName     ┊ VolumeNr ┊ VolumeMinor ┊ Size  ┊ State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ one-vm-53-disk-4 ┊ 0        ┊ 1053        ┊ 2 GiB ┊ ok    ┊
╰───────────────────────────────────────────────────────────╯

When I run list-volumes, I see Allocated: 70.24 MiB. Is that the real usage?

# linstor r lv -r one-vm-53-disk-4
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Resource         ┊ StoragePool ┊ VolumeNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ m1c7 ┊ one-vm-53-disk-4 ┊ thindata    ┊ 0        ┊ 1053    ┊ /dev/drbd1053 ┊ 70.24 MiB ┊ InUse  ┊ UpToDate ┊
┊ m1c8 ┊ one-vm-53-disk-4 ┊ thindata    ┊ 0        ┊ 1053    ┊ /dev/drbd1053 ┊ 70.24 MiB ┊ Unused ┊ UpToDate ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

But if I parse the JSON output, I see the same 2 GiB instead:

# linstor -m r l -r one-vm-53-disk-4 | jq '.[].resources[].vlms[].allocated'
2097640
2097640

Where can I get these 70.24 MiB when using the --machine-readable flag?

ghernadi commented 5 years ago

We had to break the linstor API when introducing the layer concept to Linstor. Because of this, we introduced a compatibility mode in the linstor client (which is enabled by default). This compatibility mode takes the data from the new response format and rebuilds the old format. In doing so, we apparently copied the allocated_size from the wrong place.

linstor -m --output-version v1 r l will give you the output format without the compatibility mode. There, | jq '.[].resources[].vlms[].allocated_size' gives you your 70.24 MiB (or rather something like 71926, as linstor gives you the sizes in KiB).
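
Spelled out as one pipeline (exactly the command and filter from above):

# linstor -m --output-version v1 r l -r one-vm-53-disk-4 | jq '.[].resources[].vlms[].allocated_size'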

We will fix the issue for the compatibility mode.

However, if you look closely at the --output-version v1 output, you will see information for that resource and also for all of its volumes, for every layer in use. Both drbd and storage report an allocated_size as if the volume were not thinly provisioned (that is, essentially the volume definition's size). It is questionable whether we want to fix this, as this is the data we receive from the underlying tools (blockdev --getsize64 <device>, lvs -o lv_size, ...). The only way to get your desired 70.24 MiB is to take those "fat" allocated sizes and multiply them by lvs -o data_percent. Although this is at best a rough estimation, it is the best Linstor can do.
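
As a rough sketch of that estimation, using the backing LV from this issue (the data_percent value here is illustrative, not taken from the reporter's system):

# lvs --noheadings --units k -o lv_size,data_percent data/one-vm-53-disk-4_00000
  2097640.00k   3.43

The estimate is then lv_size * data_percent / 100, here 2097640 KiB * 3.43 / 100 ≈ 71949 KiB ≈ 70.26 MiB.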

kvaps commented 5 years ago

OK, thanks for the explanation!

kvaps commented 5 years ago

But there is still nothing about the 70 MiB; see for yourself:

linstor r l -r one-vm-53-disk-4  
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Resource         ┊ StoragePool ┊ VolumeNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ m1c7 ┊ one-vm-53-disk-4 ┊ thindata    ┊ 0        ┊ 1053    ┊ /dev/drbd1053 ┊ 70.24 MiB ┊ InUse  ┊ UpToDate ┊
┊ m1c8 ┊ one-vm-53-disk-4 ┊ thindata    ┊ 0        ┊ 1053    ┊ /dev/drbd1053 ┊ 70.24 MiB ┊ Unused ┊ UpToDate ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
linstor -m --output-version v1 r l -r one-vm-53-disk-4  
[
  {
    "resource_states": [
      {
        "vlm_states": [
          {
            "disk_state": "UpToDate", 
            "vlm_nr": 0
          }
        ], 
        "in_use": false, 
        "rsc_name": "one-vm-53-disk-4", 
        "node_name": "m1c8"
      }, 
      {
        "vlm_states": [
          {
            "disk_state": "UpToDate", 
            "vlm_nr": 0
          }
        ], 
        "in_use": true, 
        "rsc_name": "one-vm-53-disk-4", 
        "node_name": "m1c7"
      }
    ], 
    "resources": [
      {
        "vlms": [
          {
            "layer_data": [
              {
                "layer_type": 1, 
                "drbd": {
                  "meta_disk": "", 
                  "device_path": "/dev/drbd1053", 
                  "backing_device": "/dev/data/one-vm-53-disk-4_00000", 
                  "allocated_size": 2097640, 
                  "drbd_vlm_dfn": {
                    "rsc_name_suffix": "one-vm-53-disk-4", 
                    "vlm_nr": 0, 
                    "minor": 1053
                  }, 
                  "usable_size": 2097152, 
                  "disk_state": ""
                }
              }, 
              {
                "layer_type": 3, 
                "storage": {
                  "device_path": "/dev/data/one-vm-53-disk-4_00000", 
                  "vlm_nr": 0, 
                  "provider_kind": 3, 
                  "allocated_size": 2101248, 
                  "usable_size": 2097640, 
                  "lvm_thin": {}, 
                  "disk_state": "[]"
                }
              }
            ], 
            "stor_pool_dfn_uuid": "27ba4c4a-267a-4734-ad2a-adabdd52251c", 
            "device_path": "/dev/drbd1053", 
            "vlm_nr": 0, 
            "stor_pool_name": "thindata", 
            "stor_pool_uuid": "4dcb3239-2133-44ec-8b52-d628e33bed42", 
            "stor_pool_props": [
              {
                "value": "data", 
                "key": "StorDriver/LvmVg"
              }, 
              {
                "value": "thindata", 
                "key": "StorDriver/ThinPool"
              }
            ], 
            "vlm_uuid": "e31b9e67-1835-49f8-bc2f-a16261b3ebab", 
            "vlm_dfn_uuid": "ecef7a82-8a55-401e-814a-d6046217d0a2", 
            "usable_size": 2097152, 
            "provider_kind": 3
          }
        ], 
        "node_uuid": "6eda5cd1-091e-4f00-b459-99cda8382ec6", 
        "uuid": "7c5dccbb-5b06-4d23-b0fa-88423f4d56a1", 
        "node_name": "m1c7", 
        "layer_object": {
          "layer_type": 1, 
          "rsc_name_suffix": "", 
          "drbd": {
            "peers_slots": 7, 
            "drbd_rsc_dfn": {
              "peers_slots": 7, 
              "al_stripes": 1, 
              "down": false, 
              "rsc_name_suffix": "one-vm-53-disk-4", 
              "secret": "a6pFFOoplT4Z/UUqZ4aJ", 
              "transport_type": "IP", 
              "al_size": 32, 
              "port": 7049
            }, 
            "al_stripes": 1, 
            "drbd_vlms": [
              {
                "meta_disk": "", 
                "device_path": "/dev/drbd1053", 
                "backing_device": "/dev/data/one-vm-53-disk-4_00000", 
                "allocated_size": 2097640, 
                "drbd_vlm_dfn": {
                  "rsc_name_suffix": "one-vm-53-disk-4", 
                  "vlm_nr": 0, 
                  "minor": 1053
                }, 
                "usable_size": 2097152, 
                "disk_state": ""
              }
            ], 
            "node_id": 1, 
            "flags": 0, 
            "al_size": 32
          }, 
          "id": 272, 
          "children": [
            {
              "layer_type": 3, 
              "rsc_name_suffix": "", 
              "storage": {
                "storage_vlms": [
                  {
                    "device_path": "/dev/data/one-vm-53-disk-4_00000", 
                    "vlm_nr": 0, 
                    "provider_kind": 3, 
                    "allocated_size": 2101248, 
                    "usable_size": 2097640, 
                    "lvm_thin": {}, 
                    "disk_state": "[]"
                  }
                ]
              }, 
              "id": 273
            }
          ]
        }, 
        "props": [
          {
            "value": "thindata", 
            "key": "AutoSelectedStorPoolName"
          }, 
          {
            "value": "thindata", 
            "key": "StorPoolName"
          }
        ], 
        "rsc_dfn_uuid": "2f52e627-ef06-4fe2-9158-59e72e1997bc", 
        "name": "one-vm-53-disk-4"
      }, 
      {
        "vlms": [
          {
            "layer_data": [
              {
                "layer_type": 1, 
                "drbd": {
                  "meta_disk": "", 
                  "device_path": "/dev/drbd1053", 
                  "backing_device": "/dev/data/one-vm-53-disk-4_00000", 
                  "allocated_size": 2097640, 
                  "drbd_vlm_dfn": {
                    "rsc_name_suffix": "one-vm-53-disk-4", 
                    "vlm_nr": 0, 
                    "minor": 1053
                  }, 
                  "usable_size": 2097152, 
                  "disk_state": ""
                }
              }, 
              {
                "layer_type": 3, 
                "storage": {
                  "device_path": "/dev/data/one-vm-53-disk-4_00000", 
                  "vlm_nr": 0, 
                  "provider_kind": 3, 
                  "allocated_size": 2101248, 
                  "usable_size": 2097640, 
                  "lvm_thin": {}, 
                  "disk_state": "[]"
                }
              }
            ], 
            "stor_pool_dfn_uuid": "27ba4c4a-267a-4734-ad2a-adabdd52251c", 
            "device_path": "/dev/drbd1053", 
            "vlm_nr": 0, 
            "stor_pool_name": "thindata", 
            "stor_pool_uuid": "5c2b2a61-de90-47ba-aa1a-d6477a965b37", 
            "stor_pool_props": [
              {
                "value": "data", 
                "key": "StorDriver/LvmVg"
              }, 
              {
                "value": "thindata", 
                "key": "StorDriver/ThinPool"
              }
            ], 
            "vlm_uuid": "6207cb27-742c-475e-87b5-d6ac76815e0a", 
            "vlm_dfn_uuid": "ecef7a82-8a55-401e-814a-d6046217d0a2", 
            "usable_size": 2097152, 
            "provider_kind": 3
          }
        ], 
        "node_uuid": "4b9cb67a-88dc-4404-b8e3-df3710ce562b", 
        "uuid": "e1e93add-2657-4aa8-8800-2849d9808224", 
        "node_name": "m1c8", 
        "layer_object": {
          "layer_type": 1, 
          "rsc_name_suffix": "", 
          "drbd": {
            "peers_slots": 7, 
            "drbd_rsc_dfn": {
              "peers_slots": 7, 
              "al_stripes": 1, 
              "down": false, 
              "rsc_name_suffix": "one-vm-53-disk-4", 
              "secret": "a6pFFOoplT4Z/UUqZ4aJ", 
              "transport_type": "IP", 
              "al_size": 32, 
              "port": 7049
            }, 
            "al_stripes": 1, 
            "drbd_vlms": [
              {
                "meta_disk": "", 
                "device_path": "/dev/drbd1053", 
                "backing_device": "/dev/data/one-vm-53-disk-4_00000", 
                "allocated_size": 2097640, 
                "drbd_vlm_dfn": {
                  "rsc_name_suffix": "one-vm-53-disk-4", 
                  "vlm_nr": 0, 
                  "minor": 1053
                }, 
                "usable_size": 2097152, 
                "disk_state": ""
              }
            ], 
            "node_id": 0, 
            "flags": 0, 
            "al_size": 32
          }, 
          "id": 266, 
          "children": [
            {
              "layer_type": 3, 
              "rsc_name_suffix": "", 
              "storage": {
                "storage_vlms": [
                  {
                    "device_path": "/dev/data/one-vm-53-disk-4_00000", 
                    "vlm_nr": 0, 
                    "provider_kind": 3, 
                    "allocated_size": 2101248, 
                    "usable_size": 2097640, 
                    "lvm_thin": {}, 
                    "disk_state": "[]"
                  }
                ]
              }, 
              "id": 267
            }
          ]
        }, 
        "props": [
          {
            "value": "thindata", 
            "key": "AutoSelectedStorPoolName"
          }, 
          {
            "value": "thindata", 
            "key": "StorPoolName"
          }
        ], 
        "rsc_dfn_uuid": "2f52e627-ef06-4fe2-9158-59e72e1997bc", 
        "name": "one-vm-53-disk-4"
      }
    ]
  }
]

ghernadi commented 5 years ago

The only way I can get output similar to yours (with the .vlms[].allocated_size field missing) is by issuing the mentioned linstor command while the satellite is offline.

For thinly provisioned volumes, every linstor resource list or list-volumes queries the corresponding thin pools for the most up-to-date data (including the data_percent output). If the satellite is offline, that data is considered outdated, which seems to be what causes the missing .vlms[].allocated_size.

If you are still missing that field when the satellite is online, then I would be interested in what versions of linstor-controller, linstor-satellite and linstor-client you are using.
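
A quick way to check whether every satellite is currently connected (the node list shows Online for each reachable satellite):

# linstor node list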

kvaps commented 5 years ago

All satellites are online. I'm using version 0.9.5 for the controller and satellites, and 0.9.2 for linstor-client and python-linstor. I'll try to upgrade to the latest version now.

kvaps commented 5 years ago

OK, I found the difference between the resource list and resource list-volumes commands. A plain resource list does not add allocated_size to the volumes; resource list-volumes does. Sorry, my bad.
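
A sketch of the difference (this assumes list-volumes emits the same JSON structure as list; 71926 is the illustrative KiB figure mentioned above):

# linstor -m --output-version v1 r l -r one-vm-53-disk-4 | jq '.[].resources[].vlms[].allocated_size'
null
null
# linstor -m --output-version v1 r lv -r one-vm-53-disk-4 | jq '.[].resources[].vlms[].allocated_size'
71926
71926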

ghernadi commented 5 years ago

To be honest, this difference surprised me a bit as well. However, after some investigation, the reason turns out to be historical growth. resource list simply includes the volumes too, because that was easier for us to serialize; the client (in human-readable mode) simply skips the volume information. For list-volumes, however, we need the volume data to be as accurate as possible, so we had to implement this live fetching of the allocated space of thin volumes.

I'm glad this issue is resolved. We will still discuss internally whether we want to do something about this difference. Thanks for finding this.