I think you are a little confused by the terminology. :) There is no "Pool vs LVM". We try to use a consistent terminology here and it is important to clarify this a little:
storage driver: For example, LVM is a storage driver. A storage driver in a more general sense will usually consist of a kernel API which is accessible via a set of userspace tools. Storage drivers allow for the creation and administration of all kinds of storage entities, storage pools and storage volumes being two examples. Other examples of storage drivers are BTRFS, ZFS, and, if we allow for a freer interpretation of the concept, plain old directory structures.

storage pool: A storage pool is implemented by a storage driver. In abstract terms it can be seen as a resource that will usually consist of a bunch of clearly delineated sub-resources. A storage pool can thereby be a separate storage entity (e.g. a volume group in the LVM example) distinct from the entities it is made up of (e.g. a logical volume in the LVM example). But it can also be made up of the same kind of storage entities (e.g. ZFS datasets or BTRFS subvolumes, where the storage pool and its constituent storage volumes are made of the same type of storage entity).

storage volume: A storage volume is usually the smallest constituent of a storage pool that can be seen as its own clearly delineated storage entity. For example, a logical volume will usually be the smallest constituent of a volume group. Here, the volume group will usually be the storage pool and the logical volume the storage volume.
Having clarified all that and looking at how LXD uses these concepts in practice, we can see that each of our currently supported storage drivers (currently one of {BTRFS, DIR, LVM, ZFS}) allows for the creation of separate storage pools, and each of those pools will contain a bunch of separate storage volumes. Each of those storage volumes is kept clearly separate. This specifically means that the only way these storage volumes can affect each other is by:

1. sharing properties set on their storage pool. For example, BTRFS storage volumes (so-called subvolumes) will share the mount options set on the storage pool that they are part of.
2. competing for the space available on their storage pool. This competition can be limited or regulated if the storage driver used to implement the pool provides an appropriate API (e.g. qgroups for BTRFS and refquota with ZFS).

(Note that the two points mentioned above apply even in the plain directory storage driver case.)
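To make this a little more concrete, here is a minimal sketch using the lxc command line; the pool and volume names are made up, and the size limit shown at the end is an assumption about your LXD version's quota support:

```
# Create a pool backed by the BTRFS storage driver and a volume inside it
# (pool1/vol1 are hypothetical names).
lxc storage create pool1 btrfs
lxc storage volume create pool1 vol1

# Inspect the pool and the volumes it contains.
lxc storage show pool1
lxc storage volume list pool1

# Limit how much of the shared pool a single volume may consume; on BTRFS
# and ZFS this is backed by qgroups/refquota (exact key support may vary).
lxc storage volume set pool1 vol1 size 10GB
```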
In terms of safety you should be fine either way: putting containers in different storage volumes located in the same storage pool (e.g. for LVM, putting two containers in two separate logical volumes on the same volume group), or putting them in two separate storage volumes on different storage pools (e.g. for LVM, putting two containers in two separate logical volumes in different volume groups).
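As a rough illustration of the two layouts (the image alias, container names, and pool names below are placeholders):

```
# Two LVM-backed pools, i.e. two separate volume groups (names are made up).
lxc storage create lvmpool1 lvm
lxc storage create lvmpool2 lvm

# Layout A: two containers in separate volumes (logical volumes) on the same pool.
lxc launch ubuntu:16.04 c1 -s lvmpool1
lxc launch ubuntu:16.04 c2 -s lvmpool1

# Layout B: two containers on different pools (different volume groups).
lxc launch ubuntu:16.04 c3 -s lvmpool1
lxc launch ubuntu:16.04 c4 -s lvmpool2
```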
As for the question of whether to use multiple storage pools: this is highly dependent on your workload and your requirements. For example, if you have a bunch of non-performance-critical containers that run non-IO-intensive tasks and sit around idle most of the time, it might make sense to create a simple storage pool that uses the DIR storage driver and simply puts each container into a separate directory. On the other hand you might, on the same machine, have a bunch of IO-intensive containers that constantly write massive amounts of data to disk. For these you might want to create a storage pool using the ZFS or BTRFS storage driver and locate the pool on an SSD, so you can make full use of the power of a copy-on-write (COW) storage driver which also supports almost instantaneous snapshots and (hopefully) linear performance under stress.
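A minimal sketch of such a mixed setup (the pool names and the block device /dev/sdb are assumptions, adjust them to your hardware):

```
# A simple directory-backed pool for idle, non-IO-intensive containers.
lxc storage create slowpool dir

# A ZFS-backed pool created on an SSD block device (hypothetical /dev/sdb).
lxc storage create fastpool zfs source=/dev/sdb

# Place containers according to their workload.
lxc launch ubuntu:16.04 idle1 -s slowpool
lxc launch ubuntu:16.04 busy1 -s fastpool
```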
As for the last point, we currently do not have any plans to implement a GUI ourselves.
@stgraber, any last words or corrections? :)
Nope, all sounds good, including the GUI bit. For a GUI there are a couple of options: some (pretty minimal) web GUIs written in javascript, or going overkill and using OpenStack.
It certainly shouldn't be too difficult for someone to write a GUI client for the LXD API, and it could even reuse some of the code from those javascript web GUIs if it were written using something like Electron.
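For anyone curious, such a client would mostly just talk to the REST API; something along these lines works against a local LXD (the socket path may differ depending on how LXD was installed):

```
# Query the local LXD REST API over its unix socket (the "lxd" hostname in
# the URL is arbitrary; only the socket path matters).
curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0
curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
```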
Brauner, you helped to clarify the security of storage pools perfectly. What I was trying to gather was the best configuration for segregating our containers' storage using LXD (we had used separate logical volumes to segregate our LXC containers). Your insights have helped to address this concern. Thank you!
The LXD project has huge implications and is really exciting! At the heart of it, I see a movement to make Linux containers more secure and simpler to manage. I'd like to thank you folks for making things overall better :-).
FYI: the only functionality that is not included, and which we will be building, is remote scheduled backups with hard links and a single place to monitor multiple LXD hosts. Yes, I already checked out WebGUI; its interface is geared towards managing a single LXD host, whereas we want an overview of our server farm, which was previously LXC but which I am migrating to LXD.
There is no LXD documentation addressing the security of pools (that I have been able to find). Are LXD containers stored in the same pool secure from one another?
Is an LXD container stored in the same pool as other containers as secure as having each LXC container in its own LV? Should server managers use LVs and/or multiple LXD pools?
Also, the REST API allows for remote container management through a GUI. Is there any incentive for the LXD maintainers to build an LXD "Hyper-V"-style interface (similar to Windows Server)? It would be nice to see the status of a server farm without needing to build something. Solutions like OpenStack seem like overkill just to get a readout of the memory and hard drive status of containers.
e.g. (screenshot) https://www.5nine.com/5nine-manager-for-hyper-v-free.aspx