(Since I couldn't find a guide on how to draft a proposal, please forgive me if I've submitted this to the wrong place.)
Motivation
In my opinion, CSI is a standard interface that aims to make it easier for every storage vendor to integrate with different container orchestration systems. Generally it works well, but in some use cases it is still complicated for vendors to implement the ControllerPublishVolume and ControllerUnpublishVolume requests.
Use case 1: Dedicated server
If users deploy a CO system (for example, Kubernetes) on dedicated servers, they may specify different storage systems. That means every storage system plugin needs to make sure its volumes can be attached to different kinds of servers and work well on different operating systems.
Use case 2: Private or hybrid cloud
If the CO system is deployed on private or hybrid cloud nodes, users may specify storage systems that are not supported, or poorly supported, by that private cloud (for example, VMware + Cinder). Since the VASA provider cannot attach a Cinder volume to VMware VMs, the VMware SP has to talk to Cinder directly to attach the volume. In the end there will be numerous siloed, one-off integrations, which is definitely what we don't want to see.
Goal
To solve this problem, we plan to design a standard library in CSI that provides volume attaching for different storage vendors. Any storage system that wants to provide storage resources for dedicated servers or for VMs in a private cloud can call this library to finish host-side volume discovery and then mount the device path into the container.
Proposed Design
As we know, there are many storage protocols, such as iSCSI, RBD, FC, SMBFS, and so forth, and some of them are implemented differently depending on the system type (x86, s390, ppc64) and OS type (Linux, Windows). This library will therefore communicate with the kernel and expose a unified interface to different SPs.
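To make the per-platform behavior concrete, here is a hedged sketch of how an implementation might detect the platform and OS type using Go's runtime package. The constant names follow the API object in this proposal; the mapping functions detectPlatform and detectOsType are my own illustration, not part of the design.

```go
package main

import (
	"fmt"
	"runtime"
)

// Platform and OS type constants as proposed in this document.
const (
	PLATFORM_ALL   = "ALL"
	PLATFORM_X86   = "X86"
	PLATFORM_S390  = "S390"
	PLATFORM_PPC64 = "PPC64"

	OS_TYPE_ALL     = "ALL"
	OS_TYPE_LINUX   = "LINUX"
	OS_TYPE_WINDOWS = "WIN"
)

// detectPlatform maps a Go architecture string onto the proposed
// platform constants; unknown architectures fall back to PLATFORM_ALL.
func detectPlatform(goarch string) string {
	switch goarch {
	case "386", "amd64":
		return PLATFORM_X86
	case "s390x":
		return PLATFORM_S390
	case "ppc64", "ppc64le":
		return PLATFORM_PPC64
	default:
		return PLATFORM_ALL
	}
}

// detectOsType maps a Go OS string onto the proposed OS type constants.
func detectOsType(goos string) string {
	switch goos {
	case "linux":
		return OS_TYPE_LINUX
	case "windows":
		return OS_TYPE_WINDOWS
	default:
		return OS_TYPE_ALL
	}
}

func main() {
	// Report which branch of the library would be selected on this host.
	fmt.Println(detectPlatform(runtime.GOARCH), detectOsType(runtime.GOOS))
}
```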
API Object
The API object will have the following structure:
const (
	// Platform type
	PLATFORM_ALL   = "ALL"
	PLATFORM_X86   = "X86"
	PLATFORM_S390  = "S390"
	PLATFORM_PPC64 = "PPC64"

	// Operating system type
	OS_TYPE_ALL     = "ALL"
	OS_TYPE_LINUX   = "LINUX"
	OS_TYPE_WINDOWS = "WIN"

	// Device driver type
	ISCSI               = "ISCSI"
	ISER                = "ISER"
	FIBRE_CHANNEL       = "FIBRE_CHANNEL"
	AOE                 = "AOE"
	DRBD                = "DRBD"
	NFS                 = "NFS"
	GLUSTERFS           = "GLUSTERFS"
	LOCAL               = "LOCAL"
	GPFS                = "GPFS"
	HUAWEISDSHYPERVISOR = "HUAWEISDSHYPERVISOR"
	HGST                = "HGST"
	RBD                 = "RBD"
	SCALEIO             = "SCALEIO"
	SCALITY             = "SCALITY"
	QUOBYTE             = "QUOBYTE"
	DISCO               = "DISCO"
	VZSTORAGE           = "VZSTORAGE"

	// A unified device path prefix
	VOLUME_LINK_DIR = "/dev/disk/by-id/"
)
// Connector is an interface indicating what the outside world can do with
// this library; notice that it is at a very early stage right now.
type Connector interface {
	GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error)
	ConnectVolume(conn *ConnectionInfo) (string, error)
	DisconnectVolume(conn *ConnectionInfo) (string, error)
	GetDevicePath(volumeId string) (string, error)
}
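To show how an SP would drive this interface, here is a hedged sketch of the publish-side flow: connect the volume through the library, then mount the returned device path into the container. The fakeConnector and publishVolume names are my own illustration, not part of the proposal; a real implementation would talk to the kernel (iscsiadm, rbd map, and so on), and the types are repeated in simplified form so the snippet compiles on its own.

```go
package main

import "fmt"

// Simplified copies of the proposed types, so this sketch is self-contained.
type ConnectionInfo struct {
	DriverVolumeType string
	ConnectionData   map[string]interface{}
}

type ConnectorProperties struct {
	MultiPath     bool
	DoLocalAttach bool
}

type Connector interface {
	GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error)
	ConnectVolume(conn *ConnectionInfo) (string, error)
	DisconnectVolume(conn *ConnectionInfo) (string, error)
	GetDevicePath(volumeId string) (string, error)
}

// fakeConnector is a stand-in for the real library; it only pretends that
// host-side discovery succeeded and returns the stable by-id link.
type fakeConnector struct{}

func (f *fakeConnector) GetConnectorProperties(multiPath, doLocalAttach bool) (*ConnectorProperties, error) {
	return &ConnectorProperties{MultiPath: multiPath, DoLocalAttach: doLocalAttach}, nil
}

func (f *fakeConnector) ConnectVolume(conn *ConnectionInfo) (string, error) {
	return "/dev/disk/by-id/" + conn.ConnectionData["volumeId"].(string), nil
}

func (f *fakeConnector) DisconnectVolume(conn *ConnectionInfo) (string, error) {
	return "", nil
}

func (f *fakeConnector) GetDevicePath(volumeId string) (string, error) {
	return "/dev/disk/by-id/" + volumeId, nil
}

// publishVolume sketches the node-publish flow: ask the library to attach
// the volume, then hand the device path to the mount step (omitted here).
func publishVolume(c Connector, info *ConnectionInfo) (string, error) {
	devPath, err := c.ConnectVolume(info)
	if err != nil {
		return "", err
	}
	// A real plugin would now mount devPath into the container.
	return devPath, nil
}

func main() {
	info := &ConnectionInfo{
		DriverVolumeType: "ISCSI",
		ConnectionData:   map[string]interface{}{"volumeId": "volume-0001"},
	}
	path, _ := publishVolume(&fakeConnector{}, info)
	fmt.Println(path)
}
```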
// ConnectorProperties is a struct used to tell the storage backend how to
// initialize the connection of a volume. Please notice that it is OPTIONAL.
type ConnectorProperties struct {
	DoLocalAttach bool   `json:"doLocalAttach"`
	Platform      string `json:"platform"`
	OsType        string `json:"osType"`
	Ip            string `json:"ip"`
	Host          string `json:"host"`
	MultiPath     bool   `json:"multipath"`
	Initiator     string `json:"initiator"`
}
// ConnectionInfo is a structure holding all the properties of a
// connection when connecting a volume.
type ConnectionInfo struct {
	// The type of driver volume, such as iscsi, rbd, and so on.
	DriverVolumeType string `json:"driverVolumeType"`
	// Required parameters to connect the volume; they differ per
	// DriverVolumeType. For example, for the iscsi driver, see the struct
	// IscsiConnectionData below. NOTICE that you have to convert it into a map.
	ConnectionData map[string]interface{} `json:"data"`
}
type IscsiConnectionData struct {
	// Boolean indicating whether discovery was used.
	TargetDiscovered bool `json:"targetDiscovered"`
	// The IQN of the iSCSI target.
	TargetIqn string `json:"targetIqn"`
	// The portal of the iSCSI target.
	TargetPortal string `json:"targetPortal"`
	// The LUN of the iSCSI target.
	TargetLun string `json:"targetLun"`
	// The UUID of the volume.
	VolumeId string `json:"volumeId"`
	// The authentication details.
	AuthUsername string `json:"authUsername"`
	AuthPassword string `json:"authPassword"`
}
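Since ConnectionData is a generic map, an SP has to convert the typed per-driver struct (such as IscsiConnectionData) into a map before building a ConnectionInfo. Here is a hedged sketch of one way to do that, via a JSON round trip; the toMap helper and the sample target values are my own illustration, not part of the proposal.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ConnectionInfo and IscsiConnectionData as defined in this proposal.
type ConnectionInfo struct {
	DriverVolumeType string                 `json:"driverVolumeType"`
	ConnectionData   map[string]interface{} `json:"data"`
}

type IscsiConnectionData struct {
	TargetDiscovered bool   `json:"targetDiscovered"`
	TargetIqn        string `json:"targetIqn"`
	TargetPortal     string `json:"targetPortal"`
	TargetLun        string `json:"targetLun"`
	VolumeId         string `json:"volumeId"`
	AuthUsername     string `json:"authUsername"`
	AuthPassword     string `json:"authPassword"`
}

// toMap converts a typed connection-data struct into the generic map that
// ConnectionInfo expects, using a JSON marshal/unmarshal round trip so the
// map keys match the struct's json tags.
func toMap(v interface{}) (map[string]interface{}, error) {
	raw, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	return m, nil
}

func main() {
	iscsi := IscsiConnectionData{
		TargetDiscovered: true,
		TargetIqn:        "iqn.2017-01.com.example:volume-0001",
		TargetPortal:     "192.0.2.10:3260",
		TargetLun:        "1",
		VolumeId:         "volume-0001",
	}
	data, err := toMap(iscsi)
	if err != nil {
		panic(err)
	}
	conn := ConnectionInfo{DriverVolumeType: "ISCSI", ConnectionData: data}
	fmt.Println(conn.DriverVolumeType, conn.ConnectionData["targetIqn"])
}
```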