Installs and configures Ceph, a distributed network storage and filesystem designed to provide excellent performance, reliability, and scalability.
The current version focuses on deploying Monitors and OSDs on Ubuntu.
For documentation on how to use this cookbook, refer to the USAGE section.
For help, use the Gitter chat, the mailing list, or the issue tracker.
>= 11.6.0
Tested as working:
The ceph cookbook requires the following cookbooks from Chef:
Ceph cluster design is beyond the scope of this README; please turn to the public wiki, the mailing lists, our IRC channel, or contact Inktank:

- http://ceph.com/docs/master
- http://ceph.com/resources/mailing-list-irc/
- http://www.inktank.com/
This cookbook can be used to implement a chosen cluster design. Most of the configuration is retrieved from node attributes, which can be set by an environment or by a wrapper cookbook. A basic cluster configuration will need most of the following attributes:
- `node['ceph']['config']['fsid']` - the cluster UUID
- `node['ceph']['config']['global']['public network']` - a CIDR specification of the public network
- `node['ceph']['config']['global']['cluster network']` - a CIDR specification of a separate cluster replication network
- `node['ceph']['config']['global']['rgw dns name']` - the main domain of the radosgw daemon

Most notably, the configuration does NOT need to set `mon_initial_members`, because the cookbook does a node search to find other mons in the same environment.
The other set of attributes that this recipe needs is `node['ceph']['osd_devices']`, an array of OSD definitions for the node; the listed devices are set up with `ceph-disk-prepare`.
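As a hedged illustration, an osd_devices array might look like the following (the device paths are hypothetical, and the exact supported keys depend on the cookbook version):

```ruby
# Hypothetical osd_devices value: each entry describes one OSD.
# The 'device' and 'journal' keys are assumptions for illustration;
# check the cookbook's attributes and recipes for the supported keys.
osd_devices = [
  { 'device' => '/dev/sdb' },
  { 'device' => '/dev/sdc', 'journal' => '/dev/sdd' }
]
```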
To automate setting several of these node attributes, it is recommended to use a policy wrapper cookbook. This allows the ability to use Chef Server cookbook versions along with environment version restrictions to roll out configuration changes in an ordered fashion.
It can also help automate some settings. For example, a wrapper cookbook could inspect the list of hard drives that ohai has found and populate `node['ceph']['osd_devices']` accordingly, instead of typing them all in by hand:
```ruby
node.override['ceph']['osd_devices'] = node['block_device']
                                       .select { |name, _data| name =~ /^sd[b-z]/ }
                                       .sort
                                       .map { |name, _data| { 'journal' => "/dev/#{name}" } }
```
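The filtering logic above can be sanity-checked outside Chef against a stubbed ohai `block_device` hash (the device names and sizes below are made up for the test):

```ruby
# Stubbed ohai data: root disk (sda), two data disks, and a CD drive.
block_device = {
  'sda' => { 'size' => '500107862016' },
  'sdb' => { 'size' => '2000398934016' },
  'sdc' => { 'size' => '2000398934016' },
  'sr0' => { 'size' => '0' }
}

# Same filtering as the wrapper-cookbook snippet above: keep sdb..sdz,
# sort by name, and build one definition hash per device.
osd_devices = block_device
              .select { |name, _data| name =~ /^sd[b-z]/ }
              .sort
              .map { |name, _data| { 'journal' => "/dev/#{name}" } }
# osd_devices is [{ 'journal' => '/dev/sdb' }, { 'journal' => '/dev/sdc' }]
```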
For best results, the wrapper cookbook's recipe should be placed before the Ceph cookbook in the node's runlist. This will ensure that any attributes are in place before the Ceph cookbook runs and consumes those attributes.
Ceph monitor nodes should use the ceph-mon role.
Includes:
Ceph metadata server nodes should use the ceph-mds role.
Includes:
Ceph OSD nodes should use the ceph-osd role.
Includes:
Ceph Rados Gateway nodes should use the ceph-radosgw role.
- `node['ceph']['search_environment']` - a custom Chef environment to search when looking for mon nodes. The cookbook defaults to searching the current environment
- `node['ceph']['branch']` - selects whether to install the stable, testing, or dev version of Ceph
- `node['ceph']['version']` - install a version of Ceph different from the cookbook default. If this is changed in a wrapper cookbook, some repository URLs may also need to be replaced; they are found in attributes/repo.rb. If the branch attribute is set to dev, this selects the gitbuilder branch to install
- `node['ceph']['extras_repo']` - whether to install the ceph extras repo. The tgt recipe requires this
- `node['ceph']['config']['fsid']` - the cluster UUID
- `node['ceph']['config']['global']['public network']` - a CIDR specification of the public network
- `node['ceph']['config']['global']['cluster network']` - a CIDR specification of a separate cluster replication network
- `node['ceph']['config']['config-sections']` - add to this hash to add extra config sections to the ceph.conf
- `node['ceph']['user_pools']` - an array of pool definitions, with attributes `name`, `pg_num` and `create_options` (optional), that are automatically created when a monitor is deployed
- `node['ceph']['config']['mon']` - a hash of settings to save in ceph.conf in the [mon] section, such as `'mon osd nearfull ratio' => '0.70'`
- `node['ceph']['osd_devices']` - an array of OSD definitions for the current node
- `node['ceph']['config']['osd']` - a hash of settings to save in ceph.conf in the [osd] section, such as `'osd max backfills' => 2`
- `node['ceph']['config']['osd']['osd crush location']` - this attribute can be set on a per-node basis to maintain CRUSH map locations
- `node['ceph']['config']['mds']` - a hash of settings to save in ceph.conf in the [mds] section, such as `'mds cache size' => '100000'`
- `node['ceph']['cephfs_mount']` - where the cephfs recipe should mount CephFS
- `node['ceph']['cephfs_use_fuse']` - whether the cephfs recipe should use the fuse CephFS client; defaults to heuristics based on the kernel version
- `node['ceph']['radosgw']['api_fqdn']` - what vhost to configure in the web server
- `node['ceph']['radosgw']['admin_email']` - the admin email address to configure in the web server
- `node['ceph']['radosgw']['rgw_addr']` - the web server's bind address, such as `*:80`
- `node['ceph']['radosgw']['rgw_port']` - if set, connects to the radosgw fastcgi over this port instead of a unix socket
- `node['ceph']['radosgw']['webserver_companion']` - defaults to 'apache2'; can be set to 'civetweb', or to false to leave it unconfigured
- `node['ceph']['radosgw']['path']` - where to save the s3gw.fcgi file
- `node['ceph']['config']['global']['rgw dns name']` - the main domain of the radosgw daemon, used to calculate the bucket name from a subdomain

The ceph_client LWRP provides an easy way to construct a Ceph client key. These keys are needed by anything that needs to talk to the Ceph cluster, including RadosGW, CephFS, and RBD access.
By default, the key is created with the caps `{ 'mon' => 'allow r', 'osd' => 'allow r' }`. The key is named `client.#{name}.#{hostname}` and is saved in `/etc/ceph/ceph.client.#{name}.#{hostname}.keyring` if `as_keyring` is set, or in `/etc/ceph/ceph.client.#{name}.#{hostname}.secret` if `as_keyring` is not set.
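A recipe might request a client key like this (the resource name and cap strings are illustrative; `caps` and `as_keyring` are the settings described above):

```ruby
# Illustrative usage: creates a key named client.rbd.<hostname>
# with custom capabilities instead of the read-only defaults.
ceph_client 'rbd' do
  caps 'mon' => 'allow r', 'osd' => 'allow rwx'
  as_keyring true # save as a keyring file under /etc/ceph
end
```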
The ceph_cephfs LWRP provides an easy way to mount CephFS. It will automatically create a Ceph client key for the machine and mount CephFS to the specified location. If the kernel client is used, instead of the fuse client, a pre-existing subdirectory of CephFS can be mounted instead of the root.
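A hedged sketch of mounting CephFS from a recipe follows; the mount point is given as the resource name, and the `use_fuse` property is inferred from the description above, so both may differ by cookbook version:

```ruby
# Illustrative usage: mount CephFS at /mnt/cephfs with the kernel client.
# Property names here are assumptions, not confirmed by this README.
ceph_cephfs '/mnt/cephfs' do
  use_fuse false
end
```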
The ceph_pool LWRP provides an easy way to create and delete Ceph pools.
It assumes that connectivity to the cluster is set up and that admin credentials are available from the default locations, e.g. /etc/ceph/ceph.client.admin.keyring.
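Creating a pool might look like the following (the pool name and pg count are illustrative, and the property name is an assumption based on the `pg_num` pool attribute listed earlier):

```ruby
# Illustrative usage: create a pool named 'images' with 128 placement groups.
ceph_pool 'images' do
  pg_num 128
  action :create
end
```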
All contributions to this cookbook must follow its style guide. Travis will automatically verify that every Pull Request complies.
```bash
eval "$(chef shell-init bash)"
bundle install
bundle exec rake style
```
This cookbook uses Test Kitchen to verify functionality. A Pull Request can't be merged if it causes any of the test configurations to fail.
```bash
eval "$(chef shell-init bash)"
bundle install
bundle exec kitchen test aio-debian-74
bundle exec kitchen test aio-ubuntu-1204
bundle exec kitchen test aio-ubuntu-1404
```
Author: Kyle Bader kyle.bader@dreamhost.com
Copyright 2013, DreamHost Web Hosting and Inktank Storage Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.