rpc-ceph is no longer being developed or tested. Please use the upstream ceph-ansible playbooks for any future deployments.
rpc-ceph deploys Ceph as an RPC stand-alone platform in a uniform, managed, and tested way to ensure version consistency. By adding automated tests, rpc-ceph provides a way to manage tested versions of ceph-ansible used in RPC deployments.

rpc-ceph is a thin wrapper around the ceph-ansible project. rpc-ceph manages the versions of ansible and ceph-ansible by providing:

- Tested ceph-ansible and Ceph releases.
- fio (used for benchmarking).
Deploying rpc-ceph uses bootstrap-ansible.sh, ceph-ansible, default group_vars, and a pre-created playbook.

NOTE: Anything that can be configured with ceph-ansible is configurable with rpc-ceph.
We do not recommend or use containers for rpc-ceph production deployments. Containers are set up and used as part of the run_tests.sh (AIO) testing strategy only. The default playbooks are not set up to build containers or configure any of the required container-specific roles.
The inventory should consist of the following:

- An rsyslog_all host, pointing to an existing or new rsyslog logging server.
- benchmark_hosts - the host on which to run benchmarking (read benchmark/README.md for more).

Configure the following inventory:

- The ansible_host var for each host.
- dedicated_devices for OSD hosts.

Configure a variables file including the following ceph-ansible vars:
- monitor_interface
- public_network
- cluster_network
- osd_scenario
- repo_server_interface
- Any other ceph-ansible settings you want to configure.

Set any override vars in playbooks/group_vars/host_group/overrides.yml. This allows:
- Overriding any variables from ceph.conf using ceph_conf_overrides_extra or ceph_conf_overrides_<group>_extra.
- The default group_vars to remain in place, which means you do not have to respecify any vars you aren't setting.
- The ceph_conf_overrides_<group>_extra var will override vars only for the hosts in that group, for each of the currently supported groups.
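For illustration, an inventory and a user variables file might look like the following sketches. All host names, addresses, interface names, networks, device paths, and override values here are assumptions for the example, not rpc-ceph defaults:

```ini
; Example inventory sketch (hypothetical hosts and devices).
[mons]
mon1 ansible_host=10.0.0.11

[osds]
osd1 ansible_host=10.0.0.21 dedicated_devices="['/dev/sdb']"

[rsyslog_all]
syslog1 ansible_host=10.0.0.31

[benchmark_hosts]
bench1 ansible_host=10.0.0.41
```

```yaml
# Example user vars file (illustrative values only).
monitor_interface: eth1           # interface the mons listen on (assumed name)
public_network: 172.29.236.0/22   # client-facing network (assumed range)
cluster_network: 172.29.244.0/22  # replication network (assumed range)
osd_scenario: collocated
repo_server_interface: eth1

# Assumed to follow the ceph_conf_overrides section layout from ceph-ansible;
# merged into the rendered ceph.conf for all hosts.
ceph_conf_overrides_extra:
  global:
    osd_pool_default_size: 3
```

The vars file is then passed on the command line with -e @<path to your vars file>.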
Run the bootstrap-ansible.sh script inside the scripts directory:

./scripts/bootstrap-ansible.sh

This configures ansible at a pre-tested version, creates a ceph-ansible-playbook binary that points to the appropriate ansible-playbook binary, and clones the required role repositories:
- ceph-ansible
- rsyslog_client
- openstack-ansible-plugins (ceph-ansible uses the config template plugin from here)
- haproxy_server
- rsyslog_server
Run the ceph-ansible playbooks from the playbooks directory:
ceph-ansible-playbook -i <link to your inventory file> playbooks/add-repo.yml -e @<link to your vars file>
ceph-ansible-playbook -i <link to your inventory file> playbooks/deploy-ceph.yml -e @<link to your vars file>
Run any additional playbooks from the playbooks directory:

- ceph-setup-logging.yml will set up the rsyslog client. Ensure you have an appropriate rsyslog server, or other log shipping location, set up; refer to https://docs.openstack.org/openstack-ansible-rsyslog_client/latest/ for more details.
- ceph-keystone-rgw.yml will set up the required keystone users and endpoints for Ceph.
- ceph-rgw-haproxy.yml will set up the HAProxy VIP for Ceph Rados GW. Ensure you specify a haproxy_all group in your inventory with the HAProxy hosts.
- ceph-rsyslog-server.yml will set up an rsyslog server on the rsyslog_all hosts specified. NB: if there is already an existing rsyslog server that you are connecting into, you should not run this.

Your deployment should now be successful.

NOTE: If there are any errors, troubleshoot as you would a standard ceph-ansible deployment.
For MaaS integration, perform the following export commands first. Otherwise just use ./run_tests.sh to build the AIO.
export PUBCLOUD_USERNAME=<username>
export PUBCLOUD_API_KEY=<api_key>
To run an AIO scenario for Ceph you can run the following export on a general1-8 or perf2-15 flavor instance, unless otherwise noted:
build_releasenotes: This builds the project release notes using sphinx and places them in the directory rpc-ceph/release/build/.
functional: This is a base AIO for Ceph that includes MaaS testing and runs on each commit, with the following components:
This job does not run the benchmarking playbooks.
bluestore: This is the same as the functional job but runs using bluestore, with 3 collocated OSD devices per OSD host.
rpco_newton: An RPC-O newton integration test that deploys an RPC-O AIO, integrates it with Ceph, and then runs Tempest tests. This runs daily, as it takes a long time to build.
NB: This requires a perf2-15 instance.
rpco_pike: This is the same as the rpco_newton job but built against the pike branch of RPC-O.
rpco_queens: This is the same as the rpco_newton and rpco_pike jobs but built against the queens branch of RPC-O.
rpco_rocky: This is the same as the rpco_newton and rpco_pike jobs but built against the rocky branch of RPC-O.
keystone_rgw: A basic keystone integration test that runs on each commit, utilizing the swift client to ensure keystone integration is working. Additionally, this test runs the FIO and RGW benchmarking playbooks to ensure they work, but does not run the MaaS playbooks.