Ceph Ansible baremetal deployment

How many times have you tried to install Ceph? How many attempts failed for no apparent reason?

Every Ceph operator should agree with me when I say that the Ceph installer doesn't really work as expected so far.

Yes, I'm talking about ceph-deploy, and that is the main reason why I'm posting this guide about deploying Ceph with Ansible.

In this post, I will show how to install a Ceph cluster with Ansible on baremetal servers.

My configuration is as follows:

  1. 3 x Ceph monitor nodes with 8 GB of RAM each

  2. 3 x OSD nodes with 16 GB of RAM and 3 x 100 GB disks each

  3. 1 x RadosGateway node with 8 GB of RAM

First, download the ceph-ansible playbooks:

git clone https://github.com/ceph/ceph-ansible/
Cloning into 'ceph-ansible'...
remote: Counting objects: 5764, done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 5764 (delta 7), reused 0 (delta 0), pack-reused 5726
Receiving objects: 100% (5764/5764), 1.12 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (3465/3465), done.
Checking connectivity... done.

Move into the newly created ceph-ansible folder:

cd ceph-ansible/

Copy the sample vars files; we will configure our environment in these variable files.
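
Depending on the ceph-ansible version you cloned, the sample files may be named slightly differently (for example all.sample vs. all.yml.sample), so adjust the names accordingly; something along these lines should work:

cp site.yml.sample site.yml
cp group_vars/all.sample group_vars/all
cp group_vars/osds.sample group_vars/osds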

The next step is to configure the inventory with our servers. I don't really like using the /etc/ansible/hosts file; I prefer to create a new file per environment inside the playbook's folder.

Create a file with the following content, using your own IPs to match your servers with the desired roles inside the cluster:
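
As an example, my inventory_hosts file looks roughly like this (the IP addresses are placeholders; the group names are the ones used by the ceph-ansible playbooks):

# example IPs, replace with your own
[mons]
192.168.1.10
192.168.1.11
192.168.1.12

[osds]
192.168.1.20
192.168.1.21
192.168.1.22

[rgws]
192.168.1.30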

Test connectivity to your servers by pinging them through the Ansible ping module:
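
For example, against the inventory file created above:

ansible all -m ping -i inventory_hosts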

Edit the site.yml file. I will remove/comment out the mds nodes since I'm not going to use them.
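
In site.yml there is one play per node type, so it is enough to comment out the mdss one. Roughly like this (the surrounding plays and their exact contents vary between ceph-ansible versions):

#- hosts: mdss
#  become: True
#  roles:
#    - ceph-mds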

Edit the main variable file; here we are going to configure our environment.

Here we configure where the Ceph packages are going to be installed from; for now we use the upstream code with the stable release Infernalis.

Configure the interface on which the monitors will be listening.

Here we configure some OSD options, like the journal size and which networks will be used for public traffic and cluster data replication.
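
Putting the pieces described above together, the relevant part of my group_vars/all looks something like this (the interface name and networks are examples from my lab; variable names may differ slightly between ceph-ansible versions, so check the comments in the sample file):

# example values, adjust to your environment
ceph_stable: true
ceph_stable_release: infernalis

monitor_interface: eth1

journal_size: 1024
public_network: 192.168.1.0/24
cluster_network: 192.168.2.0/24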

Edit the osds variable file.

I will use the auto discovery option to let ceph-ansible select empty or unused devices on my servers to create the OSDs.
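
In my case the group_vars/osds file ends up with something like the following (journal_collocation puts the journal on the same device as the data; pick the journal scenario that matches your hardware):

# auto discovery with collocated journals, adjust to your setup
osd_auto_discovery: true
journal_collocation: true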

Of course you can use other options; I highly suggest you read the variable comments, as they provide valuable information about their usage.

We're ready to deploy Ceph with Ansible using our custom inventory_hosts file:
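
Run the main playbook against the inventory file:

ansible-playbook site.yml -i inventory_hosts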

After a while, you will have a fully functional Ceph cluster.

You may find some issues or bugs when running the playbooks. There is a lot of effort going into fixing issues in the upstream repository. If you encounter a new bug, please post an issue here: https://github.com/ceph/ceph-ansible/issues

You can check your cluster status with ceph -s. You should see all OSDs up and all pgs active+clean.
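
Run the following on one of the monitor nodes; ceph osd tree is a handy extra check to confirm that every OSD is up and in:

ceph -s
ceph osd tree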

Now we are going to do some tests. Create a pool:
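
For example, a pool named test with 128 placement groups (the name and pg count are arbitrary choices for this test):

ceph osd pool create test 128 128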

Create a big file:
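
For instance, a 1 GB file generated with dd (the name and size are arbitrary):

dd if=/dev/zero of=bigfile bs=1M count=1024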

Upload the file to rados:
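
Using the pool and file from the previous steps, store it as an object also called bigfile:

rados -p test put bigfile bigfile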

Check in which placement group your file is stored:
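
ceph osd map shows which placement group and OSDs hold the object, using the pool and object names from above:

ceph osd map test bigfile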

Query the placement group where your file was uploaded to see its detailed state:
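
Use the pgid reported by the previous command (1.48 below is just a placeholder):

ceph pg 1.48 query  # replace 1.48 with the pgid from ceph osd map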

That's all for now.

Regards, Eduardo Gonzalez
