ceph: Prepare the environment

The goal of this guide is to serve as a precursor to the installation of Ceph Nautilus on CentOS 7. This process was completed in a virtual environment and has been reproduced several times before this writing. Ceph is an open-source software-defined storage platform that replicates data and makes it fault-tolerant. A good definition and introduction to Ceph may be found in the Ceph documentation.

Ceph Environment Overview

For the simplicity of this guide, we will have only one monitor and three OSD servers. The hardware specifications are fairly minimal, as is the installation of CentOS 7 (a minimal install with very few extras added). Each of the hard drives is 32 GB; they should be much larger for a production environment.

fqdn                        public ip        cluster ip        disks            mem (GB)  cpu
mon1.it.megocollector.com   10.10.0.100/24   192.168.1.100/24  1                2         1
osd1.it.megocollector.com   10.10.0.101/24   192.168.1.101/24  4 (sdb sdc sdd)  2         1
osd2.it.megocollector.com   10.10.0.102/24   192.168.1.102/24  4 (sdb sdc sdd)  2         1
osd3.it.megocollector.com   10.10.0.103/24   192.168.1.103/24  4 (sdb sdc sdd)  2         1
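
If you want to confirm a VM matches these specs before going further, a few stock commands give a quick read on memory, CPUs, and disks (this check is not part of the original procedure, just a convenience):

free -h     # total memory
nproc       # CPU count
lsblk       # attached disks and their sizes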

When complete, there should be four servers with fully qualified domain names (FQDNs) registered in your local DNS, each containing a user named ceph that is a member of sudoers and can SSH into each of the other servers without a password.

Network

On each of the four servers, ensure that there are two network adapters: one adapter for the public IP and the other for the cluster IP. The public IP will need a gateway and point to a DNS server. The cluster IP will only be an IP address with NO gateway and NO DNS server.
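
As an illustration only, the two adapters on mon1 could be configured with nmcli roughly as follows. The connection names eth0/eth1 and the gateway/DNS address 10.10.0.1 are assumptions and will differ in your environment:

# Public adapter: address, gateway, and DNS (names and gateway are examples)
nmcli connection modify eth0 ipv4.method manual ipv4.addresses 10.10.0.100/24 ipv4.gateway 10.10.0.1 ipv4.dns 10.10.0.1
# Cluster adapter: address only, NO gateway, NO DNS
nmcli connection modify eth1 ipv4.method manual ipv4.addresses 192.168.1.100/24
nmcli connection up eth0
nmcli connection up eth1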

If you do not have a DNS server, edit /etc/hosts. The following may do the trick, but that is not how this was tested.

cat << 'EOF' >> /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.0.100 mon1 mon1.it.megocollector.com
10.10.0.101 osd1 osd1.it.megocollector.com
10.10.0.102 osd2 osd2.it.megocollector.com
10.10.0.103 osd3 osd3.it.megocollector.com
EOF
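
Whether you use DNS or /etc/hosts, a quick sanity check is to confirm that each name resolves to the expected public IP, for example:

getent hosts mon1.it.megocollector.com
getent hosts osd1.it.megocollector.com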

Hostname

Set the hostname on each of the four servers, substituting the appropriate fqdn as seen in the example below.

hostnamectl set-hostname mon1.it.megocollector.com --static
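
To confirm the change, hostnamectl should report the new static hostname, and hostname -f should return the FQDN (the latter relies on DNS or the /etc/hosts entries above being in place):

hostnamectl status
hostname -f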

SELinux

Until there is a better solution, permissive is better than disabled. The following two lines will put SELinux into permissive mode.

setenforce 0
sed -i --follow-symlinks 's/\(^SELINUX=\).*$/\1permissive/g' /etc/selinux/config
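
To verify, getenforce should now report Permissive, and the config file should carry the setting across reboots:

getenforce
grep ^SELINUX= /etc/selinux/config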

Firewall

For this example, let's disable it. An Ansible playbook from an earlier article will re-enable it with the appropriate ports open.

systemctl stop firewalld
systemctl disable firewalld
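
A quick check that firewalld is actually stopped and will not start at boot:

systemctl is-active firewalld    # should print: inactive
systemctl is-enabled firewalld   # should print: disabled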

After executing the ansible-playbook, the following firewall rules were added (as seen on a fresh install):

[ceph@mon1 ceph-ansible]$ sudo firewall-cmd --list-services
ceph ceph-mon dhcpv6-client ssh
[ceph@mon1 ceph-ansible]$ sudo firewall-cmd --list-ports
80/tcp 2003/tcp 4505/tcp 4506/tcp 6800-7300/tcp 6789/tcp 8443/tcp 9100/tcp 9283/tcp 3000/tcp 9092/tcp 9093/tcp 9094/tcp 9094/udp 8080/tcp

Create ceph user

This user must be able to sudo and SSH without being prompted for a password.

useradd ceph
passwd ceph
cat << 'EOF' > /etc/sudoers.d/ceph
ceph ALL=(ALL) NOPASSWD: ALL
EOF
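
Not strictly required, but you can tighten the permissions on the drop-in file, confirm its syntax, and test that the ceph user can sudo without a password:

chmod 0440 /etc/sudoers.d/ceph
visudo -cf /etc/sudoers.d/ceph     # syntax check
su - ceph -c 'sudo whoami'         # should print: root, with no password prompt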

Create Keys

Create SSH keys so that the ceph user can SSH without being prompted for a password. For this to work, change user to ceph; the '-' starts a login shell, which puts the ceph user in the /home/ceph directory. Then run the commands.

su - ceph
echo -e | ssh-keygen -t rsa -N ""               # accept the default key location, no passphrase
echo 'StrictHostKeyChecking no' > ~/.ssh/config
chmod 644 ~/.ssh/config
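
At this point the ceph user's ~/.ssh directory should contain the key pair and the config file:

ls -l ~/.ssh
# expected: config  id_rsa  id_rsa.pub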

From each of the four servers, copy the ceph user's SSH key to all four servers.

sshpass -p ceph ssh-copy-id 10.10.0.100 -f
sshpass -p ceph ssh-copy-id 10.10.0.101 -f
sshpass -p ceph ssh-copy-id 10.10.0.102 -f
sshpass -p ceph ssh-copy-id 10.10.0.103 -f
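
Note that sshpass is not part of a minimal CentOS 7 install. If the commands above complain that sshpass is missing, it can typically be installed from EPEL (assuming EPEL is acceptable in your environment):

sudo yum install -y epel-release
sudo yum install -y sshpass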

Test

From each server, test to see if the ceph user can ssh into each of the other servers.

ssh ceph@10.10.0.100
ssh ceph@10.10.0.101
ssh ceph@10.10.0.102
ssh ceph@10.10.0.103

# Or use the FQDN. This is a good test to see if DNS is working.

ssh ceph@mon1.it.megocollector.com
ssh ceph@osd1.it.megocollector.com
ssh ceph@osd2.it.megocollector.com
ssh ceph@osd3.it.megocollector.com
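
To make the round of checks less tedious, the same test can be scripted as a small loop, run as the ceph user on each server (hostnames assumed to match the /etc/hosts entries above):

for host in mon1 osd1 osd2 osd3; do
  ssh ceph@${host}.it.megocollector.com hostname   # should print each remote hostname, no password prompt
done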

So what about the hard drives? The /dev/sdb, /dev/sdc, and /dev/sdd drives will not be prepared in advance; Ceph will handle all of the preparation. If you want to confirm they are present and untouched first, a quick check is shown below.
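
On each OSD server, lsblk should list sdb, sdc, and sdd with no partitions or mountpoints (not part of the original write-up, just a sanity check):

lsblk /dev/sdb /dev/sdc /dev/sdd
# each disk should show its full size with no partitions or mountpoints

Done.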