Single Node Ceph Install
A quick guide for installing Ceph on a single node for demo purposes. It almost goes without saying that this is for tire-kickers who just want to test out the software. Ceph is a powerful distributed storage platform with a focus on spreading the failure domain across disks, servers, racks, pods, and datacenters. It doesn’t get a chance to shine if limited to a single node. With that said, let’s get on with it.
Inspired by: http://palmerville.github.io/2016/04/30/single-node-ceph-install.html
Hardware
This example uses a VMware Workstation 11 VM with 4 disks attached (1 for OS/App, 3 for Storage). Those installing on physical hardware for a more permanent home setup will obviously want to increase the OS disks for redundancy.
To get started, create a new VM with the following specs:
· Name: server
· Type: Linux
· Version: Ubuntu 16.04.3 (64-bit)
· Memory: 4GB
· Disk: 25GB (Dynamic)
· Network Interface 1: NAT
· Network Interface 2: Bridged (for direct access to the outer network)
Notice the name of the VM is “server”. VMware will inject this as the hostname during the Ubuntu install. You can call yours whatever you want, but for copy/paste ease of install via the commands below, I highly recommend keeping it the same.
Linux Install
For the OS install we are going to use Ubuntu Server 16.04. The default install is fine, including the default partitioning. The only thing you’ll want to change from the defaults is to select the optional OpenSSH Server.
Once the Linux installation is complete, shut the VM off to add the data disks. You need to add 3 separate 1TB (1024 GB) drives. Since these are dynamically allocated and we won’t actually be putting much data on them, you don’t need 3TB free on your host machine. Please don’t try to create smaller drives for the demo, as you’ll run into a situation where Ceph will refuse to write data once the drives fall below the default free space threshold. For your sanity, just make them 1TB.
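After powering the VM back on, you can confirm the new disks are visible. On most setups they will show up as /dev/sdb, /dev/sdc, and /dev/sdd, though device names can vary:
lsblk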
Ceph Install
Note: Execute the commands below as the root user (we will switch to a dedicated ceph-deploy user partway through).
For reference, if anything goes south the complete Ceph install documentation is here. Most of the below is identical to the official documentation. Where necessary I’ve adjusted it so it will work on a single node and made decisions on options to lower the learning curve.
Begin by installing the Ceph repo key.
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
Add the Ceph (jewel release) repo to your Ubuntu sources list.
echo deb http://download.ceph.com/debian-jewel/ xenial main | sudo tee /etc/apt/sources.list.d/ceph.list
Install the ceph-deploy utility; this is the admin tool that allows you to centrally manage and create new Ceph nodes. For those that have been around Ceph for a while, this is a VERY welcome upgrade to the old ‘here’s a list of things you’ll need to install and configure on each node, how that happens is up to you… might want to learn Chef/Puppet…’
Thankfully those days are behind us and now we have the awesome ceph-deploy.
apt-get update && sudo apt-get install ceph-deploy
We’ll want a dedicated user to handle Ceph configs and installs, so let’s create that user now.
useradd -m -s /bin/bash ceph-deploy
passwd ceph-deploy
This user needs passwordless sudo configured.
echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy
Verify permissions are correct on this file.
chmod 0440 /etc/sudoers.d/ceph-deploy
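As a quick sanity check, the following should print “root” without prompting for a password (assuming you are still the root user at this point):
su - ceph-deploy -c 'sudo whoami'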
Now let’s switch to this newly created user. All the rest of the guide will be commands issued as this user.
su - ceph-deploy
The ceph-deploy utility functions by ssh’ing to other nodes and executing commands. To accomplish this we need to create an RSA key pair to allow passwordless logins to the nodes we will be configuring (in this guide we are of course just talking about the local node we are on). Make sure you are still the ceph-deploy user and use ssh-keygen to generate the key pair. Just hit enter at all the prompts. Defaults are fine.
ssh-keygen
Now let’s install the generated public key on our destination nodes (in this case our only node, which happens to be the same box we are currently logged into).
ssh-copy-id ceph-deploy@server
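You can verify key-based login works; this should print the hostname without asking for a password:
ssh ceph-deploy@server hostname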
Make a new subdirectory in the ceph-deploy user’s home directory and change to it.
cd ~
mkdir my-cluster
cd my-cluster
Edit the /etc/hosts file to add mappings to the cluster nodes. Example:
$ cat /etc/hosts
127.0.0.1 localhost
192.168.43.132 server
Install dnsmasq to resolve the server name using the above hosts file.
sudo apt-get install dnsmasq
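A quick check that the name resolves as expected (the address should match the entry you added to /etc/hosts):
ping -c 1 server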
Set up an SSH access configuration by editing the .ssh/config file. Example:
Host server
Hostname server
User ceph-deploy
Create an initial cluster config in this directory.
ceph-deploy new server
This created a bunch of files in the current directory, one of which is the global config file. Edit this newly created initial configuration file.
nano ceph.conf
Add the following two lines:
osd pool default size = 2
osd crush chooseleaf type = 0
Default pool size is how many replicas of our data we want (2). The chooseleaf setting is required to tell Ceph we are only a single node and that it’s OK to store the same copy of data on the same physical node. Normally, for safety, Ceph distributes the copies and won’t leave all your eggs in the same basket (server).
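For reference, the resulting ceph.conf should look roughly like the following. The fsid, mon host address, and auth lines are generated by ceph-deploy and will differ on your system; only the last two lines are the ones we added:
[global]
fsid = <your-generated-uuid>
mon_initial_members = server
mon_host = 192.168.43.132
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd crush chooseleaf type = 0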
Time to install Ceph. This installs the Ceph binaries and copies our initial config file.
ceph-deploy install server
Before we can create storage OSDs we need to create a monitor.
ceph-deploy mon create-initial
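If the monitor came up cleanly, ceph-deploy will have gathered the cluster keyrings into the working directory. A quick listing should show ceph.client.admin.keyring along with the bootstrap keyrings (exact file names can vary slightly by version):
ls ~/my-cluster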
Clear the disks to remove all pre-existing data and partition tables.
sudo ceph-disk zap /dev/sdb
sudo ceph-disk zap /dev/sdc
sudo ceph-disk zap /dev/sdd
Now we can create the OSDs that will hold our data. Remember those 3 x 1TB drives we attached earlier? They should be /dev/sdb, /dev/sdc, and /dev/sdd. Let’s configure them.
ceph-deploy osd prepare server:sdb
ceph-deploy osd prepare server:sdc
ceph-deploy osd prepare server:sdd
And activate them.
ceph-deploy osd activate server:/dev/sdb1
ceph-deploy osd activate server:/dev/sdc1
ceph-deploy osd activate server:/dev/sdd1
Redistribute our config and keys.
ceph-deploy admin server
Depending on umask you may not be able to read one of the created files as a non-root user. Let’s correct that.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
At this point your “cluster” should be in working order and completely functional. Check health with:
ceph -s
The most important line in that output is the second line from the top:
health HEALTH_OK
This tells us that the cluster is happy and everything is working as expected.
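You can also confirm all three OSDs registered and came up; the IDs and weights in the output will depend on your setup:
ceph osd tree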
We’re not gonna stop there with just an installed cluster. We want the rest of the ceph functionality such as s3/swift object storage and cephfs.
Install the object storage gateway:
ceph-deploy rgw create server
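The gateway runs an embedded civetweb web server, which on jewel listens on port 7480 by default. If it started correctly, a quick anonymous request should return a small ListAllMyBucketsResult XML document:
curl http://server:7480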
Install the CephFS metadata server:
ceph-deploy mds create server
That’s it! You now have a fully functional ceph “cluster” with 1 monitor, 3 OSDs, 1 metadata server, and 1 rados gateway.
Usage
Now that we have this installed, how do we use it?
Ceph FS
One of the most sought-after features is the distributed filesystem provided by CephFS.
Before we can create a filesystem we need to create the OSD pools to store it on.
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
Now create the filesystem.
ceph fs new cephfs cephfs_metadata cephfs_data
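You can confirm the filesystem exists and that the metadata server has picked it up; the exact output will vary, but it should list cephfs with the two pools and show one active MDS:
ceph fs ls
ceph mds stat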
In order to mount it in Linux we need to install the ceph client libraries.
sudo apt-get install ceph-fs-common
Next we need to create a mountpoint for the filesystem.
sudo mkdir /mnt/mycephfs
By default all access operations require authentication. The ceph install has created some default credentials for us. To view them:
cat ~/my-cluster/ceph.client.admin.keyring
[client.admin]
key = AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==
The key string is what we are looking for; we will use it to mount this newly created filesystem.
sudo mount -t ceph server:6789:/ /mnt/mycephfs -o name=admin,secret=AQCv2yRXOVlUMxAAK+e6gehnirXTV0O8PrJYQQ==
Now that it’s mounted, let’s see what it looks like.
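A simple check is to look at the mounted filesystem and drop a test file on it. The size reported will depend on your OSD sizes and replication settings, and the test file name here is just an example:
df -h /mnt/mycephfs
sudo touch /mnt/mycephfs/hello.txt
ls /mnt/mycephfs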