Ceph is a unified distributed object store. If you don't plan to work with the source code, you can ignore the instructions below, as deploying Ceph from a release build is well documented.
This blog covers steps to install single node CEPH from the source code.
Download the ceph-install script from https://github.com/mohankri/goceph
The script downloads Ceph, installs dependencies, and modifies the Ceph startup script. The script may have shortcomings, but it can easily be modified to suit your needs.
make_version.sh -n ceph_ver.h (to mark it as a private build)
On a successful run it will download a specific version from ceph.com (currently the hammer release; modify the script accordingly).
If debugging is required, make sure to turn off compiler optimization:
CXXFLAGS="-ggdb -fno-omit-frame-pointer" CFLAGS=" -ggdb -fno-omit-frame-pointer" ./configure
The script also downloads all the dependencies, builds Ceph, and updates the Ceph startup script.
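If you prefer to drive the build by hand instead of through the script, a rough sketch of a manual debug build for a hammer-era autotools source tree looks like this (the source directory name and job count are assumptions):

```shell
# Manual debug build sketch (hammer-era autotools tree); adjust paths to your checkout.
cd ceph
./autogen.sh
CXXFLAGS="-ggdb -fno-omit-frame-pointer" CFLAGS="-ggdb -fno-omit-frame-pointer" ./configure
make -j"$(nproc)"    # parallel build; this can take a long time
sudo make install    # installs ceph-mon, ceph-osd, and friends
```

The -fno-omit-frame-pointer flag keeps stack traces usable under gdb and perf, which is the point of a debug build.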
Before deploying Ceph, let's collect some baseline benchmark numbers.
Disk Benchmark
root@ceph8:~# dd if=/dev/zero of=/dev/sdb bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 8.52481 s, 126 MB/s
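The dd run above measures sequential writes; a sequential read number rounds out the picture. A sketch, reusing the same /dev/sdb device (this only reads from the disk, so it is non-destructive):

```shell
# Sequential read benchmark on the raw device (read-only)
dd if=/dev/sdb of=/dev/null bs=1G count=1 iflag=direct
```

Direct I/O (iflag=direct) bypasses the page cache, so the number reflects the disk rather than memory.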
Network Benchmark
root@ceph8:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.168 port 5001 connected with 192.168.0.139 port 42705
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.04 GBytes   895 Mbits/sec
root@ceph1:~# iperf -c ceph8 -i1 -t 10
------------------------------------------------------------
Client connecting to ceph8, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.139 port 42705 connected with 192.168.0.168 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   108 MBytes   905 Mbits/sec
root@ceph1:~/goceph# mkdir -p /etc/ceph
root@ceph1:/etc/ceph# ceph-deploy new ceph1
root@ceph1:/etc/ceph# ceph-deploy mon create-initial (if this fails, run it a second time; if it still fails, further debugging is needed)
root@ceph1:/etc/ceph# ceph-deploy disk zap ceph1:sdb
root@ceph1:/etc/ceph# ceph-deploy osd create ceph1:sdb
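After the OSD is created, it is worth confirming that the monitor actually sees it before going further. Assuming the standard Ceph CLI, something like:

```shell
# Sanity checks after OSD creation (run against the live cluster)
ceph -s          # overall cluster health and daemon counts
ceph osd stat    # should report the OSD as up and in
```

On a single-node hammer cluster, health will typically show warnings until the pool replication size is lowered later in this walkthrough.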
root@ceph1:/var/run/ceph# sudo mkdir -p /var/lib/ceph/osd/ceph-0
root@ceph1:/etc/ceph# ceph-osd -i 0
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
root@ceph1:/etc/ceph# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0 root default
0 0 osd.0 up 1.00000 1.00000
Moving osd.0 under the host requires modifying the CRUSH map:
root@ceph1:~# ceph osd crush add-bucket ceph1 host
root@ceph1:~# ceph osd crush move ceph1 root=default
root@ceph1:~# ceph osd crush add osd.0 1.0 host=ceph1
root@ceph1:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.00000 root default
-2 1.00000 host ceph1
0 1.00000 osd.0 up 1.00000 1.00000
root@ceph1:/etc/ceph# ceph osd pool create blockpool 64
root@ceph1:/etc/ceph# ceph osd pool set blockpool size 1
set pool 1 size to 1
root@ceph1:~/goceph# ceph osd pool set rbd min_size 1
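You can confirm the replication settings took effect with ceph osd pool get. Note that the min_size command above targets the default rbd pool; if blockpool is the pool you will actually use, you likely want min_size set there as well (shown below as an assumption):

```shell
# Verify replication settings on the pool we created
ceph osd pool get blockpool size        # expect: size: 1
ceph osd pool get blockpool min_size
# assumption: set min_size on blockpool too, since that is the pool used below
ceph osd pool set blockpool min_size 1
```

With size 1 and min_size 1, a single OSD is enough for the placement groups to go active+clean.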
Create a 100 GB block device in our pool:
root@ceph1:/etc/ceph# rbd create block0 --size 102400 --pool blockpool
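To actually use the image, you can map it through the kernel RBD client. A sketch (the resulting device path varies, and the rbd kernel module must be available):

```shell
# Inspect and map the image we just created
rbd info block0 --pool blockpool
sudo rbd map block0 --pool blockpool    # prints a device node such as /dev/rbd0
rbd showmapped                          # lists currently mapped images
```

Once mapped, the device can be formatted and mounted like any other block device.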
To stop the mon and OSD daemons:
sudo stop ceph-mon-all
sudo stop ceph-osd-all
You are all set here with single node.
/* Extra Notes */
sudo stop ceph-mon id=ceph1
stop ceph-osd id=1
/etc/init.d/ceph stop osd
If ceph-osd fails to come up, prepare and activate the OSD manually:
ceph-deploy osd prepare ceph1:sdb
ceph-deploy osd activate ceph1:sdb1
Courtesy: Mellanox