This blog covers setting up FIO as a Ceph client, assuming your Ceph cluster is already up and running from my previous blog.
FIO's rbd engine works against a RADOS Block Device (RBD), so we first have to create one.
Create a pool to hold your RADOS block device:
root@ceph2:~# ceph osd pool create blockpool 64
root@ceph2:~# rados lspools
rbd
blockpool
Create an RBD image on the pool; this is the image we will map as a RADOS block device:
root@ceph2:~# rbd create block0 --size 102400 --pool blockpool
root@ceph2:~# rbd ls blockpool
block0
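Note that rbd create interprets --size in megabytes by default, so 102400 above gives a 100 GB image. A quick check of that arithmetic:

```python
# --size for "rbd create" defaults to megabytes (MB).
size_mb = 102400           # value passed to --size above
size_gb = size_mb / 1024   # convert MB to GB
print(size_gb)             # 100.0, i.e. a 100 GB image
```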
Map the RBD image through the kernel module to create a traditional block device at /dev/rbd<n>:
root@ceph2:~# rbd map block0 --pool blockpool
Now that the RBD image is created, it's time to set up FIO. In my experiment I run FIO on a separate node.
Copy ceph.client.admin.keyring from the admin node (or a node on which ceph-mon is running) to /etc/ceph/ on the FIO node.
Create /etc/ceph/ceph.conf with the following contents:
[global]
mon_host = <mon_ip_addr>
keyring = /etc/ceph/ceph.client.admin.keyring
Set up the RBD engine job file, examples/rbd.fio:
[global]
ioengine=rbd
clientname=admin
pool=blockpool
rbdname=block0
;invalidate=0
rw=write
bs=4k
time_based
runtime=120
[rbd_iodepth32]
iodepth=32
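fio job files are INI-style, and options in the [global] section apply to every job section. As a rough illustration (not fio's actual parser), Python's configparser can show how the [rbd_iodepth32] job inherits the global options; allow_no_value is needed for valueless flags like time_based:

```python
import configparser

# The job file from above, verbatim.
job_file = """\
[global]
ioengine=rbd
clientname=admin
pool=blockpool
rbdname=block0
;invalidate=0
rw=write
bs=4k
time_based
runtime=120

[rbd_iodepth32]
iodepth=32
"""

# allow_no_value=True accepts flag-style options such as "time_based";
# lines starting with ';' are treated as comments, matching the job file.
cfg = configparser.ConfigParser(allow_no_value=True, default_section="global")
cfg.read_string(job_file)

# The job section sees both its own options and the inherited globals.
job = cfg["rbd_iodepth32"]
print(job["ioengine"], job["bs"], job["iodepth"])  # rbd 4k 32
```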
Here you go...
root@ceph8:~/fio# ./fio examples/rbd.fio
rbd_iodepth32: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.2.9-20-g1520
Starting 1 process
rbd engine: RBD version: 0.1.9
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/2560KB/0KB /s] [0/640/0 iops] [eta 00m:00s]
rbd_iodepth32: (groupid=0, jobs=1): err= 0: pid=9535: Wed Aug 19 11:28:08 2015
write: io=320384KB, bw=2669.6KB/s, iops=667, runt=120013msec
slat (usec): min=38, max=348, avg=91.49, stdev=31.01
clat (msec): min=19, max=202, avg=46.30, stdev=26.70
lat (msec): min=20, max=202, avg=46.39, stdev=26.70
clat percentiles (msec):
| 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 26], 20.00th=[ 29],
| 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 37], 60.00th=[ 41],
| 70.00th=[ 48], 80.00th=[ 58], 90.00th=[ 87], 95.00th=[ 102],
| 99.00th=[ 145], 99.50th=[ 161], 99.90th=[ 196], 99.95th=[ 200],
| 99.99th=[ 202]
bw (KB /s): min= 1485, max= 3801, per=100.00%, avg=2673.87, stdev=541.11
lat (msec) : 20=0.01%, 50=74.29%, 100=20.23%, 250=5.48%
cpu : usr=5.29%, sys=1.11%, ctx=7179, majf=0, minf=16582
IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.0%, 8=0.0%, 16=0.0%, 32=3.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=80096/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: io=320384KB, aggrb=2669KB/s, minb=2669KB/s, maxb=2669KB/s, mint=120013msec, maxt=120013msec
Disk stats (read/write):
sda: ios=0/43, merge=0/15, ticks=0/128, in_queue=128, util=0.11%
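The headline numbers are internally consistent: 320384 KB written over 120013 ms gives the reported ~2669.6 KB/s, which at a 4 KB block size is ~667 IOPS and 80096 total writes. A quick check of that arithmetic, with the values taken from the fio output above:

```python
io_kb = 320384        # total write io from the fio summary
runtime_ms = 120013   # runt from the fio summary
bs_kb = 4             # block size (bs=4k)

bw = io_kb / (runtime_ms / 1000)   # bandwidth in KB/s
iops = bw / bs_kb                  # 4 KB writes per second
total_ios = io_kb // bs_kb         # total write operations issued

print(round(bw, 1), round(iops), total_ios)  # 2669.6 667 80096
```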