Create a mount point and reserve a small amount of RAM as a filesystem. In my case I reserved 4G:
mkdir /tmp/ramdisk
chmod 777 /tmp/ramdisk
mount -t tmpfs -o size=4G tmpfs /tmp/ramdisk/
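Note that a tmpfs mount does not survive a reboot. If you want the mount itself recreated automatically, an /etc/fstab entry like the following can be added (a sketch, assuming the same size and path as above; the journal file itself still has to be recreated after every boot):

```ini
# /etc/fstab entry to recreate the ramdisk mount at boot (sketch)
tmpfs  /tmp/ramdisk  tmpfs  size=4G,mode=0777  0  0
```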
In order to make Ceph aware of the tmpfs journal area, the Ceph configuration file needs to be modified; it usually resides at /etc/ceph/ceph.conf.
Modify ceph.conf to use the tmpfs area as the journal. Add an osd.0 section if it is missing:
[osd.0]
journal dio = false
osd journal size = 1000
osd journal = /tmp/ramdisk/osd.0.journal (the 0 corresponds to osd.0)
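If the node hosts several OSDs, the same settings can be written once in the generic [osd] section using Ceph's $id metavariable instead of repeating them per daemon (a sketch; it assumes every OSD's journal file fits in the ramdisk together):

```ini
[osd]
journal dio = false
osd journal size = 1000
osd journal = /tmp/ramdisk/osd.$id.journal
```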
Stop all ceph-osd daemons if they are running.
stop ceph-osd-all
Flush the existing journal data.
ceph-osd --flush-journal -i 0
Create a new journal using mkjournal; the command will read ceph.conf and make the appropriate configuration changes.
ceph-osd --mkjournal -i 0
Start the ceph-osd daemon again.
start ceph-osd-all
Using tmpfs as the journal on a single-node cluster with replication factor 1:
rados bench -p blockpool 60 write
Total time run: 64.473969
Total writes made: 740
Write size: 4194304
Bandwidth (MB/sec): 45.910
Stddev Bandwidth: 76.0158
Max bandwidth (MB/sec): 276
Min bandwidth (MB/sec): 0
Average Latency: 1.3925
Stddev Latency: 2.79704
Max latency: 14.1628
Min latency: 0.0586392
Using the HDD as the journal:
rados bench -p blockpool 60 write
Total time run: 60.618429
Total writes made: 579
Write size: 4194304
Bandwidth (MB/sec): 38.206
Stddev Bandwidth: 28.8425
Max bandwidth (MB/sec): 76
Min bandwidth (MB/sec): 0
Average Latency: 1.67496
Stddev Latency: 1.26224
Max latency: 6.45093
Min latency: 0.138346
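Comparing the two runs numerically (figures copied from the bench outputs above; a quick sketch, nothing Ceph-specific):

```python
# Figures taken from the two rados bench runs above.
tmpfs = {"bw": 45.910, "avg_lat": 1.3925, "stddev_bw": 76.0158}
hdd = {"bw": 38.206, "avg_lat": 1.67496, "stddev_bw": 28.8425}

# Relative bandwidth gain of the tmpfs journal over the HDD journal.
bw_gain_pct = (tmpfs["bw"] / hdd["bw"] - 1) * 100
print(f"bandwidth gain: {bw_gain_pct:.1f}%")  # roughly a 20% higher average

# The tmpfs run is far less steady, though: its bandwidth stddev is
# more than 2.5x that of the HDD run, with bursts up to 276 MB/sec
# followed by stalls at 0 MB/sec.
stddev_ratio = tmpfs["stddev_bw"] / hdd["stddev_bw"]
print(f"bandwidth stddev ratio: {stddev_ratio:.1f}x")
```

So the tmpfs journal buys about a fifth more average bandwidth, at the cost of much burstier behavior.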
Not impressive; I still need to do a bit more investigation.