3 HDDs with 3 SSD journals on a single host. Installation and setup worked as expected, but then I hit an issue with all PGs being stuck undersized+degraded+peered.
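(A quick way to confirm how CRUSH sees this layout, not captured below, is the OSD tree; on this box it should show a single host bucket "ceph8" holding osd.0, osd.1 and osd.2.)

root@ceph8:~# ceph osd tree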
root@ceph8:/etc/ceph# ceph -w
    cluster 4f62eb40-dcb8-4f38-b811-9bb440d5f054
     health HEALTH_WARN
            128 pgs degraded
            128 pgs stuck inactive
            128 pgs stuck unclean
            128 pgs undersized
     monmap e1: 1 mons at {ceph8=192.168.1.14:6789/0}
            election epoch 1, quorum 0 ceph8
     osdmap e87: 3 osds: 3 up, 3 in
      pgmap v164: 128 pgs, 1 pools, 0 bytes data, 0 objects
            110 MB used, 1396 GB / 1396 GB avail
                 128 undersized+degraded+peered
Hmm.. so the PGs are undersized, degraded and stuck in the peered state. "Undersized" means a PG has fewer replicas than the pool's size, and "peered" (rather than active) means it finished peering but can't serve I/O because its acting set is smaller than min_size...
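To list exactly which PGs are stuck and in which state, these two commands help (output omitted here, but every PG of blockpool should show up):

root@ceph8:~# ceph health detail
root@ceph8:~# ceph pg dump_stuck unclean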
root@ceph8:~# ceph osd pool get blockpool size
size: 3
root@ceph8:~# ceph osd pool get blockpool min_size
min_size: 2
Looks good as far as the replication count is concerned..
Let's check the placement policy next, i.e. which CRUSH ruleset decides where those replicas may be placed.
root@ceph8:~# ceph osd dump --format=json-pretty |more
........
"pool_max": 4,
"max_osd": 3,
"pools": [
{
"pool": 4,
"pool_name": "blockpool",
"flags": 1,
"flags_names": "hashpspool",
"type": 1,
"size": 3,
"min_size": 2,
"crush_ruleset": 0, (blockpool crush_ruleset is set to 0
Let's check what crush_ruleset 0 is all about in the CRUSH map
root@ceph8:~# ceph osd crush dump --format=json-pretty|more
.......
"rules": [
{
"rule_id": 0,
"rule_name": "replicated_ruleset",
"ruleset": 0,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -1,
"item_name": "default"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "host" (It's marked as host i.e peering required separate host)
},
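The same rule is easier to read in decompiled form. Dumping and decompiling the CRUSH map (the file names below are just examples) should show something along these lines:

root@ceph8:~# ceph osd getcrushmap -o /tmp/crushmap.bin
root@ceph8:~# crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

The "step chooseleaf firstn 0 type host" line is the key: it tells CRUSH to pick each replica from a different host.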
With only one host, CRUSH can never place the 2nd and 3rd replicas, which is exactly why every PG is stuck at undersized+degraded+peered. To place replicas on separate OSDs within the same host, we need a new ruleset that uses "osd" as the failure domain:
root@ceph8:~# ceph osd crush rule create-simple same-host default osd
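(For reference, the general form is "ceph osd crush rule create-simple <rule-name> <root> <failure-domain-type>", so "same-host" draws from the "default" root and treats individual OSDs as the failure domain. On a fresh single-node deployment the same result can be had up front by putting the line below in ceph.conf before the cluster is created, so that the default ruleset is generated with an OSD-level failure domain. Either way, keep in mind that all three copies now live on one host, so this only protects against single-disk failures, which is fine for a test box.)

[global]
osd crush chooseleaf type = 0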
Let's verify the new ruleset just created..
root@ceph8:~# ceph osd crush dump
......
        {
            "rule_id": 1,
            "rule_name": "same-host",
            "ruleset": 1,
            "type": 1,
            "min_size": 1,
            "max_size": 10,
            "steps": [
                {
                    "op": "take",
                    "item": -1,
                    "item_name": "default"
                },
                {
                    "op": "choose_firstn",
                    "num": 0,
                    "type": "osd"
                },
It's time to apply the new ruleset to the pool
root@ceph8:~# ceph osd pool set blockpool crush_ruleset 1 (1 is the ruleset ID)
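Before checking overall health, it's worth confirming the pool actually picked up the change; the pool line in the plain osd dump should now read "crush_ruleset 1":

root@ceph8:~# ceph osd dump | grep blockpool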
Time to check whether the cluster is healthy now
root@ceph8:~# ceph -s
    cluster 4f62eb40-dcb8-4f38-b811-9bb440d5f054
     health HEALTH_OK
     monmap e1: 1 mons at {ceph8=192.168.1.14:6789/0}
            election epoch 1, quorum 0 ceph8
     osdmap e93: 3 osds: 3 up, 3 in
      pgmap v182: 128 pgs, 1 pools, 0 bytes data, 0 objects
            113 MB used, 1396 GB / 1396 GB avail
                 128 active+clean
Great, blockpool is ready for testing!!!
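(As a quick smoke test, with names and durations below being arbitrary, a short write benchmark plus a throwaway RBD image will exercise the new placement:)

root@ceph8:~# rados -p blockpool bench 10 write
root@ceph8:~# rbd create test-img --pool blockpool --size 1024
root@ceph8:~# rbd ls -p blockpool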