Joao Eduardo Luis
2018-11-08 11:30:06 UTC
Hello Gesiel,
Welcome to Ceph!
In the future, you may want to address the ceph-users list.

> Hi everyone,
>
> I am a beginner in Ceph. I increased pg_num on a pool, and after the
> cluster rebalanced I increased pgp_num (a confession: I had not read
> the complete documentation about this operation :-( ). After this my
> cluster broke and everything stopped. The cluster is not rebalancing,
> and my impression is that everything is stuck.
>
> Below is my "ceph -s". Can anyone help me?
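
For reference, the pg_num/pgp_num change described above is normally driven
with the pool commands below; the pool name "mypool" is a placeholder, and
the target of 1024 only mirrors the PG count shown in the status further
down, neither is taken from the original message:

  # inspect the current values for the pool
  ceph osd pool get mypool pg_num
  ceph osd pool get mypool pgp_num

  # raise pg_num first, then bring pgp_num up to match so the new PGs
  # actually get remapped across the OSDs (ideally in small steps)
  ceph osd pool set mypool pg_num 1024
  ceph osd pool set mypool pgp_num 1024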

You have two osds down. Depending on how your data is mapped, your pgs
may be waiting for those to come back up before they finish being
cleaned up.
-Joao
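
A minimal sketch of how one might locate and restart the down OSDs Joao
points at; the OSD id and the use of systemd units are assumptions, not
details from this thread:

  # see which OSDs are down and on which hosts they live
  ceph osd tree
  ceph health detail

  # on the affected host, restart the OSD daemon (id 7 is a placeholder)
  systemctl start ceph-osd@7

  # then watch peering/activation progress
  ceph -s
  ceph pg dump_stuck inactive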

> +++++++
>
>   cluster:
>     id:     ab5dcb0c-480d-419c-bcb8-013cbcce5c4d
>     health: HEALTH_WARN
>             14402/995493 objects misplaced (1.447%)
>             Reduced data availability: 348 pgs inactive, 313 pgs peering
>
>   services:
>     mon: 3 daemons, quorum cmonitor,thanos,cmonitor2
>     mgr: thanos(active), standbys: cmonitor
>     osd: 19 osds: 17 up, 17 in; 221 remapped pgs
>
>   data:
>     pools:   1 pools, 1024 pgs
>     objects: 331.8 k objects, 1.3 TiB
>     usage:   3.8 TiB used, 7.4 TiB / 11 TiB avail
>     pgs:     1.660% pgs unknown
>              32.324% pgs not active
>              14402/995493 objects misplaced (1.447%)
>              676 active+clean
>              186 remapped+peering
>              127 peering
>              18  activating+remapped
>              17  unknown
>
> Regards,
> Gesiel
_______________________________________________
Ceph-community mailing list
http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com