Discussion:
[ceph-users] Pool Available Capacity Question
Jay Munsterman
2018-12-07 20:55:25 UTC
Hey all,
I hope this is a simple question, but I haven't been able to figure it out.
On one of our clusters there seems to be a disparity between the global
available space and the space available to pools.

$ ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    1528T     505T      1022T            66.94
POOLS:
    NAME            ID     USED       %USED     MAX AVAIL       OBJECTS
    fs_data          7     678T       85.79          112T     194937779
    fs_metadata      8     62247k         0        57495G         92973
    libvirt_pool    14     495G        0.57        86243G        127313

The global available space is 505T, the primary pool (fs_data, erasure code
k=2, m=1) lists 112T available. With 2,1 I would expect there to be ~338T
available (505 x .67). Seems we have a few hundred TB missing. Thoughts?
Thanks,
jay
Stefan Kooman
2018-12-08 13:35:22 UTC
Post by Jay Munsterman
Hey all,
I hope this is a simple question, but I haven't been able to figure it out.
On one of our clusters there seems to be a disparity between the global
available space and the space available to pools.
$ ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    1528T     505T      1022T            66.94
POOLS:
    NAME            ID     USED       %USED     MAX AVAIL       OBJECTS
    fs_data          7     678T       85.79          112T     194937779
    fs_metadata      8     62247k         0        57495G         92973
    libvirt_pool    14     495G        0.57        86243G        127313
The global available space is 505T, the primary pool (fs_data, erasure code
k=2, m=1) lists 112T available. With 2,1 I would expect there to be ~338T
available (505 x .67). Seems we have a few hundred TB missing.
Thoughts?
Thanks,
jay
Your OSDs are imbalanced. Ceph calculates a pool's MAX AVAIL from the
most-full OSD, not from the cluster-wide average. I suggest you check this
presentation by Dan van der Ster:
https://www.slideshare.net/mobile/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
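
A quick way to see that spread (a rough sketch; the exact column layout
differs a bit between releases):

$ ceph osd df
# Shows per-OSD SIZE, USE, AVAIL, %USE, VAR and PGS, plus a MIN/MAX VAR
# and STDDEV summary at the bottom. A wide gap between the emptiest and
# the fullest OSD is what shrinks MAX AVAIL, since a pool can only grow
# until its fullest OSD hits the full ratio.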

If you are running Ceph Luminous with Luminous-only clients: enable upmap
for balancing and enable the balancer module.
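
Roughly like this (a minimal sketch; check the balancer documentation for
your exact release before running it):

$ ceph features                                     # confirm no pre-Luminous clients remain
$ ceph osd set-require-min-compat-client luminous   # refuses if older clients are still connected
$ ceph mgr module enable balancer
$ ceph balancer mode upmap
$ ceph balancer on
$ ceph balancer status                              # shows the active mode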

Gr. Stefan

Jay Munsterman
2018-12-09 16:12:51 UTC
Thanks, great presentation and that explains it. And some interesting ideas
on using upmap in different ways.

Our cluster is Luminous. Does anyone know the mapping of ceph client
version to CentOS kernel? It looks like Red Hat has a knowledge base article
on the subject available to customers. Running "ceph features" in our
environment would indicate a number of clients on Jewel. I am guessing that
is the standard for the CentOS 7.x kernel.
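
For reference, the check I mean (the grouping in the JSON output may look
slightly different per release):

$ ceph features
# JSON grouped by daemon/client type; each group lists the release its
# feature bits map to ("jewel", "luminous", ...) and the number of
# connections ("num"). Kernel clients are classified by feature bits, not
# by kernel version, so the mapping back to a CentOS kernel is indirect.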

jay
Konstantin Shalygin
2018-12-10 04:43:23 UTC
Post by Jay Munsterman
Our cluster is Luminous. Does anyone know the mapping of ceph client
version to CentOS kernel? It looks like Red Hat has a knowledge base article
on the subject available to customers. Running "ceph features" in our
environment would indicate a number of clients on Jewel. I am guessing that
is the standard for the CentOS 7.x kernel.
The CentOS 7.5 kernel supports the upmap feature.

k
Jay Munsterman
2018-12-10 18:04:41 UTC
Thanks, Konstantin. Just verified in the lab that CentOS 7.6 clients can
access CephFS with upmap enabled.
Paul Emmerich
2018-12-09 19:05:22 UTC
Post by Jay Munsterman
Thanks, great presentation and that explains it. And some interesting ideas
on using upmap in different ways.
Our cluster is Luminous. Does anyone know the mapping of ceph client
version to CentOS kernel? It looks like Red Hat has a knowledge base article
on the subject available to customers. Running "ceph features" in our
environment would indicate a number of clients on Jewel. I am guessing that
is the standard for the CentOS 7.x kernel.
Probably, yes. Kernels >= 4.13 should support it, but CentOS kernels
often carry lots of backports, so it might work with an earlier kernel too.
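
If in doubt, a client-side check plus a test mount (monitor address and
secret file below are placeholders) settles it:

$ uname -r              # CentOS 7.5 ships 3.10.0-862.*, 7.6 ships 3.10.0-957.*
$ modinfo ceph | head   # confirms the in-kernel CephFS client is present
# once the cluster requires luminous clients and upmap is on, try a mount:
$ sudo mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret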
--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com