Discussion:
Is anyone else still experiencing memory issues with 12.2.2 and BlueStore?
Tzachi Strul
2018-02-10 10:44:40 UTC
Hi,
I know 12.2.2 was supposed to fix the BlueStore memory leak issues, but we
are still seeing some odd behavior.

Our OSDs flap once in a while... sometimes it doesn't stop until we restart
all OSDs on the same server, or on all nodes.
In our syslog we see messages like "failed: Cannot allocate memory" from
all kinds of processes.

In addition, we sometimes get this error when trying to run ceph
commands:
Traceback (most recent call last):
File "/usr/bin/ceph", line 125, in <module>
import rados
ImportError: libceph-common.so.0: cannot map zero-fill pages
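As far as I understand, "cannot map zero-fill pages" means an anonymous mmap failed, i.e. the kernel refused to grant more memory. A quick way to see how close a host is to its commit limit (standard sysctl/procfs, nothing Ceph-specific):

# show the kernel overcommit policy and how committed the host already is
sysctl vm.overcommit_memory vm.overcommit_ratio
grep -E '^(MemAvailable|CommitLimit|Committed_AS)' /proc/meminfo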

It looks like a memory leak: when we restart all OSDs the behavior
stops for a few hours or days.
We have 8 OSD servers, each with 16 SSD disks and 64 GB of RAM. The BlueStore
cache is set to the default (3 GB per SSD OSD).
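If cache sizing is part of the problem, note that 16 OSDs x 3 GB of cache can account for 48 GB of a 64 GB host by itself, so shrinking the cache is an easy experiment. A minimal ceph.conf sketch (bluestore_cache_size_ssd should be the Luminous option name; the 1 GB value is only an example to test with, not a recommendation):

[osd]
# cap the BlueStore cache per SSD-backed OSD at 1 GiB (default is 3 GiB)
bluestore_cache_size_ssd = 1073741824

The new size takes effect as each OSD is restarted.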

As a result, our cluster is almost constantly rebuilding, and that impacts
performance.

***@ecprdbcph10-opens:~# ceph daemon osd.1 dump_mempools
{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 5105472,
        "bytes": 5105472
    },
    "bluestore_cache_data": {
        "items": 68868,
        "bytes": 1934663680
    },
    "bluestore_cache_onode": {
        "items": 152640,
        "bytes": 102574080
    },
    "bluestore_cache_other": {
        "items": 16920009,
        "bytes": 371200513
    },
    "bluestore_fsck": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_txc": {
        "items": 3,
        "bytes": 2160
    },
    "bluestore_writing_deferred": {
        "items": 33,
        "bytes": 265015
    },
    "bluestore_writing": {
        "items": 19,
        "bytes": 6403820
    },
    "bluefs": {
        "items": 303,
        "bytes": 12760
    },
    "buffer_anon": {
        "items": 32958,
        "bytes": 14087657
    },
    "buffer_meta": {
        "items": 68996,
        "bytes": 6071648
    },
    "osd": {
        "items": 187,
        "bytes": 2255968
    },
    "osd_mapbl": {
        "items": 0,
        "bytes": 0
    },
    "osd_pglog": {
        "items": 514238,
        "bytes": 152438172
    },
    "osdmap": {
        "items": 35699,
        "bytes": 823040
    },
    "osdmap_mapping": {
        "items": 0,
        "bytes": 0
    },
    "pgmap": {
        "items": 0,
        "bytes": 0
    },
    "mds_co": {
        "items": 0,
        "bytes": 0
    },
    "unittest_1": {
        "items": 0,
        "bytes": 0
    },
    "unittest_2": {
        "items": 0,
        "bytes": 0
    },
    "total": {
        "items": 22899425,
        "bytes": 2595903985
    }
}
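Note the total: this one OSD already holds ~2.6 GB in mempools, so 16 OSDs on a 64 GB server come to ~41 GB of mempool accounting alone, before allocator overhead and everything outside the mempools. A quick per-host sum (a sketch; assumes jq is installed and the admin sockets are under /var/run/ceph):

# add up "total.bytes" from every local OSD's mempool dump
for sock in /var/run/ceph/ceph-osd.*.asok; do
  ceph --admin-daemon "$sock" dump_mempools | jq '.total.bytes'
done | awk '{s+=$1} END {printf "%.1f GiB across %d OSDs\n", s/2^30, NR}'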


Any help would be appreciated.
Thank you
--
*Tzachi Strul*

*Storage DevOps *// *Kenshoo*

*Office* +972 73 2862-368 // *Mobile* +972 54 755 1308

[image: Kenshoo logo] <http://kenshoo.com/>
Marc Roos
2018-02-10 10:50:34 UTC
How do you know if you have an issue?


-----Original Message-----
From: Tzachi Strul [mailto:***@kenshoo.com]
Sent: Saturday, 10 February 2018 11:45
To: ceph-***@ceph.com
Subject: [ceph-users] Is anyone else still experiencing memory issues
with 12.2.2 and BlueStore?
Alexandre DERUMIER
2018-02-10 11:08:34 UTC
Hi,

My OSD memory is still growing slowly.

Default config, with SSD OSDs.

They start around 5 GB and, after 1-2 months, are near 8 GB.

(Maybe related to fragmentation?)



USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
ceph 1718009 2.5 11.7 8542012 7725992 ? Ssl 2017 2463:28 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
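If fragmentation is the suspect, tcmalloc's own statistics can be compared against that RSS; "heap stats" prints the allocator breakdown (including free pages it has not returned to the OS) and "heap release" asks it to give them back:

# inspect and release the tcmalloc heap of osd.5 (matches the process above)
ceph tell osd.5 heap stats
ceph tell osd.5 heap release

If RSS drops noticeably after the release, the growth is allocator fragmentation rather than a leak.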




***@ceph4-1:~# ceph daemon osd.5 dump_mempools
{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 98449088,
        "bytes": 98449088
    },
    "bluestore_cache_data": {
        "items": 759,
        "bytes": 17276928
    },
    "bluestore_cache_onode": {
        "items": 884140,
        "bytes": 594142080
    },
    "bluestore_cache_other": {
        "items": 116375567,
        "bytes": 2072801299
    },
    "bluestore_fsck": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_txc": {
        "items": 6,
        "bytes": 4320
    },
    "bluestore_writing_deferred": {
        "items": 99,
        "bytes": 1190045
    },
    "bluestore_writing": {
        "items": 11,
        "bytes": 4510159
    },
    "bluefs": {
        "items": 1202,
        "bytes": 64136
    },
    "buffer_anon": {
        "items": 76863,
        "bytes": 21327234
    },
    "buffer_meta": {
        "items": 910,
        "bytes": 80080
    },
    "osd": {
        "items": 328,
        "bytes": 3956992
    },
    "osd_mapbl": {
        "items": 0,
        "bytes": 0
    },
    "osd_pglog": {
        "items": 1118050,
        "bytes": 286277600
    },
    "osdmap": {
        "items": 6073,
        "bytes": 551872
    },
    "osdmap_mapping": {
        "items": 0,
        "bytes": 0
    },
    "pgmap": {
        "items": 0,
        "bytes": 0
    },
    "mds_co": {
        "items": 0,
        "bytes": 0
    },
    "unittest_1": {
        "items": 0,
        "bytes": 0
    },
    "unittest_2": {
        "items": 0,
        "bytes": 0
    },
    "total": {
        "items": 216913096,
        "bytes": 3100631833
    }
}
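Here bluestore_cache_other dominates (~2 GB of the ~3.1 GB total). To rank the pools quickly (a sketch; assumes jq is installed):

# list mempools by size, largest first
ceph daemon osd.5 dump_mempools | jq -r 'del(.total) | to_entries | sort_by(-.value.bytes) | .[] | "\(.value.bytes)\t\(.key)"' | head -5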

