Discussion:
[ceph-users] Ceph pure ssd strange performance.
Darius Kasparavičius
2018-11-20 09:30:14 UTC
Hello,


I'm running some tests on a pure SSD pool with Mimic and BlueStore.
The strange thing is that when running fio against RBD images I see a
huge difference between client and disk I/O. For pure write
performance I'm getting about ~20k IOPS on the client side but about
~300k IOPS on the SSD side. The OSDs were deployed with LVM and no
separate DB/WAL partitions. Should I try creating the OSDs with a
separate DB partition?
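
For anyone reproducing this, a minimal fio job against an RBD image
could look like the sketch below (fio's rbd engine; the cephx user,
pool, and image names are placeholders):

    [rbd-randwrite]
    ioengine=rbd
    clientname=admin      # cephx user, without the "client." prefix
    pool=testpool         # placeholder pool name
    rbdname=testimg       # placeholder image name
    rw=randwrite          # pure random writes
    bs=4k
    iodepth=32
    runtime=60
    time_based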

I have run an HDD Ceph cluster with BlueStore and NVMe DB partitions,
and there are no such differences in I/O there.
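
One way to see both sides of the comparison: client IOPS come from
fio's own output, and per-device IOPS can be watched on each OSD host
with iostat from sysstat (the device path below is a placeholder):

    # -x = extended stats, refreshed every second; w/s = write IOPS
    iostat -x 1 /dev/sdb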

Regards,

Darius
Darius Kasparavičius
2018-11-20 11:47:21 UTC
Update.

So I rebuilt the OSDs with a separate DB partition on the SSD, and
I/O to the disks is now what I expected: about ~3x the client I/O.
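
For reference, creating an OSD with a separate DB partition looks
roughly like this with ceph-volume (partition paths are placeholders):

    # data and the RocksDB DB on separate partitions of the same SSD
    ceph-volume lvm create --data /dev/sdb1 --block.db /dev/sdb2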