Discussion:
[ceph-users] RGW performance with lots of objects
Robert Stanford
2018-11-27 18:10:02 UTC
In the old days, when I first installed Ceph with RGW, performance would
become very slow after storing 500+ million objects in my buckets. With
Luminous and index sharding, is this still a problem, or is it an old
problem that has been solved?

Regards
R
Mark Nelson
2018-11-27 18:28:10 UTC
Hi Robert,


Solved is probably a strong word. I'd say that things have improved.
Bluestore in general tends to handle large numbers of objects better than
filestore does for several reasons, including that it doesn't suffer from
PG directory splitting (though RocksDB compaction can become a bottleneck
with very large DBs and heavy metadata traffic). Bluestore also has less
overhead for OMAP operations, and so far we've generally seen higher OMAP
performance (OMAP is how bucket indexes are currently stored). Bucket
index sharding of course helps too.
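
To put a rough number on "sharding helps", here's a minimal
back-of-the-envelope sketch in Python. The ~100k objects-per-shard target
is an assumption on my part, borrowed from the default that dynamic
resharding aims for (rgw_max_objs_per_shard); check it against your
release before relying on it:

    # Rough bucket-index shard sizing. The 100k objects-per-shard target
    # is an assumption based on RGW's dynamic resharding default
    # (rgw_max_objs_per_shard); verify it for your release.
    OBJS_PER_SHARD = 100000

    def recommended_shards(expected_objects):
        # Ceiling division: enough shards to keep each near the target.
        return max(1, -(-expected_objects // OBJS_PER_SHARD))

    print(recommended_shards(500 * 1000 * 1000))  # 500M objects -> 5000 shards

On Luminous you can either let dynamic resharding grow the shard count on
its own or reshard a bucket by hand with radosgw-admin reshard add
--bucket=<name> --num-shards=<n> followed by radosgw-admin reshard process.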
One counterargument is that bluestore uses its key-value database
(RocksDB) a lot more aggressively than filestore does, and that could
have an impact on bucket indexes hosted on the same OSDs as user objects.
This gets complicated, though, and may primarily be an issue if all of
your OSDs are backed by NVMe and sustaining very high write traffic.
Ultimately I suspect that if you ran the same 500+ million object
single-bucket test, a modern bluestore deployment would probably be
faster than what you saw pre-Luminous with filestore. Whether or not it's
acceptable is a different question. For example, I've noticed in past
tests that delete performance improved dramatically when objects were
spread across a higher number of buckets. Probably the best course of
action will be to run tests and diagnose the behavior to see whether it's
going to meet your needs; a rough harness like the sketch below is one
way to start.
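
If you want to measure that yourself, a minimal boto3 sketch along these
lines is one place to start; the endpoint, credentials, bucket names, and
object count are all placeholders to adapt, and error handling and
parallelism are left out for brevity:

    # Spread objects across N buckets via a key hash, then time deletes.
    # Endpoint and credentials are placeholders; scale the object count
    # up (and add parallelism) for a meaningful test.
    import hashlib
    import time

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
        aws_access_key_id="ACCESS_KEY",              # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    NUM_BUCKETS = 16  # rerun with 1 to compare the single-bucket case
    buckets = ["perf-test-%d" % i for i in range(NUM_BUCKETS)]
    for name in buckets:
        s3.create_bucket(Bucket=name)

    def bucket_for(key):
        # Stable hash so writes and deletes agree on each key's bucket.
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return buckets[digest % NUM_BUCKETS]

    keys = ["obj-%08d" % i for i in range(10000)]
    for key in keys:
        s3.put_object(Bucket=bucket_for(key), Key=key, Body=b"x")

    start = time.time()
    for key in keys:
        s3.delete_object(Bucket=bucket_for(key), Key=key)
    print("deletes/sec: %.1f" % (len(keys) / (time.time() - start)))

Comparing the delete rate at NUM_BUCKETS = 1 against a larger count
should show the spread-across-buckets effect described above.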


Thanks,

Mark