Discussion: [ceph-users] Errors when creating new pool
Orbiting Code, Inc.
2018-12-05 19:00:02 UTC
Hello Everyone,

I have a newly re-created Ceph cluster and cannot create a new pool. I'm
using the following syntax, which has worked without issue in the past:

ceph osd pool create rbd 1024 1024

The resulting error is:

"Error ERANGE: For better initial performance on pools expected to store
a large number of objects, consider supplying the expected_num_objects
parameter when creating the pool."

I have read the documentation, which refers to "expected_num_objects",
but cannot figure out what the correct syntax should look like. I have
tried the following variation:

ceph osd pool create rbd 1024 1024 replicated not_sure_what_to_define_for_the_crush_rule_set_name 0

But this results in an error stating that the crush rule set doesn't
exist. I have not had to define the crush rule set in previous testing.

If anyone can shed some light on this, I would be most appreciative.

Thank you,
Todd
--
Paul Emmerich
2018-12-05 20:17:27 UTC
I think this error is new in 12.2.10, but it should only show up when
using Filestore OSDs. Since you mention that the cluster is new: are you
not using Bluestore?
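
(One way to double-check, for what it's worth: an OSD reports its
backend in its metadata, e.g.

ceph osd metadata 0 | grep osd_objectstore

where 0 is the id of any of your OSDs; it will say "filestore" or
"bluestore".)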

That being said: the default crush rule name is "replicated_rule", so
"ceph osd pool create <name> <pg> <pg> replicated_rule
<expected_objects>" is the right way to create a pool on a filestore
cluster now.
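
For example, if you expect the pool to hold on the order of a million
objects (the number is just a placeholder estimate, not something I can
pick for you):

ceph osd pool create rbd 1024 1024 replicated_rule 1000000

Passing 0 as the last argument means "no estimate", which is the
default.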

I think there's some room for improvement from a user experience point
of view...

Paul
--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

Paul Emmerich
2018-12-05 23:18:43 UTC
Re-adding the mailing list.

I've had a quick look at the code, and the logic for
expected_num_objects seems broken; it uses the wrong way to detect
Filestore OSDs.

I've opened an issue: http://tracker.ceph.com/issues/37532

The new error is just that you probably didn't restart your mons after
setting this option. Try to run

ceph tell mon.\* injectargs '--osd_pool_default_size=1'
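
or simply restart the mons so they re-read ceph.conf, e.g. with a
systemd deployment:

systemctl restart ceph-mon.target

You can then verify what a mon is actually running with via its admin
socket on the mon host:

ceph daemon mon.<id> config show | grep osd_pool_default_size

(<id> being your mon's name, often its hostname.)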
--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
Post by Orbiting Code, Inc.
[osd1][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdg
[osd1][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-5be28a76-3e74-4bcf-bb9e-c6662b961a20/osd-block-b75f1905-fb4c-4ecc-ac3a-391a755b364a --path /var/lib/ceph/osd/ceph-4
[osd1][DEBUG ] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid b75f1905-fb4c-4ecc-ac3a-391a755b364a --setuser ceph --setgroup ceph
From this, I'm assuming that Bluestore is the default object store now,
since I did not specify it when adding the OSDs.
ceph osd pool create kvm 1024 1024 replicated_rule 100
Error ERANGE: pg_num 1024 size 3 would mean 3072 total pgs, which exceeds max 2000 (mon_max_pg_per_osd 250 * num_in_osds 8)
In my ceph.conf file, I have "osd pool default size = 1" for this test
cluster, but in the error above, a size of 3 is coming from some
unknown place.

Also, I'm at a loss as to how to estimate expected_num_objects, so I
picked an arbitrary value of 100. I also tried 0, which is the default
according to the documentation.
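
(To spell out the arithmetic in that error: pg_num 1024 * size 3 = 3072
PG instances, against a cap of mon_max_pg_per_osd 250 * 8 in OSDs =
2000.)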
Thank you again,
Todd