Discussion:
[ceph-users] Thick provisioning
s***@turka.nl
2017-10-16 14:49:14 UTC
Hi,

I have deployed a Ceph cluster (Jewel). By default, all block devices
that are created are thin provisioned.

Is it possible to change this setting? I would like all created block
devices to be thick provisioned.

In front of the Ceph cluster, I am running OpenStack.

Thanks!

Sinan
Jason Dillaman
2017-10-17 17:38:51 UTC
There is no existing option to thick-provision images within RBD. When
an image is created or cloned, the only actions that occur are some
small metadata updates to describe the image. This keeps image
creation a quick, constant-time operation regardless of the image
size. Thick-provisioning an image would require writing data across
the entire image, and also ensuring discard support is disabled to
prevent the OS from releasing space back (and thus re-sparsifying the
image).
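
If someone really needed this behavior, the closest approximation I
can think of is to pre-write the whole image after creating it. A
rough sketch using the librbd Python bindings (the pool and image
names are just examples, and the chunked zero-fill is my own
illustration, not a supported feature):

import rados
import rbd

# Connect to the cluster and open the target pool.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')        # example pool name

size = 10 * 1024 ** 3                    # 10 GiB image
rbd.RBD().create(ioctx, 'thick-image', size)

# "Thick provision" by writing zeros across the whole image so that
# every backing RADOS object gets allocated. Writes are chunked;
# 4 MiB matches the default RBD object size.
chunk = bytes(4 * 1024 ** 2)
with rbd.Image(ioctx, 'thick-image') as image:
    offset = 0
    while offset < size:
        image.write(chunk, offset)
        offset += len(chunk)

ioctx.close()
cluster.shutdown()

And as noted above, this only sticks if discard/TRIM stays disabled in
the guest; otherwise the image will re-sparsify over time.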
--
Jason
Wido den Hollander
2017-10-18 06:19:58 UTC
Indeed. It makes me wonder why anybody would want it. It will:

- Impact recovery performance
- Impact scrubbing performance
- Utilize more space than needed

Why would you want to do this, Sinan?

Wido
Samuel Soulard
2017-10-18 10:33:17 UTC
I can only speak for some environments, but sometimes you want to make
sure that a cluster cannot fill up before you are able to add more
capacity.

Some organizations cannot purchase new capacity quickly; if
provisioning can never exceed the current capacity, you simply can't
run into that problem.

It may also come from the expectation that thick provisioning gives
better initial performance, as it does in some virtual machine
environments.

Having said all of this, isn't there a way to make sure the cluster
can accommodate the combined size of all RBD images that are created,
and to ensure the space is actually available? Service availability
may depend on the storage being able to deliver the capacity it has
promised.
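
As far as I know there is no built-in knob for this in Jewel, but a
pre-flight check can be scripted with the librbd Python bindings. A
rough sketch (the pool name, image name, and the assumed replication
factor of 3 are all placeholders):

import rados
import rbd

def provisioned_bytes(ioctx):
    # Sum the virtual (provisioned) size of every RBD image in a pool.
    total = 0
    for name in rbd.RBD().list(ioctx):
        with rbd.Image(ioctx, name, read_only=True) as image:
            total += image.size()
    return total

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')             # example pool name

new_image_size = 100 * 1024 ** 3              # proposed 100 GiB image
stats = cluster.get_cluster_stats()           # keys: kb, kb_used, kb_avail
raw_capacity = stats['kb'] * 1024

# Raw capacity has to be divided by the pool's replication factor;
# size=3 is assumed here ('ceph osd pool get <pool> size' has the
# real value).
usable = raw_capacity // 3

if provisioned_bytes(ioctx) + new_image_size > usable:
    raise RuntimeError('pool would be over-provisioned, refusing to create')
rbd.RBD().create(ioctx, 'new-image', new_image_size)

Running something like this from your provisioning tooling (or using a
Cinder quota on the OpenStack side) would give you the guarantee
without thick provisioning.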

I realize that running this type of environment is more costly; in
exchange, you can guarantee that your cluster will never fill up
unexpectedly.

Sam
s***@turka.nl
2017-10-18 19:41:11 UTC
Hi all,

Thanks for the replies.

The main reason I was looking for a thin/thick provisioning setting is
that I want to be sure that provisioned space cannot exceed the
cluster capacity.

With thin provisioning there is a risk that more space is provisioned
than the cluster can hold. If you monitor real usage closely this
should not be a problem, but in my experience, when there is no hard
limit, overprovisioning will happen at some point.
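
Perhaps a pool quota ('ceph osd pool set-quota') could serve as that
hard limit? As I understand it, it caps the bytes actually stored
rather than the bytes provisioned, but it would at least stop a pool
from filling the whole cluster. A sketch via the Python bindings (the
pool name and quota value are examples):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Cap the 'rbd' pool at 50 TiB of stored data (example value).
cmd = json.dumps({
    'prefix': 'osd pool set-quota',
    'pool': 'rbd',
    'field': 'max_bytes',
    'val': str(50 * 1024 ** 4),
})
ret, out, errs = cluster.mon_command(cmd, b'')
if ret != 0:
    raise RuntimeError(errs)
cluster.shutdown()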

Sinan
Adrian Saul
2017-10-19 00:04:06 UTC
I concur. At the moment we need to manually sum the RBD image sizes to
see how much we have "provisioned" versus what ceph df shows. In our
case we had a rapid run of provisioning new LUNs, but it took a while
for usage to catch up with what was provisioned as data was migrated
in: ceph df would show, say, only 20% of a pool used, while the actual
RBD allocation was nearer 80%.

I am not sure if it is workable, but a pool-level metric tracking the
total allocation of RBD images would be useful; see the sketch below.
I imagine it gets tricky with snapshots/clones, though.
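
For reference, a sketch of that manual summing with the librbd Python
bindings: diff_iterate() with no base snapshot visits only the
allocated extents, which is roughly what 'rbd du' computes (the pool
name is an example, and snapshots/clones are ignored here):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')            # example pool name

provisioned = 0
allocated = 0
for name in rbd.RBD().list(ioctx):
    with rbd.Image(ioctx, name, read_only=True) as image:
        size = image.size()
        provisioned += size
        # With no base snapshot, diff_iterate() reports just the
        # extents that are actually allocated in the image.
        chunks = []
        image.diff_iterate(0, size, None,
                           lambda offs, length, exists: chunks.append(length))
        allocated += sum(chunks)

print('provisioned: %.1f GiB' % (provisioned / 1024.0 ** 3))
print('allocated:   %.1f GiB' % (allocated / 1024.0 ** 3))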