I can only speak for some environments, but sometimes you want to make
sure that a cluster cannot fill up before you can add more capacity.
Some organizations are unable to purchase new capacity quickly, so by
making sure you can never exceed your current capacity you avoid running
into problems.
It may also come from the understanding that thick provisioning provides
better performance initially, as is the case in virtual machine environments.
Having said all of this, isn't there a way to make sure the cluster can
accommodate the size of all RBD images that are created, and ensure the
space is available for them? Some services' availability might depend on
the storage being able to provide the necessary capacity.
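For what it's worth, the total provisioned size can at least be checked from
the outside. Below is a minimal sketch using the python-rados/python-rbd
bindings; the conffile path and the pool name 'rbd' are assumptions for the
example, and it compares virtual image sizes against raw capacity without
accounting for replication:

import rados
import rbd

# Connect to the cluster; the conffile path is an assumption for this sketch.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')  # pool name assumed to be 'rbd'
    try:
        # Sum the provisioned (virtual) size of every image in the pool.
        provisioned = 0
        for name in rbd.RBD().list(ioctx):
            image = rbd.Image(ioctx, name, read_only=True)
            try:
                provisioned += image.size()
            finally:
                image.close()
    finally:
        ioctx.close()

    # Raw cluster capacity in bytes (ignores replication overhead).
    raw_capacity = cluster.get_cluster_stats()['kb'] * 1024
    print("provisioned: %d bytes, raw capacity: %d bytes"
          % (provisioned, raw_capacity))
    if provisioned > raw_capacity:
        print("WARNING: images are over-provisioned relative to raw capacity")
finally:
    cluster.shutdown()

Dividing the raw capacity by the pool's replica count would give a number
closer to the real headroom.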
I'm assuming this all comes from an understanding that it is more costly
to run this type of environment; however, you can also guarantee that you
will never unexpectedly fill up your cluster.
Sam
Post by Jason Dillaman
There is no existing option to thick provision images within RBD. When
an image is created or cloned, the only actions that occur are some
small metadata updates to describe the image. This allows image
creation to be a quick, constant time operation regardless of the
image size. To thick provision the entire image would require writing
data to the entire image and ensuring discard support is disabled to
prevent the OS from releasing space back (and thus re-sparsifying the
image).
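(To illustrate the above: a rough sketch of pre-writing an existing image
with the python-rbd bindings. The pool and image names are made up for the
example, and it does nothing about discard inside the guest, so the image
can become sparse again later.)

import rados
import rbd

CHUNK = 4 * 1024 * 1024          # write in 4 MiB chunks
zeros = b'\0' * CHUNK

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')        # pool name assumed
    image = rbd.Image(ioctx, 'thick-image')  # image name assumed, created beforehand
    try:
        size = image.size()
        offset = 0
        # Touch every byte so the backing RADOS objects get allocated.
        while offset < size:
            length = min(CHUNK, size - offset)
            image.write(zeros[:length], offset)
            offset += length
    finally:
        image.close()
        ioctx.close()
finally:
    cluster.shutdown()

Discard support would also have to stay disabled on the client side, as
noted above, or the guest can hand the space straight back.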
Indeed. It makes me wonder why anybody would want it. It will:
- Impact recovery performance
- Impact scrubbing performance
- Utilize more space than needed
Why would you want to do this, Sinan?
Wido
Post by Jason Dillaman
Post by s***@turka.nl
Hi,
I have deployed a Ceph cluster (Jewel). By default all block devices that
are created are thin provisioned.
Is it possible to change this setting? I would like all created block
devices to be thick provisioned.
In front of the Ceph cluster, I am running OpenStack.
Thanks!
Sinan
--
Jason