Discussion:
[ceph-users] cephfs compression?
Youzhong Yang
2018-06-29 02:43:48 UTC
For RGW, compression works very well. We use RGW to store crash dumps; in
most cases the compression ratio is about 2.0 ~ 4.0.

I tried to enable compression for the cephfs data pool:

# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compression_required_ratio: 0.95
compression_max_blob_size: 4194304
compression_min_blob_size: 4096

(We built the Ceph packages ourselves with lz4 support enabled.)
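
For reference, those values would normally be applied per pool with something
like the following (assuming the pool is named cephfs_data, as above):

# ceph osd pool set cephfs_data compression_mode force
# ceph osd pool set cephfs_data compression_algorithm lz4
# ceph osd pool set cephfs_data compression_required_ratio 0.95
# ceph osd pool set cephfs_data compression_max_blob_size 4194304
# ceph osd pool set cephfs_data compression_min_blob_size 4096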

It doesn't seem to work. I copied an 8.7 GB folder to CephFS, and ceph df says
it used 8.7 GB:

root@ceph-admin:~# ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED   %RAW USED
    16 TiB   16 TiB   111 GiB    0.69
POOLS:
    NAME              ID   USED      %USED   MAX AVAIL   OBJECTS
    cephfs_data        1   8.7 GiB   0.17    5.0 TiB      360545
    cephfs_metadata    2   221 MiB   0       5.0 TiB       77707

I know this folder compresses to ~4.0 GB under ZFS lz4 compression.

Am I missing anything? How do I make CephFS compression work? Is there any
trick?

By the way, I am evaluating ceph mimic v13.2.0.

Thanks in advance,
--Youzhong
Richard Bade
2018-06-29 05:08:18 UTC
I'm using compression on a cephfs-data pool in Luminous. I didn't do
anything special:

$ sudo ceph osd pool get cephfs-data all | grep ^compression
compression_mode: aggressive
compression_algorithm: zlib

You can check how much compression you're getting on the OSDs:
$ for osd in `seq 0 11`; do echo osd.$osd; sudo ceph daemon osd.$osd \
    perf dump | grep 'bluestore_compressed'; done
osd.0
"bluestore_compressed": 686487948225,
"bluestore_compressed_allocated": 788659830784,
"bluestore_compressed_original": 1660064620544,
<snip>
osd.11
"bluestore_compressed": 700999601387,
"bluestore_compressed_allocated": 808854355968,
"bluestore_compressed_original": 1752045551616,

I can't speak for Mimic, but on Luminous v12.2.5 compression is definitely
working well with mostly default options.

-Rich
Post by Youzhong Yang
# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compression_required_ratio: 0.95
<snip>
It doesn't seem to work. I copied an 8.7 GB folder to CephFS, and ceph df
says it used 8.7 GB. Am I missing anything? How do I make CephFS
compression work? Is there any trick?
Richard Bade
2018-06-29 05:15:59 UTC
Oh, also: because the compression happens at the OSD level, you don't see it
in ceph df. You just see that your RAW USED is not increasing as much as
you'd expect. E.g.
$ sudo ceph df
GLOBAL:
    SIZE   AVAIL   RAW USED   %RAW USED
    785T   300T    485T       61.73
POOLS:
    NAME              ID   USED   %USED   MAX AVAIL     OBJECTS
    cephfs-metadata   11   185M   0       68692G            178
    cephfs-data       12   408T   75.26   134T        132641159

You can see that we've used 408 TB in the pool but only 485 TB RAW, rather
than the ~600 TB RAW I'd expect for my k=4, m=2 pool settings.
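As a rough sanity check on those numbers: with k=4, m=2 erasure coding,
408 TB of pool data would need 408 x 6/4 = 612 TB RAW without compression,
so the 485 TB actually used works out to roughly 20% saved at the RAW level.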
Post by Richard Bade
I'm using compression on a cephfs-data pool in Luminous. I didn't do
anything special.
You can check how much compression you're getting on the OSDs:
$ for osd in `seq 0 11`; do echo osd.$osd; sudo ceph daemon osd.$osd \
    perf dump | grep 'bluestore_compressed'; done
<snip>
I can't speak for Mimic, but on Luminous v12.2.5 compression is definitely
working well with mostly default options.
-Rich
Youzhong Yang
2018-06-29 13:41:03 UTC
Thanks Richard. Yes, it does seem to be working according to perf dump:

osd.6
"bluestore_compressed": 62622444,
"bluestore_compressed_allocated": 186777600,
"bluestore_compressed_original": 373555200,

It's very interesting that bluestore_compressed_allocated is approximately
50% of bluestore_compressed_original across all OSDs. Just curious - why is
that?
Post by Richard Bade
Oh, also: because the compression happens at the OSD level, you don't see it
in ceph df. You just see that your RAW USED is not increasing as much as
you'd expect.
<snip>
You can see that we've used 408 TB in the pool but only 485 TB RAW, rather
than the ~600 TB RAW I'd expect for my k=4, m=2 pool settings.
<snip>