Youzhong Yang
2018-06-29 02:43:48 UTC
For RGW, compression works very well. We use RGW to store crash dumps, and in
most cases the compression ratio is about 2.0 ~ 4.0.
I tried to enable compression for the cephfs data pool:
# ceph osd pool get cephfs_data all | grep ^compression
compression_mode: force
compression_algorithm: lz4
compression_required_ratio: 0.95
compression_max_blob_size: 4194304
compression_min_blob_size: 4096
(we built ceph packages and enabled lz4.)
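For reference, the pool options shown above were set with commands along these
lines (the pool name matches ours; the blob-size values are just what we
happened to use):

```shell
# Enable BlueStore compression on the CephFS data pool
ceph osd pool set cephfs_data compression_mode force
ceph osd pool set cephfs_data compression_algorithm lz4
ceph osd pool set cephfs_data compression_required_ratio 0.95
ceph osd pool set cephfs_data compression_max_blob_size 4194304
ceph osd pool set cephfs_data compression_min_blob_size 4096
```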
It doesn't seem to work. I copied an 8.7 GB folder to cephfs, and ceph df says
it used 8.7 GB:
***@ceph-admin:~# ceph df
GLOBAL:
    SIZE    AVAIL   RAW USED   %RAW USED
    16 TiB  16 TiB  111 GiB    0.69
POOLS:
    NAME             ID  USED     %USED  MAX AVAIL  OBJECTS
    cephfs_data      1   8.7 GiB  0.17   5.0 TiB    360545
    cephfs_metadata  2   221 MiB  0      5.0 TiB    77707
I know this folder compresses to ~4.0 GB under ZFS lz4 compression.
Am I missing anything? How do I make cephfs compression work? Is there any
trick?
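For what it's worth, my understanding is that pool-level USED in ceph df may
reflect logical (pre-compression) bytes, so I also tried to look at the
BlueStore compression counters directly on an OSD. Something like this (osd.0
is just an example id; run it on the host where that OSD lives):

```shell
# Dump BlueStore perf counters for one OSD and pick out the compression stats.
# bluestore_compressed           - compressed bytes currently stored
# bluestore_compressed_allocated - disk space allocated for compressed blobs
# bluestore_compressed_original  - original (uncompressed) size of that data
ceph daemon osd.0 perf dump | grep bluestore_compressed
```

If those counters stay at zero after writing data, the OSDs are not compressing
anything at all, which would point at the pool options not reaching BlueStore
rather than at a poor compression ratio.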
By the way, I am evaluating Ceph Mimic v13.2.0.
Thanks in advance,
--Youzhong