[ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty
Klimenko, Roman
2018-11-15 13:57:07 UTC
Hi everyone!

As I noticed, ceph-volume lacks Ubuntu Trusty compatibility https://tracker.ceph.com/issues/23496

So, I can't follow this instruction http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/

Do I have any other option to migrate my Filestore osds (Luminous 12.2.9) to Bluestore?

P.S This is a test environment, so I can try anything
Alfredo Deza
2018-11-15 17:16:48 UTC
On Thu, Nov 15, 2018 at 8:57 AM Klimenko, Roman <***@itkey.com> wrote:
>
> Hi everyone!
>
> As I noticed, ceph-volume lacks Ubuntu Trusty compatibility https://tracker.ceph.com/issues/23496
>
> So, I can't follow this instruction http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
>
> Do I have any other option to migrate my Filestore osds (Luminous 12.2.9) to Bluestore?
>
> P.S This is a test environment, so I can try anything

You could just use ceph-disk, but the way ceph-volume does bluestore
is more robust. I would try really hard to upgrade the OS so that you
can rely on ceph-volume.
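
For reference, a rough sketch of what the ceph-disk route could look like on
Luminous (device names are placeholders, so double-check the flags against
ceph-disk --help on your version):

# wipe the old Filestore disk (destroys all data on it)
ceph-disk zap /dev/sdb
# prepare a Bluestore OSD, optionally with separate DB/WAL on an SSD
ceph-disk prepare --bluestore --block.db /dev/sdg --block.wal /dev/sdg /dev/sdb
# activate the data partition that prepare just created
ceph-disk activate /dev/sdb1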

Klimenko, Roman
2018-11-16 07:09:13 UTC
Ok, thx, I'll try ceph-disk.
________________________________________
From: Alfredo Deza <***@redhat.com>
Sent: 15 November 2018, 20:16
To: Klimenko, Roman
Cc: ceph-***@lists.ceph.com
Subject: Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty

Zhenshi Zhou
2018-11-16 08:01:59 UTC
Hi Klimenko,

I did a migration from Filestore to Bluestore on CentOS 7 with Ceph version
12.2.5. As it is a production environment, I removed and recreated the OSDs
one server at a time, online. Although I migrated on CentOS, I created the
OSDs manually, so you can give it a try.

Apart from one RAID1 disk for the system, my server has one SSD (sdg) for
WAL/DB and five SATA disks (sdb-sdf) for storage. Here are my steps for
replacing osd.16 and osd.17:

1. set the OSDs out and remove them from the cluster
ceph osd out 16 && ceph osd out 17
systemctl stop ceph-osd@16 && systemctl stop ceph-osd@17
ceph osd crush remove osd.16 && ceph osd crush remove osd.17
ceph osd rm osd.16 && ceph osd rm osd.17
P.S. If you only replace a few disks, you don't have to wait until recovery
is done, as the cluster will heal itself.
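You can keep an eye on the recovery while you work, e.g. with:

ceph -s
ceph health detail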

2. clear the partition information on the disks
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc

3. make meta partitions
sgdisk --new=1:0:+1GB --change-name=1:osd_data_16 \
    --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdb
sgdisk --largest-new=2 --change-name=2:bluestore_block_16 \
    --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdb
sgdisk --new=1:0:+1GB --change-name=1:osd_data_17 \
    --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdc
sgdisk --largest-new=2 --change-name=2:bluestore_block_17 \
    --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdc

4. format partitions
mkfs -t xfs -f -i size=2048 -- /dev/sdb1
mkfs -t xfs -f -i size=2048 -- /dev/sdc1

5. make wal/db partitions
sgdisk --new=1:0:+1GB --change-name=1:bluestore_block_db_16 \
    --partition-guid=1:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=2:0:+8GB --change-name=2:bluestore_block_wal_16 \
    --partition-guid=2:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=3:0:+1GB --change-name=3:bluestore_block_db_17 \
    --partition-guid=3:$(uuidgen) --mbrtogpt -- /dev/sdg
sgdisk --new=4:0:+8GB --change-name=4:bluestore_block_wal_17 \
    --partition-guid=4:$(uuidgen) --mbrtogpt -- /dev/sdg

6. create the osd ids (once per OSD; the freed ids 16 and 17 should be handed back)
ceph osd create
ceph osd create

7. prepare the osd
mount /dev/sdb1 /var/lib/ceph/osd/ceph-16
mount /dev/sdc1 /var/lib/ceph/osd/ceph-17
echo "bluestore" > /var/lib/ceph/osd/ceph-16/type
echo "bluestore" > /var/lib/ceph/osd/ceph-17/type

8. edit ceph.conf

[osd.16]
host = ceph-osd1
osd data = /var/lib/ceph/osd/ceph-16/
bluestore block path = /dev/disk/by-partlabel/bluestore_block_16
bluestore block db path = /dev/disk/by-partlabel/bluestore_block_db_16
bluestore block wal path = /dev/disk/by-partlabel/bluestore_block_wal_16

[osd.17]
host = ceph-osd1
osd data = /var/lib/ceph/osd/ceph-17/
bluestore block path = /dev/disk/by-partlabel/bluestore_block_17
bluestore block db path = /dev/disk/by-partlabel/bluestore_block_db_17
bluestore block wal path = /dev/disk/by-partlabel/bluestore_block_wal_17

9. make keys

ceph-osd -i 16 --mkkey --mkfs
ceph-osd -i 17 --mkkey --mkfs

10. authorize

ceph auth add osd.16 osd 'allow *' mon 'allow profile osd' \
    mgr 'allow profile osd' -i /var/lib/ceph/osd/ceph-16/keyring
ceph auth add osd.17 osd 'allow *' mon 'allow profile osd' \
    mgr 'allow profile osd' -i /var/lib/ceph/osd/ceph-17/keyring

11. edit crushmap
ceph osd crush add 16 1 host=ceph-osd1
ceph osd crush add 17 1 host=ceph-osd1

12. start OSDs
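
On a systemd host that would be something like:

systemctl start ceph-osd@16 && systemctl start ceph-osd@17
ceph osd tree    # check that they come back up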

Last but not least, you may need to edit udev rules as well as chown the ceph
OSD directories so that ceph has permission to write to the disks and
directories.
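
For example (adjust the partition names to your own layout):

chown -R ceph:ceph /var/lib/ceph/osd/ceph-16 /var/lib/ceph/osd/ceph-17
chown ceph:ceph /dev/sdb2 /dev/sdc2 /dev/sdg1 /dev/sdg2 /dev/sdg3 /dev/sdg4

The udev rule is what makes the /dev ownership stick across reboots.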

I hope it helps. Thanks

Klimenko, Roman <***@itkey.com> wrote on Fri, 16 Nov 2018 at 8:35 AM:

> Hi everyone!
>
> As I noticed, ceph-volume lacks Ubuntu Trusty compatibility
> https://tracker.ceph.com/issues/23496
>
> So, I can't follow this instruction
> http://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/
>
> Do I have any other option to migrate my Filestore osds (Luminous 12.2.9)
> to Bluestore?
>
> P.S This is a test environment, so I can try anything