Discussion:
[ceph-users] CEPH DR RBD Mount
Vikas Rana
2018-11-27 13:48:48 UTC
Hi There,

We are replicating a 100TB RBD image to DR site. Replication works fine.

rbd --cluster cephdr mirror pool status nfs --verbose

health: OK
images: 1 total
    1 replaying

dir_research:
  global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
  state:       up+replaying
  description: replaying, master_position=[object_number=591701, tag_tid=1, entry_tid=902879873], mirror_position=[object_number=446354, tag_tid=1, entry_tid=727653146], entries_behind_master=175226727
  last_update: 2018-11-14 16:17:23




We then use rbd-nbd to map the RBD image at the DR site, but when we try to
mount it, we get:


# mount /dev/nbd2 /mnt

mount: block device /dev/nbd2 is write-protected, mounting read-only

mount: /dev/nbd2: can't read superblock
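[Editor's note: a minimal sketch of retrying the map read-only. The pool (nfs) and image name (dir_research) are taken from the status output above; the nbd device number and the XFS mount options are assumptions about this setup.]

```shell
# A non-primary mirror image is not writable, so map it read-only explicitly:
rbd-nbd map --read-only nfs/dir_research --cluster cephdr

# If the filesystem is XFS, log recovery cannot run on a read-only device,
# so skip it at mount time:
mount -o ro,norecovery /dev/nbd0 /mnt
```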



We are using Ceph 12.2.8 (Luminous).


Any help will be greatly appreciated.


Thanks,

-Vikas
David C
2018-11-30 16:24:11 UTC
Is that one big xfs filesystem? Are you able to mount with krbd?
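[Editor's note: a hedged sketch of the krbd check David suggests. Pool and image names are taken from the status output; the rbd device number and XFS mount options are assumptions.]

```shell
# Map via the kernel RBD client instead of rbd-nbd, read-only since the
# image is non-primary at the DR site:
rbd map --read-only nfs/dir_research --cluster cephdr

# For XFS, norecovery skips journal replay, which a read-only device
# cannot perform:
mount -o ro,norecovery /dev/rbd0 /mnt
```

Note that krbd may refuse to map images using features the kernel client does not support (e.g. object-map on older kernels), so the map itself is a useful data point either way.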
_______________________________________________
ceph-users mailing list
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Jason Dillaman
2018-12-03 14:52:48 UTC
FYI -- that "entries_behind_master=175226727" bit is telling you that
it has only mirrored about 80% of the recent changes from primary to
non-primary.

Was the filesystem already in place? Are there any partitions/LVM
volumes in use on the device? Did you map the volume read-only?
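[Editor's note: Jason's three questions can be answered on the DR host with standard tools; a sketch only, using the device name from the error above.]

```shell
lsblk /dev/nbd2          # any partitions or LVM PVs stacked on the device?
blkid -p /dev/nbd2       # is there a recognizable filesystem superblock?
rbd-nbd list-mapped      # which image was mapped, and to which nbd device?
```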
--
Jason