Jay Janardhan
2014-05-19 13:42:06 UTC
(Sorry if this is a duplicate message; the email server is acting up this
morning.)
I'm following the quick start guide and have a Ceph cluster with three nodes.
When I try to map an image to a block device, the command hangs. It appears
to be a kernel hang, since the only way I could get out of it was a hard
reset of the machine. My configuration is below. Any help is greatly
appreciated.
The command on the ceph-client node (this is what hangs):
$ sudo rbd map foo1 --pool rbd --name client.admin
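In case it's useful, these are the checks I can run from the client while the
map hangs (the monitor address is taken from the ceph status output further
down; nc is just a plain TCP probe, not a Ceph tool):
$ ceph -s --name client.admin     # confirms the userspace client can reach the monitors
$ nc -zv 192.168.56.102 6789      # confirms the monitor TCP port is reachable from the client
$ tail -f /var/log/syslog         # watch for kernel messages while the map attempt runs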
*ceph-client node info:*
$ rbd info foo1
rbd image 'foo1':
        size 4096 MB in 1024 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1050.74b0dc51
        format: 1
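For reference, the image was created per the quick start, roughly like this
(size is in MB; I believe format 1 is the default on this release and the
flag is shown only to be explicit, so correct me if I have the flag wrong):
$ rbd create foo1 --size 4096 --pool rbd --image-format 1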
Kernel and Ubuntu release:
$ uname -r
3.6.9-030609-generic
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04 LTS
Release: 12.04
Codename: precise
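The rbd kernel module does seem to be present on the client (per the syslog
lines below); this is how I checked, and how I'd load it by hand if it weren't:
$ lsmod | grep rbd        # should list rbd and libceph once the module is loaded
$ sudo modprobe rbd       # loads the module manually if it is missing
$ modinfo rbd | head -3   # shows which module file the kernel picked up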
logs from /var/log/syslog:
May 17 14:12:48 ceph-client kernel: [  128.866445] Key type ceph registered
May 17 14:12:48 ceph-client kernel: [  128.866453] libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
May 17 14:12:48 ceph-client kernel: [  128.867313] rbd: loaded rbd (rados block device)
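If the map ever half-succeeds, this is what I'd use to inspect and clean up
any mapping (no device actually shows up for me; /dev/rbd0 is just the name
the first mapping would get):
$ rbd showmapped             # lists images currently mapped by the kernel client
$ sudo rbd unmap /dev/rbd0   # removes a stale mapping, if one exists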
*ceph-node1 node info (node2 and node3 are similar):*
$ uname -r
3.2.0-23-generic
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04 LTS
Release: 12.04
Codename: precise
$ ceph status
    cluster df4f503a-04a9-4572-96d3-e31218592cfa
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-node1=192.168.56.102:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e60: 3 osds: 3 up, 3 in
     pgmap v1168: 192 pgs, 3 pools, 1373 bytes data, 4 objects
           22739 MB used, 202 GB / 236 GB avail
                192 active+clean
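One thing I suspect, since the client kernel (3.6) is much older than the
cluster software: the CRUSH tunables may be set to a profile the old kernel
client can't decode, which I understand can make the map hang. If so,
something like this should fall back to the oldest profile (please correct me
if that's the wrong knob):
$ ceph osd crush show-tunables    # prints the tunables the cluster currently uses
$ ceph osd crush tunables legacy  # reverts to the oldest, most compatible profile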