[ceph-users] rbd bench error
ST Wong (ITSC)
2018-11-15 12:09:55 UTC
Hi,

We're trying to test RBD on a small Ceph cluster running on VMs (8 OSDs, 3 mon+mgr), using rbd bench on two RBD images from two pools with different replication settings:

For pool 4copy:

---------------------------
rule 4copy_rule {
        id 1
        type replicated
        min_size 2
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}

For pool 2copy:

rule 2copy_rule {
        id 2
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 1 type host
        step emit
}
---------------------------
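
For completeness, the pools and test images were set up roughly along these lines (a sketch; the PG counts, image sizes and the 2copy image name below are placeholders, not our exact values):

---------------------------
# bind each pool to its CRUSH rule and set the replica count
ceph osd pool create 4copy 64 64 replicated 4copy_rule
ceph osd pool set 4copy size 4
ceph osd pool create 2copy 64 64 replicated 2copy_rule
ceph osd pool set 2copy size 2

# initialise the pools for RBD and create the test images
rbd pool init 4copy
rbd pool init 2copy
rbd create --size 10G 4copy/foo
rbd create --size 10G 2copy/foo
---------------------------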

rbd bench completed normally on the 2copy pool's image, but we got an error on the 4copy pool's image:

------------- cut here ------------
# rbd bench --io-type rw --io-total 1073741824 4copy/foo
bench type readwrite read:write=50:50 io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
SEC OPS OPS/SEC BYTES/SEC
1 5232 5311.74 21756891.99

[snipped]

33 27584 531.19 2175742.58
34 27920 560.05 2293957.89
35 28272 462.05 1892551.92
36 28624 359.65 1473143.60
37 28672 319.51 1308725.83
38 28736 227.08 930138.44
39 28800 175.16 717452.27
70 28832 15.88 65061.98
2018-11-12 14:40:36.182 7f5893fff700 -1 librbd::ImageWatcher: 0x7f5880002f40 image watch failed: 140018215563856, (107) Transport endpoint is not connected
2018-11-12 14:40:36.182 7f5893fff700 -1 librbd::Watcher: 0x7f5880002f40 handle_error: handle=140018215563856: (107) Transport endpoint is not connected
74 28848 5.85 23980.14
75 28864 5.04 20646.14
76 28896 4.20 17193.38
77 28944 3.79 15509.03
78 31984 413.70 1694525.89
79 36880 1809.82 7413040.55
80 38000 1980.06 8110329.05
------------- cut here ------------
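
Would it make sense, when it stalls again, to check the image watcher and cluster state with something like the following? (Just the standard CLI calls; nothing below is output from this run.)

---------------------------
# list current watchers on the image (the failed watch should show up here)
rbd status 4copy/foo

# overall cluster health and per-OSD latencies during the stall
ceph -s
ceph health detail
ceph osd perf
---------------------------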

Could this be a performance issue, since it's running on VMs?
We'd also like to know how to get more information for troubleshooting.
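
For example, would enabling client-side debug logging for the next run, roughly like this, give useful detail? (A sketch; the log file path is just an example, and the rados bench is only meant to rule out librbd.)

---------------------------
# in ceph.conf on the client running rbd bench
[client]
debug rbd = 20
debug ms = 1
log file = /var/log/ceph/client.$pid.log

# plain RADOS-level benchmark on the same pool, to separate librbd from RADOS
rados bench -p 4copy 60 write
---------------------------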

Thanks a lot.
Best Regards,
/stwong
