Discussion:
[ceph-users] New OSD with weight 0, rebalance still happen...
Marco Gaiarin
2018-11-22 08:22:03 UTC
Ceph still surprises me: whenever I'm sure I've fully understood it,
something 'strange' (to my knowledge) happens.


I need to move a server out of my Ceph Hammer cluster (3 nodes, 4 OSDs
per node), and for various reasons I cannot simply move the disks.
So I've added a new node, and yesterday I set up the 4 new OSDs.
My plan was to add the 4 new OSDs with weight 0, and then slowly lower
the weight of the old OSDs while increasing the weight of the new ones.

Beforehand I ran:

ceph osd set noin

and then added the OSDs, and (as expected) the new OSDs started with weight 0.

But despite the weight being zero, a rebalance happened, and the
percentage of rebalanced data was 'weighted' to the size of the new disk
(e.g. I had about 18TB of space, I added a 2TB disk, and roughly 10% of
the data started to rebalance).


Why? Thanks.

--
dott. Marco Gaiarin GNUPG Key ID: 240A3D66
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN)
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA!
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA)
Paweł Sadowsk
2018-11-22 08:26:41 UTC
Hi Marco,

On 11/22/18 9:22 AM, Marco Gaiarin wrote:
>
> ...
> But despite the weight being zero, a rebalance happened, and the
> percentage of rebalanced data was 'weighted' to the size of the new disk
> (e.g. I had about 18TB of space, I added a 2TB disk, and roughly 10% of
> the data started to rebalance).
>
>
> Why? Thanks.
>

We have made similar changes many times and it always behaved as expected.
Can you show your crushmap / 'ceph osd tree' output?

--
PS
Marco Gaiarin
2018-11-22 11:05:12 UTC
Mandi! Paweł Sadowsk
In that message you wrote...

> We have made similar changes many times and it always behaved as expected.

Ok. Good.

> Can you show your crushmap / 'ceph osd tree' output?

Sure!

***@blackpanther:~# ceph osd tree
ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.83984 root default
-2  5.45996     host capitanamerica
 0  1.81999         osd.0                up  1.00000          1.00000
 1  1.81999         osd.1                up  1.00000          1.00000
10  0.90999         osd.10               up  1.00000          1.00000
11  0.90999         osd.11               up  1.00000          1.00000
-3  5.45996     host vedovanera
 2  1.81999         osd.2                up  1.00000          1.00000
 3  1.81999         osd.3                up  1.00000          1.00000
 4  0.90999         osd.4                up  1.00000          1.00000
 5  0.90999         osd.5                up  1.00000          1.00000
-4  5.45996     host deadpool
 6  1.81999         osd.6                up  1.00000          1.00000
 7  1.81999         osd.7                up  1.00000          1.00000
 8  0.90999         osd.8                up  1.00000          1.00000
 9  0.90999         osd.9                up  1.00000          1.00000
-5  5.45996     host blackpanther
12  1.81999         osd.12               up  0.04999          1.00000
13  1.81999         osd.13               up  0.04999          1.00000
14  0.90999         osd.14               up  0.04999          1.00000
15  0.90999         osd.15               up  0.04999          1.00000

OSDs 12-15 are the new ones; after creating them with 'noin' I
reweighted them to '0.05' (as a test).


Crush map attached. Thanks.

Jarek
2018-11-22 11:22:43 UTC
On Thu, 22 Nov 2018 12:05:12 +0100
Marco Gaiarin <***@sv.lnf.it> wrote:

> Mandi! Paweł Sadowsk
> In that message you wrote...
>
> > We have made similar changes many times and it always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap / 'ceph osd tree' output?
>
> Sure!
>
> ***@blackpanther:~# ceph osd tree
> ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 21.83984 root default
> -2  5.45996     host capitanamerica
>  0  1.81999         osd.0                up  1.00000          1.00000
>  1  1.81999         osd.1                up  1.00000          1.00000
> 10  0.90999         osd.10               up  1.00000          1.00000
> 11  0.90999         osd.11               up  1.00000          1.00000
> -3  5.45996     host vedovanera
>  2  1.81999         osd.2                up  1.00000          1.00000
>  3  1.81999         osd.3                up  1.00000          1.00000
>  4  0.90999         osd.4                up  1.00000          1.00000
>  5  0.90999         osd.5                up  1.00000          1.00000
> -4  5.45996     host deadpool
>  6  1.81999         osd.6                up  1.00000          1.00000
>  7  1.81999         osd.7                up  1.00000          1.00000
>  8  0.90999         osd.8                up  1.00000          1.00000
>  9  0.90999         osd.9                up  1.00000          1.00000
> -5  5.45996     host blackpanther
> 12  1.81999         osd.12               up  0.04999          1.00000
> 13  1.81999         osd.13               up  0.04999          1.00000
> 14  0.90999         osd.14               up  0.04999          1.00000
> 15  0.90999         osd.15               up  0.04999          1.00000
>
> OSDs 12-15 are the new ones; after creating them with 'noin' I
> reweighted them to '0.05' (as a test).
>
>
> Crush map attached. Thanks.

When an OSD is added, even with the noin flag, the CRUSH weight of the
host bucket changes, which triggers a rebalance.
Instead of the noin flag, set 'osd crush initial weight = 0' in
ceph.conf.
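
A minimal sketch of that approach (untested here; the osd id and the step
values below are only examples):

# ceph.conf on the OSD host(s), set before the new OSDs are created
[osd]
osd crush initial weight = 0

# the new OSDs then show up in the tree with CRUSH weight 0, so no PGs map
# to them; afterwards raise the CRUSH weight in small steps, waiting for
# HEALTH_OK between steps, e.g.
ceph osd crush reweight osd.12 0.2
ceph osd crush reweight osd.12 0.5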

--
Pozdrawiam
Jarosław Mociak - Nettelekom GK Sp. z o.o.
Zongyou Yao
2018-11-22 11:50:11 UTC
The reason for the rebalance is that you are using the straw algorithm. If you switch to straw2, changing one item's weight only moves data to or from that item, not between the other items.
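
One way to make that switch, if you go that route (a sketch; test the
modified map with crushtool before injecting it, and note that straw2
requires CRUSH_V4 support on every client, e.g. reasonably recent kernels
for krbd/kcephfs):

ceph osd getcrushmap -o cm.bin
crushtool -d cm.bin -o cm.txt
# edit cm.txt and change 'alg straw' to 'alg straw2' in each bucket
crushtool -c cm.txt -o cm.straw2.bin
crushtool -i cm.straw2.bin --test --show-bad-mappings   # optional sanity check
ceph osd setcrushmap -i cm.straw2.bin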

________________________________________
From: ceph-users <ceph-users-***@lists.ceph.com> on behalf of Jarek <***@nettelekom.pl>
Sent: Thursday, November 22, 2018 19:22
To: Marco Gaiarin
Cc: ceph-***@lists.ceph.com
Subject: Re: [ceph-users] New OSD with weight 0, rebalance still happen...

On Thu, 22 Nov 2018 12:05:12 +0100
Marco Gaiarin <***@sv.lnf.it> wrote:

> Mandi! Paweł Sadowsk
> In that message you wrote...
>
> > We have made similar changes many times and it always behaved as
> > expected.
>
> Ok. Good.
>
> > Can you show your crushmap / 'ceph osd tree' output?
>
> Sure!
>
> ***@blackpanther:~# ceph osd tree
> ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 21.83984 root default
> -2  5.45996     host capitanamerica
>  0  1.81999         osd.0                up  1.00000          1.00000
>  1  1.81999         osd.1                up  1.00000          1.00000
> 10  0.90999         osd.10               up  1.00000          1.00000
> 11  0.90999         osd.11               up  1.00000          1.00000
> -3  5.45996     host vedovanera
>  2  1.81999         osd.2                up  1.00000          1.00000
>  3  1.81999         osd.3                up  1.00000          1.00000
>  4  0.90999         osd.4                up  1.00000          1.00000
>  5  0.90999         osd.5                up  1.00000          1.00000
> -4  5.45996     host deadpool
>  6  1.81999         osd.6                up  1.00000          1.00000
>  7  1.81999         osd.7                up  1.00000          1.00000
>  8  0.90999         osd.8                up  1.00000          1.00000
>  9  0.90999         osd.9                up  1.00000          1.00000
> -5  5.45996     host blackpanther
> 12  1.81999         osd.12               up  0.04999          1.00000
> 13  1.81999         osd.13               up  0.04999          1.00000
> 14  0.90999         osd.14               up  0.04999          1.00000
> 15  0.90999         osd.15               up  0.04999          1.00000
>
> OSDs 12-15 are the new ones; after creating them with 'noin' I
> reweighted them to '0.05' (as a test).
>
>
> Crush map attached. Thanks.

When an OSD is added, even with the noin flag, the CRUSH weight of the
host bucket changes, which triggers a rebalance.
Instead of the noin flag, set 'osd crush initial weight = 0' in
ceph.conf.

--
Pozdrawiam
Jarosław Mociak - Nettelekom GK Sp. z o.o.
Marco Gaiarin
2018-11-22 13:39:15 UTC
Mandi! Zongyou Yao
In that message you wrote...

> The reason for the rebalance is that you are using the straw algorithm. If you
> switch to straw2, changing one item's weight only moves data to or from that
> item, not between the other items.

I'm still on Hammer, and looking at:

http://docs.ceph.com/docs/hammer/rados/operations/crush-map/

it seems there's no 'straw2' there...

Paweł Sadowsk
2018-11-22 14:23:17 UTC
On 11/22/18 12:22 PM, Jarek wrote:
> On Thu, 22 Nov 2018 12:05:12 +0100
> Marco Gaiarin <***@sv.lnf.it> wrote:
>
>> Mandi! Paweł Sadowsk
>> In that message you wrote...
>>
>>> We have made similar changes many times and it always behaved as
>>> expected.
>>
>> Ok. Good.
>>
>>> Can you show your crushmap / 'ceph osd tree' output?
>>
>> Sure!
>>
>> ***@blackpanther:~# ceph osd tree
>> ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
>> -1 21.83984 root default
>> -2  5.45996     host capitanamerica
>>  0  1.81999         osd.0                up  1.00000          1.00000
>>  1  1.81999         osd.1                up  1.00000          1.00000
>> 10  0.90999         osd.10               up  1.00000          1.00000
>> 11  0.90999         osd.11               up  1.00000          1.00000
>> -3  5.45996     host vedovanera
>>  2  1.81999         osd.2                up  1.00000          1.00000
>>  3  1.81999         osd.3                up  1.00000          1.00000
>>  4  0.90999         osd.4                up  1.00000          1.00000
>>  5  0.90999         osd.5                up  1.00000          1.00000
>> -4  5.45996     host deadpool
>>  6  1.81999         osd.6                up  1.00000          1.00000
>>  7  1.81999         osd.7                up  1.00000          1.00000
>>  8  0.90999         osd.8                up  1.00000          1.00000
>>  9  0.90999         osd.9                up  1.00000          1.00000
>> -5  5.45996     host blackpanther
>> 12  1.81999         osd.12               up  0.04999          1.00000
>> 13  1.81999         osd.13               up  0.04999          1.00000
>> 14  0.90999         osd.14               up  0.04999          1.00000
>> 15  0.90999         osd.15               up  0.04999          1.00000
>>
>> OSDs 12-15 are the new ones; after creating them with 'noin' I
>> reweighted them to '0.05' (as a test).
>>
>>
>> Crush map attached. Thanks.
>
> When an OSD is added, even with the noin flag, the CRUSH weight of the
> host bucket changes, which triggers a rebalance.
> Instead of the noin flag, set 'osd crush initial weight = 0' in
> ceph.conf.

Exactly, your 'new' OSDs have CRUSH weights of 1.81999 (osd.12, osd.13) and
0.90999 (osd.14, osd.15). As Jarek pointed out, you should add them using

'osd crush initial weight = 0'

and then use

'ceph osd crush reweight osd.x 0.05'

to slowly increase the weight on them.

From your osd tree it looks like you used 'ceph osd reweight' instead.
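
To illustrate the difference (osd.12 here is just an example id):

# changes the CRUSH weight, i.e. the WEIGHT column in 'ceph osd tree';
# this is the one that should start at 0 and be raised step by step
ceph osd crush reweight osd.12 0.05

# changes only the 0-1 override shown in the REWEIGHT column; the host
# keeps its full CRUSH weight, so data is shuffled among its other OSDs
ceph osd reweight 12 0.05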

--
PS
Marco Gaiarin
2018-11-22 17:12:06 UTC
Mandi! Paweł Sadowsk
In that message you wrote...

> From your osd tree it looks like you used 'ceph osd reweight'.

Yes, and I thought I was doing the right thing, too!

Now I've tried to lower the weight of one of the to-be-decommissioned OSDs, using:
ceph osd reweight 2 0.95

leading to an OSD tree like:

***@blackpanther:~# ceph osd tree
ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.83984 root default
-2  5.45996     host capitanamerica
 0  1.81999         osd.0                up  1.00000          1.00000
 1  1.81999         osd.1                up  1.00000          1.00000
10  0.90999         osd.10               up  1.00000          1.00000
11  0.90999         osd.11               up  1.00000          1.00000
-3  5.45996     host vedovanera
 2  1.81999         osd.2                up  0.95000          1.00000
 3  1.81999         osd.3                up  1.00000          1.00000
 4  0.90999         osd.4                up  1.00000          1.00000
 5  0.90999         osd.5                up  1.00000          1.00000
-4  5.45996     host deadpool
 6  1.81999         osd.6                up  1.00000          1.00000
 7  1.81999         osd.7                up  1.00000          1.00000
 8  0.90999         osd.8                up  1.00000          1.00000
 9  0.90999         osd.9                up  1.00000          1.00000
-5  5.45996     host blackpanther
12  1.81999         osd.12               up  0.04999          1.00000
13  1.81999         osd.13               up  0.04999          1.00000
14  0.90999         osd.14               up  0.04999          1.00000
15  0.90999         osd.15               up  0.04999          1.00000

and, after rebalancing, to:

***@blackpanther:~# ceph -s
cluster 8794c124-c2ec-4e81-8631-742992159bd6
health HEALTH_WARN
6 pgs stuck unclean
recovery 4/2550363 objects degraded (0.000%)
recovery 11282/2550363 objects misplaced (0.442%)
monmap e6: 6 mons at {0=10.27.251.7:6789/0,1=10.27.251.8:6789/0,2=10.27.251.11:6789/0,3=10.27.251.12:6789/0,4=10.27.251.9:6789/0,blackpanther=10.27.251.2:6789/0}
election epoch 2750, quorum 0,1,2,3,4,5 blackpanther,0,1,4,2,3
osdmap e7300: 16 osds: 16 up, 16 in; 6 remapped pgs
pgmap v54737590: 768 pgs, 3 pools, 3299 GB data, 830 kobjects
9870 GB used, 12474 GB / 22344 GB avail
4/2550363 objects degraded (0.000%)
11282/2550363 objects misplaced (0.442%)
761 active+clean
6 active+remapped
1 active+clean+scrubbing
client io 13476 B/s rd, 654 kB/s wr, 95 op/s

Why are there PGs stuck in the 'unclean' state?
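
The PGs involved can be listed with the usual commands, for example:

ceph health detail
ceph pg dump_stuck unclean
ceph pg <pgid> query     # <pgid> taken from the output of the previous command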

Paweł Sadowski
2018-11-22 21:09:34 UTC
On 11/22/18 6:12 PM, Marco Gaiarin wrote:
> Mandi! Paweł Sadowsk
> In that message you wrote...
>
>> From your osd tree it looks like you used 'ceph osd reweight'.
> Yes, and I thought I was doing the right thing, too!
>
> Now I've tried to lower the weight of one of the to-be-decommissioned OSDs, using:
> ceph osd reweight 2 0.95
>
> leading to an osd map tree like:
>
> ***@blackpanther:~# ceph osd tree
> ID WEIGHT   TYPE NAME               UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 21.83984 root default
> -2  5.45996     host capitanamerica
>  0  1.81999         osd.0                up  1.00000          1.00000
>  1  1.81999         osd.1                up  1.00000          1.00000
> 10  0.90999         osd.10               up  1.00000          1.00000
> 11  0.90999         osd.11               up  1.00000          1.00000
> -3  5.45996     host vedovanera
>  2  1.81999         osd.2                up  0.95000          1.00000
>  3  1.81999         osd.3                up  1.00000          1.00000
>  4  0.90999         osd.4                up  1.00000          1.00000
>  5  0.90999         osd.5                up  1.00000          1.00000
> -4  5.45996     host deadpool
>  6  1.81999         osd.6                up  1.00000          1.00000
>  7  1.81999         osd.7                up  1.00000          1.00000
>  8  0.90999         osd.8                up  1.00000          1.00000
>  9  0.90999         osd.9                up  1.00000          1.00000
> -5  5.45996     host blackpanther
> 12  1.81999         osd.12               up  0.04999          1.00000
> 13  1.81999         osd.13               up  0.04999          1.00000
> 14  0.90999         osd.14               up  0.04999          1.00000
> 15  0.90999         osd.15               up  0.04999          1.00000
>
> and, after rebalancing, to:
>
> ***@blackpanther:~# ceph -s
> cluster 8794c124-c2ec-4e81-8631-742992159bd6
> health HEALTH_WARN
> 6 pgs stuck unclean
> recovery 4/2550363 objects degraded (0.000%)
> recovery 11282/2550363 objects misplaced (0.442%)
> monmap e6: 6 mons at {0=10.27.251.7:6789/0,1=10.27.251.8:6789/0,2=10.27.251.11:6789/0,3=10.27.251.12:6789/0,4=10.27.251.9:6789/0,blackpanther=10.27.251.2:6789/0}
> election epoch 2750, quorum 0,1,2,3,4,5 blackpanther,0,1,4,2,3
> osdmap e7300: 16 osds: 16 up, 16 in; 6 remapped pgs
> pgmap v54737590: 768 pgs, 3 pools, 3299 GB data, 830 kobjects
> 9870 GB used, 12474 GB / 22344 GB avail
> 4/2550363 objects degraded (0.000%)
> 11282/2550363 objects misplaced (0.442%)
> 761 active+clean
> 6 active+remapped
> 1 active+clean+scrubbing
> client io 13476 B/s rd, 654 kB/s wr, 95 op/s
>
> Why are there PGs stuck in the 'unclean' state?
>
This is most probably due to the big difference in weights between your
hosts (the new one has a 20x lower weight than the old ones), which in
combination with the straw algorithm is a 'known' issue. You could try to
increase *choose_total_tries* in your crush map from 50 to some bigger
number. The best option IMO would be to switch to straw2 (which will cause
some rebalance) and then use 'ceph osd crush reweight' (instead of 'ceph osd
reweight') in small steps to slowly rebalance data onto the new OSDs.
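
For the tunable, a sketch of the change (100 is just an example value; the
map is edited the same way as for the straw2 switch mentioned earlier in the
thread: getcrushmap, 'crushtool -d', edit, 'crushtool -c', setcrushmap):

# in the decompiled crush map, change
tunable choose_total_tries 50
# to something like
tunable choose_total_tries 100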

--
PS
Marco Gaiarin
2018-11-23 10:12:18 UTC
Mandi! Paweł Sadowski
In that message you wrote...

> This is most probably due to the big difference in weights between your hosts (the
> new one has a 20x lower weight than the old ones), which in combination with the
> straw algorithm is a 'known' issue.

Ok. I've reweighted that disk back to '1' and the status went back to
HEALTH_OK.


> You could try to increase choose_total_tries in
> your crush map from 50 to some bigger number. The best IMO would be to use
> straw2 (which will cause some rebalance) and then use 'ceph osd crush reweight'
> (instead of 'ceph osd reweight') with small steps to slowly rebalance data onto
> new OSDs.

For now I'm bringing the new disks in with 'ceph osd reweight'; probably
when I'm at 50% on the new disks I'll start to use 'ceph osd crush reweight'
against the old ones.

Thanks.

Marco Gaiarin
2018-11-23 10:07:53 UTC
Mandi! Paweł Sadowsk
In that message you wrote...

> Exactly, your 'new' OSDs have CRUSH weights of 1.81999 (osd.12, osd.13) and
> 0.90999 (osd.14, osd.15). As Jarek pointed out, you should add them using
> 'osd crush initial weight = 0'
> and then use
> 'ceph osd crush reweight osd.x 0.05'
> to slowly increase the weight on them.
> From your osd tree it looks like you used 'ceph osd reweight' instead.

Reading the Ceph docs led me to think that 'ceph osd reweight' and 'ceph osd
crush reweight' were roughly the same thing: the first is effectively
'temporary' and expressed as a percentage (0-1), while the second is
'permanent' and normally expressed in disk terabytes.

You are saying that instead the first modifies only the disk occupation,
while only the latter alters the CRUSH map.

Right?


Is this true only for the 'straw' algorithm, or is it general? Thanks.

Janne Johansson
2018-11-23 11:48:18 UTC
Den fre 23 nov. 2018 kl 11:08 skrev Marco Gaiarin <***@sv.lnf.it>:

> Reading the Ceph docs led me to think that 'ceph osd reweight' and 'ceph osd
> crush reweight' were roughly the same thing: the first is effectively
> 'temporary' and expressed as a percentage (0-1), while the second is
> 'permanent' and normally expressed in disk terabytes.
>
> You are saying that instead the first modifies only the disk occupation,
> while only the latter alters the CRUSH map.

The CRUSH weight tells the cluster how much this disk adds to the capacity
of the host it is attached to; the OSD weight says (from 0 to 1) how much
of that advertised size it actually wants to receive/handle.
If you add CRUSH weight, data will flow to the node, but if the disk has a
low OSD weight, the other OSDs on the host will have to bear the extra
data. So starting out with 0 for the CRUSH weight and 1.0 for the OSD
weight is fine: it will not cause data movement until you start (slowly,
perhaps) adding to the CRUSH weight until it matches the size of the disk.
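
So the gradual move could look something like this (a sketch; osd.12 and the
step sizes are only examples, and the pacing is a matter of taste):

# new OSD created with CRUSH weight 0 and OSD (reweight) weight 1.0
ceph osd crush reweight osd.12 0.5
# wait for the cluster to settle back to HEALTH_OK, then continue
ceph osd crush reweight osd.12 1.0
ceph osd crush reweight osd.12 1.81999   # final value, roughly the disk size in TB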

--
May the most significant bit of your life be positive.
Matthew H
2018-11-23 16:04:36 UTC
Greetings,

You need to set the following configuration option under [osd] in your ceph.conf file for your new OSDs.

[osd]
osd_crush_initial_weight = 0

This will ensure your new OSDs come up with a CRUSH weight of 0, thus preventing the automatic rebalance that you are seeing.
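
To double-check that a (re)started OSD actually picked the option up, the
admin socket on the OSD's host can be queried, and the tree should then show
the new OSDs with WEIGHT 0 (osd.12 is just an example id):

ceph daemon osd.12 config get osd_crush_initial_weight
ceph osd tree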

Good luck,

________________________________
From: ceph-users <ceph-users-***@lists.ceph.com> on behalf of Marco Gaiarin <***@sv.lnf.it>
Sent: Thursday, November 22, 2018 3:22 AM
To: ceph-***@ceph.com
Subject: [ceph-users] New OSD with weight 0, rebalance still happen...


Ceph still surprises me: whenever I'm sure I've fully understood it,
something 'strange' (to my knowledge) happens.


I need to move a server out of my Ceph Hammer cluster (3 nodes, 4 OSDs
per node), and for various reasons I cannot simply move the disks.
So I've added a new node, and yesterday I set up the 4 new OSDs.
My plan was to add the 4 new OSDs with weight 0, and then slowly lower
the weight of the old OSDs while increasing the weight of the new ones.

Beforehand I ran:

ceph osd set noin

and then added the OSDs, and (as expected) the new OSDs started with weight 0.

But despite the weight being zero, a rebalance happened, and the
percentage of rebalanced data was 'weighted' to the size of the new disk
(e.g. I had about 18TB of space, I added a 2TB disk, and roughly 10% of
the data started to rebalance).


Why? Thanks.
